
Google Advocates 7-Day Deadline For Vulnerability Disclosure

timothy posted about a year ago | from the on-the-7th-day-you-can-rest dept.

Security 94

Trailrunner7 writes "Two security engineers for Google say the company will now support researchers publicizing details of critical vulnerabilities under active exploitation just seven days after they've alerted a company. That new grace period leaves vendors dramatically less time to create and test a patch than the previously recommended 60-day disclosure deadline for the most serious security flaws. The goal, write Chris Evans and Drew Hintz, is to prompt vendors to more quickly seal, or at least publicly react to, critical vulnerabilities and reduce the number of attacks that proliferate because of unprotected software."

94 comments


ulterior motive? (0)

jazzis (612421) | about a year ago | (#43860779)

quick and dirty...

And when they get bitten in the ass? (0)

Anonymous Coward | about a year ago | (#43860811)

Not sure how coding works at something the scale of Google, but programmers are people: they go on vacation, attend funerals, get fired, get hired and are freshly acquainted with their jobs too.

Will Google be as supportive of this policy after the first time some major bug hits one of their more minor products and the guy who knows all about it is gone wherever that week?

Re:And when they get bitten in the ass? (4, Informative)

h4rr4r (612664) | about a year ago | (#43860841)

Why is there only one guy?

How incompetent is the management of an organization that does not have enough coverage to deal with those issues?

Re:And when they get bitten in the ass? (1)

gnick (1211984) | about a year ago | (#43860889)

Hewlett-Packard started with only two...

Re:And when they get bitten in the ass? (3, Funny)

anthony_greer (2623521) | about a year ago | (#43860959)

What we call incompetent, newly minted MBA drones call efficiency optimization.

Re:And when they get bitten in the ass? (1)

DickBreath (207180) | about a year ago | (#43863421)

> What we call incompetent, newly minted MBA drones call efficiency optimization.

New and old MBA drones call this bonuses. Look, I did something! I reduced headcount of people who understand our critical systems to only one!

Re:And when they get bitten in the ass? (1)

CaptainJeff (731782) | about a year ago | (#43871179)

One of the key concepts taught in *any* decent MBA program is risk management. For a software development company, having more than one person available to make emergency fixes to code is much cheaper than the cost of not being able to deploy a fix in a reasonable amount of time, so any decent MBA graduate will make sure that there is always a backup person available for this purpose.

Re:And when they get bitten in the ass? (0)

Anonymous Coward | about a year ago | (#43861541)

Must be nice to work for a small company with only one product.

Re:And when they get bitten in the ass? (0)

Anonymous Coward | about a year ago | (#43863021)

There are very small companies out there, with just one or a handful of developers. Management shouldn't need to hire a person not normally needed, just because some researcher can't wait another week or two for the critical person to get home from that vacation trip the poor guy planned months ago. 30 or 60 days isn't an unreasonable amount of time to wait. 7 days is just ridiculous in many instances.

Re:And when they get bitten in the ass? (2)

h4rr4r (612664) | about a year ago | (#43864019)

I disagree.
What would they do if the one dev died?
Then likely even 60 days would not be enough to get his replacement up to speed.

Any company that has employees it cannot lose deserves this.

Re:And when they get bitten in the ass? (5, Informative)

denpun (1607487) | about a year ago | (#43860965)

Seems like they're recommending it only for "critical vulnerabilities under active exploitation". For vulnerabilities where exploits increase as each day passes because of non-disclosure, I would want quick notification.

FTA and not quite in the summary:

“Our standing recommendation is that companies should fix critical vulnerabilities within 60 days — or, if a fix is not possible, they should notify the public about the risk and offer workarounds,” the two said in a blog post today. “We encourage researchers to publish their findings if reported issues will take longer to patch. Based on our experience, however, we believe that more urgent action — within seven days — is appropriate for critical vulnerabilities under active exploitation. The reason for this special designation is that each day an actively exploited vulnerability remains undisclosed to the public and unpatched, more computers will be compromised.”

Re:And when they get bitten in the ass? (4, Interesting)

fuzzyfuzzyfungus (1223518) | about a year ago | (#43861249)

Seems like they're recommending it only for "critical vulnerabilities under active exploitation".

Honestly, I'm a bit surprised that they offer even seven days of cover for vulnerabilities with detected exploits. I can certainly see the wisdom of the "Please, don't release 'proof of concept exploit toolkit, not for use for evil' ten minutes after emailing the vendor about the problem..." appeal; but I'd be inclined to report the discovery of an active exploit immediately, as being a noteworthy event in itself.

Re:And when they get bitten in the ass? (0)

Anonymous Coward | about a year ago | (#43862887)

I think the reality is that it's not always easy to fix the exploit, validate the fix, and check the impact on the rest of the code within 7 days. Depending on the release model a company uses, it can be even worse. Add 3rd-party code to that equation and you are now screwed.

Re:And when they get bitten in the ass? (1)

fuzzyfuzzyfungus (1223518) | about a year ago | (#43863575)

Don't get me wrong, I agree that they are screwed, it's just that the 7-day window is when black-hats are already known to be using the bug. Under those circumstances, you would be screwed no matter what: the 'disclosure' has already happened among the people who are interested in using it for evil. The only value in a delay by the 'responsible' parties is that it reduces the apparent lateness of your fix.

Re:And when they get bitten in the ass? (1)

phantomfive (622387) | about a year ago | (#43864339)

Exactly. If it's under active exploitation, then you need to let people know about it immediately so they can defend themselves.

Delaying disclosure in that situation does no one any favors, except evil exploiters (including governments).

Re:And when they get bitten in the ass? (4, Insightful)

fuzzyfuzzyfungus (1223518) | about a year ago | (#43861217)

The big kicker is "under active exploitation". If no exploits are known in the wild, it's still necessary to light a fire under the vendor's ass (you can't assume that the flaw isn't just sitting in somebody's high-value zero-day arsenal, or that it won't be discovered and exploited in the future); but there is a real argument in favor of trying to work with the vendor to get a proper fix in place before releasing the details, and more or less assuring that every dumb script kiddie can implement the attack if they want.

If something is already 'under active exploitation', though, the cat is already out of the bag, and the choice isn't really in your hands anymore. The clock already started ticking. Whether you like it or not, every hour it goes unfixed is more room for more attacks. Keeping quiet about it harms the ability of end users to take protective action, and really only helps the vendor save face, which isn't a terribly valuable feature.

Now, I don't doubt that Google's 'webapps and silent autoupdaters' style gives them a certain self-interested enthusiasm (compared to vendors who cater to much more sedate patch cycles) for fast disclosure; but, again, 'under active exploitation' is the phrase that makes their position (however self-interested) merely realistic. If you know that team black hat already knows about it, you don't really get to choose when it is disclosed, since that has already happened. You only get to choose how slow you make the vendor look.

Re:And when they get bitten in the ass? (2)

TemporalBeing (803363) | about a year ago | (#43864473)

The big kicker is "under active exploitation". If no exploits are known in the wild, it's still necessary to light a fire under the vendor's ass (you can't assume that the flaw isn't just sitting in somebody's high-value zero-day arsenal, or that it won't be discovered and exploited in the future); but there is a real argument in favor of trying to work with the vendor to get a proper fix in place before releasing the details, and more or less assuring that every dumb script kiddie can implement the attack if they want.

And yet Microsoft's policy is that unless it is "under active exploitation" they won't necessarily fix it. They get lots of notices about potential exploits, but don't fix them, even likely high targets, until someone exploits them - which, by then, is really too late.

Re:And when they get bitten in the ass? (0)

Anonymous Coward | about a year ago | (#43867827)

Not sure I completely agree.

The big thing is that, even if an exploit is being used in the wild, that says little about how widely known or used that exploit is in the black hat community. A zero-day exploit has some intrinsic value to the black hat discoverer. They may sell it, give it away, or keep it for themselves to use. The longer the vendors do not patch it, or better yet are unaware of the exploit, the better for the black hat. Wider dispersal of the exploit raises its profile and will lead to patching and closure of the hole.

My gut feeling is that 7 days is too short. Plenty of enterprise vendors will struggle to meet such a timeline, regardless of any other factors. Maybe 30 days might be viable.

Re:And when they get bitten in the ass? (1)

LordThyGod (1465887) | about a year ago | (#43861615)

Not sure how coding works at something the scale of Google, but programmers are people: they go on vacation, attend funerals, get fired, get hired and are freshly acquainted with their jobs too.

Will Google be as supportive of this policy after the first time some major bug hits one of their more minor products and the guy who knows all about it is gone wherever that week?

Huh?

Sounds like a huge risk (4, Insightful)

anthony_greer (2623521) | about a year ago | (#43860821)

What if a bug can't be fixed and systems patched in 7 days' time? Are they going to cut corners on something like testing?

Going from a bug report, to designing and coding a fix, to testing, to rolling it out to the infrastructure in 5 working days seems like an impossible benchmark to sustain, even with the super brainiacs working at Google.

Re:Sounds like a huge risk (2)

maxwell demon (590494) | about a year ago | (#43860881)

Testing? Isn't that what the customers are for? :-)

Re:Sounds like a huge risk (1)

Anonymous Coward | about a year ago | (#43860949)

Exactly this. While the model may work for Google, which seems to keep everything in continuous beta and whose users generally aren't the ones paying the bill, for those who ship software, enterprise or otherwise, 7 days just isn't enough. While certainly some vendors (hello, Microsoft!) are abusive in terms of reasonable fix time, 7 days is far too short.

Re:Sounds like a huge risk (5, Informative)

Anonymous Coward | about a year ago | (#43861065)

We're talking about actively exploited critical vulnerabilities.
Fix the hole now! You can make it pretty later.

Re:Sounds like a huge risk (1)

RaceProUK (1137575) | about a year ago | (#43861259)

I don't know why, but that makes me think of Darth Stewie in Blue Harvest :)

Re:Sounds like a huge risk (1)

lightknight (213164) | about a year ago | (#43861633)

if(exploit) {return false;} else {return true;}

Re:Sounds like a huge risk (2)

Synerg1y (2169962) | about a year ago | (#43861673)

That's how you wind up with 5 more holes, no thanks.

Re:Sounds like a huge risk (1)

Anonymous Coward | about a year ago | (#43861895)

You wind up with 5 more holes, 0 of which are being actively exploited. Win.

Re:Sounds like a huge risk (1)

Synerg1y (2169962) | about a year ago | (#43862291)

In the long run it'll cost you A LOT more as they surface one by one.

Re:Sounds like a huge risk (0)

Anonymous Coward | about a year ago | (#43869093)

If your development process is that broken I want nothing to do with you. Doing the minimum necessary to fix a security bug is usually trivial.

It's the asshats who think it's okay to make a profit while allowing their customers to be compromised (costing the customers far more than it profits the asshat) that are the real problem. That destroys value and is the antithesis of win-win capitalism.

Re:Sounds like a huge risk (1)

Synerg1y (2169962) | about a year ago | (#43875733)

So thorough QC is a broken development process? ...Oh gawd, you don't actually work in IT, do you?

Re:Sounds like a huge risk (4, Funny)

LordThyGod (1465887) | about a year ago | (#43861681)

We're talking about actively exploited critical vulnerabilities. Fix the hole now! You can make it pretty later.

Yea, but I only do bugs once a month. On Tuesdays. I can't be bothered before then. Your problems may seem big, but I choose to do things my way, at my pace. Besides my inaction helps support a large secondary market for security appliances, IT support personnel and the like. We jeopardize an entire sector of the economy by undermining these people.

Re:Sounds like a huge risk (1)

atom1c (2868995) | about a year ago | (#43860951)

Like @Maxwell demon suggested, why stop at launching full-blown products in beta? Simply release their security patches in beta form as well!

Re:Sounds like a huge risk (2)

SJHillman (1966756) | about a year ago | (#43861061)

I think you have a bug that inserts random "@" symbols into your text. You have 7 days to fix this before I tell the world!

Re:Sounds like a huge risk (1)

atom1c (2868995) | about a year ago | (#43864135)

That remark should have been made as a private message; a public reply qualifies as public disclosure.

Re:Sounds like a huge risk (3, Insightful)

slashmydots (2189826) | about a year ago | (#43861009)

I'm a software programmer so I can honestly say if a company takes more than 7 days to issue a fix, they aren't good. Let's say there's a team of 20 programmers working on a huge piece of software like an ASP system on a website. If the 1-2 people responsible for the module hear about the problem like 4 days after it was reported, the boss seriously screwed up. That's a lack of communication in their company. A 30 minute delay for "there's cake in the breakroom" and 7+ day delay on "someone's hacking our website" means someone epically screwed up the importance of that e-mail getting relayed to the correct people.

If the programmers can't read their own damn code that they wrote and figure out why the vulnerability happened, they should be fired. They obviously don't know their own code and didn't use comments, or worse yet, they don't know what the commands they're using ACTUALLY do and that was the cause of the problem.

Then if it takes more than 7 days to "publish" or "push" a new version of their software live, then the whole project was designed like it's 15 years ago. These days, you need urgent patches faster than that. Let the programmers who wrote the code do the testing so there's zero delay and then don't require some know-nothing 60-year old head of the department review all code before it goes live.

Re:Sounds like a huge risk (2)

SJHillman (1966756) | about a year ago | (#43861139)

Are you taking into account testing time for software that may be used on thousands of different configurations? In my mind, that would account for the bulk of the time between notification of an exploit and release of a patch. Of course, this is only for critical exploits that are actively being used, so it's probably better to get out a fix that works for 60% of installs right away and then work on the patch that will work for 100% of installs.

Re:Sounds like a huge risk (3, Insightful)

HockeyPuck (141947) | about a year ago | (#43861235)

so it's probably better to get out a fix that works for 60% of installs right away and then work on the patch that will work for 100% of installs.

So you're willing to risk breaking 40% of your customers' installs? Are you willing to skip regression testing to make sure your fix breaks nothing else?

Re:Sounds like a huge risk (0)

Anonymous Coward | about a year ago | (#43861329)

If you have good unit tests it won't be anywhere near 40%. Automated testing is a lot faster than manual regression testing. Assuming of course you are talking about fixing something like Google.com or Java where the cost of failure is relatively low.
If you are fixing pacemakers, you really need to ask yourself why you put a webserver or browser in it to be compromised in the first place.
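
As an aside, the kind of automated check being described is cheap to sketch: the reported exploit input becomes a permanent test case, so the hole can't quietly reopen later. A minimal illustration in Python, with a hypothetical escape_html helper standing in for whatever code actually got fixed:

    # Hypothetical regression test pinning a security fix in place.
    import html
    import unittest

    def escape_html(user_input: str) -> str:
        """Render untrusted text safe for inclusion in an HTML page."""
        return html.escape(user_input, quote=True)

    class TestXssRegression(unittest.TestCase):
        def test_reported_exploit_payload_is_neutralized(self):
            payload = '<script>alert(1)</script>'
            self.assertNotIn('<script>', escape_html(payload))

        def test_normal_text_is_unchanged(self):
            self.assertEqual(escape_html('hello world'), 'hello world')

    if __name__ == '__main__':
        unittest.main()

Running the suite after every change keeps that check far cheaper than a manual regression pass.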

Re:Sounds like a huge risk (1)

TemporalBeing (803363) | about a year ago | (#43864559)

If you have good unit tests it won't be anywhere near 40%. Automated testing is a lot faster than manual regression testing. Assuming of course you are talking about fixing something like Google.com or Java where the cost of failure is relatively low. If you are fixing pacemakers, you really need to ask yourself why you put a webserver or browser in it to be compromised in the first place.

Even automated regression and unit testing takes time. Even on mid-size projects, it could easily be a few days just to run the automated testing suite in all the supported environments to guarantee you didn't break something. For a large project, it could be weeks or more.

Re:Sounds like a huge risk (1)

fast turtle (1118037) | about a year ago | (#43864441)

Ask Microsoft that question and you'll get a Hell Yes, since that's happened in just the last year. Remember the recent Patch Tuesday that borked lots of systems worldwide? I got caught by that one, and it was rated critical by MS (the highest rating they share). Went to reboot and got a BSOD, and yes, I was surprised because I normally don't get the updates that early.

Re:Sounds like a huge risk (1)

slashmydots (2189826) | about a year ago | (#43863729)

That barely applies in most real-world examples. Oops, special characters are allowed in an input field for social security numbers and are only filtered out after the duplicate check, so someone can falsely submit a duplicate SSN by adding a pound sign to the end and get verified for multiple accounts that all validated as real SSNs. Simple! Change the order of your code to check the literal text value in the field before filtering, or just run the filter sooner. That could not possibly break anyone's system just because they're running Vista or something. It's purely a logical, procedural fix that doesn't affect anything else. That kind of stuff doesn't need to be tested.

Now if your MP3 encoding DLL has a problem and you swap it out with a different manufacturer's DLL, you're asking for problems, because that's not a simple logical, procedural fix and who knows how that DLL will run on every configuration.

Plus, there's the huge fact that a temporary workaround that's extremely simple is usually much faster than actually fixing the problem, but just as effective at stopping the security issue. Say it's a big database problem: a ton of stored procedures need to be altered, a database patch to change the structure needs to be issued, and so on, but if you can put in a realtime keypress checker to make sure nobody is able to press the Z key while focused on a certain text field and that prevents the hack, do it. The majority of security problems are that simple to fix.
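
For what it's worth, the ordering fix described above looks roughly like this in Python; the names are invented, and the point is only the order of operations: validate and duplicate-check the literal submitted text before any filtering happens.

    import re

    # Hypothetical example: accept only a well-formed SSN, checked against the
    # raw submitted text, so "123-45-6789#" can no longer sneak past the
    # duplicate check and be "cleaned up" afterwards.
    SSN_PATTERN = re.compile(r'^\d{3}-\d{2}-\d{4}$')

    def register_ssn(raw_value: str, existing_ssns: set) -> bool:
        if not SSN_PATTERN.fullmatch(raw_value):
            return False          # reject special characters outright
        if raw_value in existing_ssns:
            return False          # genuine duplicate
        existing_ssns.add(raw_value)
        return True

As the comment says, a pure reordering like this adds no new dependencies, which is why it is far less risky than swapping out a DLL.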

Re:Sounds like a huge risk (0)

Anonymous Coward | about a year ago | (#43861247)

>If the programmers can't read their own damn code that they wrote and figure out why the vulnerability happened,
I'm working on a code base that is full of potential bugs. I didn't write it, but I get to fix it. Some of the stuff done is so obtuse that it can take days just to figure out where all the global values are being set from. Gogo matlab.

>If the programmers can't read their own damn code that they wrote and figure out why the vulnerability happened,
Because no one ever quits. Also, good programmers have EXACTLY the same skill set as good testers.

I'm going to go ahead and guess that you don't actually know anything about actual software production and are, at best, a code monkey who's never had a real job.

Re:Sounds like a huge risk (0)

Anonymous Coward | about a year ago | (#43861363)

A frightening amount of software was designed 15 years ago or more, and the people who wrote it have moved on.

There is also a lot of software that resides in embedded installations that are not easy to update. Where I work, an update would take days to reach the customer even after it is made and tested, because technicians have to fly there first!

Re:Sounds like a huge risk (0)

Anonymous Coward | about a year ago | (#43861593)

Let the programmers who wrote the code do the testing so there's zero delay and then don't require some know-nothing 60-year old head of the department review all code before it goes live.

AH yes. Because programmers testing their own code when they're under tight deadlines - the meeting of which determines whether or not the developer gets a bonus/promotion/raise this year - has proven to be a GREAT model everywhere it's been used in the past.

I'm pretty sure, reading your post, that you're simply an opinionated amateur - you're not a "software programmer" by trade.

Re:Sounds like a huge risk (0)

Anonymous Coward | about a year ago | (#43861885)

The worst security flaw I ever reported needed several thousand lines of code to be modified. It was in an admin GUI that had been localized improperly, and each non-US localized version needed to be fixed.
And the guy who originally wrote the mess was no longer with the company.

Re:Sounds like a huge risk (2)

Actually, I do RTFA (1058596) | about a year ago | (#43862123)

I'm a software programmer so I can honestly say if a company takes more than 7 days to issue a fix, they aren't good.

I doubt there is any company in the world you consider very good. Care to give me a couple? Bonus points if you do the lookups of "longest open critical issue" instead of making me prove they were over 7 days.

A 30 minute delay for "there's cake in the breakroom" and 7+ day delay on "someone's hacking our website" means someone epically screwed up the importance of that e-mail getting relayed to the correct people.

You think it's e-mail relaying? Someone's on vacation? Or working with 3 other people on the feature promised to a customer right away. Or a hundred other reasons why they cannot drop everything.

If the programmers can't read their own damn code that they wrote and figure out why the vulnerability happened, they should be fired... they don't know what the commands they're using ACTUALLY do and that was the cause of the problem.

Wow? So, the bug in the library everyone uses, or a flaw in the compiler, never happened to you? How long have you been programming? It can take a while to realize that the documentation got some subtle feature of the library you're using incorrect. And there's always that issue with documentation, once the libraries get big enough.

Then if it takes more than 7 days to "publish" or "push" a new version of their software live, then the whole project was designed like it's 15 years ago. These days, you need urgent patches faster than that.

It does depend on the software. You seem to think all software is written for the web, and runs on the servers. That's an easier solution (not trivial of course, and you have scale issues). But what if you have a Linux/OS X/Windows XP/Windows Vista/Windows 8 product? On tons of different hardware?

Let the programmers who wrote the code do the testing so there's zero delay and then don't require some know-nothing 60-year old head of the department review all code before it goes live.

Skip QA? Programmers who wrote the code test it? That might work if it's "this function has a buffer overwrite", and the fix is transparent to everyone, but what if the problem arose because the menus were confusing, and people were accidentally reformatting their drive? Are the programmers who thought it was fine in the first place really the best judge of the fix?

Also, have you written an emergency patch? I have, 2 hours before the software went to QA for the final time (the ship deadline was on us, and QA decided between the new build and the old build). Hell yes, I talked through every line of code with another person.

Re:Sounds like a huge risk (1)

TemporalBeing (803363) | about a year ago | (#43864539)

While I generally agree, some projects (like Qt for instance) take a week or more to test something before making a release to production to verify that what was intended to be changed actually was changed, and that it didn't break anything else. This is especially true of platform API projects like Qt, Gtk, etc. where many people rely on the stability of the APIs in those projects, and people using the project then need to have their own testing time on top of that.

However, it also underscores the importance of publishing notices about vulnerabilities, and about how to at least minimize their impact, within a quick time frame, so that downstream products as well as administrators of existing products have appropriate notice to (i) minimize their own exposure, (ii) start planning for integration of new releases of their upstream dependencies, and (iii) make their own notices for their downstream customers.

Ultimately it may not be fixed in every location for 6 months or more; but the sooner people can start fixing or minimizing exposure the better for everyone.

Re:Sounds like a huge risk (0)

Anonymous Coward | about a year ago | (#43867297)

I'm a software programmer.

Not on software of any complexity and importance, obviously.

Re:Sounds like a huge risk (1)

Anonymous Coward | about a year ago | (#43861183)

Going from a bug report, to designing and coding a fix, to testing, to rolling it out to the infrastructure in 5 working days seems like an impossible benchmark to sustain, even with the super brainiacs working at Google.

I'm sorry but you should be able to do this in 24-48 hours tops, even with a large system, or you're just a shitty developer. (If you think "I'm a great developer! And that's impossible" then sorry ... you're a shitty developer who doesn't realize it [wikipedia.org] .) Someplace like Google has the resources to fix the bug and do a full build and regression test in that amount of time, and people who don't drool on the keyboard for 6 weeks before getting around to it. So 7 days is a lot of leeway.

Besides, your systems should be architected for this sort of thing; you know vulnerabilities happen, and you should be able to patch any critical system at the drop of a hat. Anything else is irresponsibility (shitty developers).

Re:Sounds like a huge risk (1)

RaceProUK (1137575) | about a year ago | (#43861311)

I'm sorry but you should be able to do this in 24-48 hours tops, even with a large system, or you're just a shitty developer.

That's assuming the vulnerability is trivial to diagnose, and easy to fix. Plus, that doesn't take into account the testing time required, not just for the fix, but for the regression testing too. Remember: writing code is only about 10-20% of the time it takes to build software.

Re:Sounds like a huge risk (0)

Anonymous Coward | about a year ago | (#43861471)

The rest of the time is drinking free soda while your automated tests run...

Re:Sounds like a huge risk (0)

Anonymous Coward | about a year ago | (#43861503)

That's assuming the vulnerability is trivial to diagnose, and easy to fix.

No, it doesn't. It assumes you're going to be wracking your collective brains and drinking lots of coffee for about 16 hours diving into unfamiliar third-party source to debug a problem that's not really yours but you have to fix anyway. If you can't find it in this amount of time, as someone else said, fire your developers, they suck.

Plus, that doesn't take into account the testing time required, not just for the fix, but for the regression testing too.

It also most certainly does. Don't you have automated regression testing? If not, again, you suck. If you can't run your regression tests in 8-12 hours, you probably suck too, and should dedicate more hardware to it. That's assuming a real, sizable system. Most trivial web crap these days can probably be checked in a few minutes.

Remember: writing code is only about 10-20% of the time it takes to build software.

Er, not really, unless you're doing things really wrong. 25-60% of the total man-hours is coding depending on the project, which includes code for automated testing and similar. Documentation, promotion, etc can take the other chunk, and for games, assets, design, etc. eat a bigger chunk.

This is all however utterly irrelevant to fixing bugs. 100% of your time is analyzing and writing code, and the rest (testing and deployment) should be mostly hands-off, though you should obviously be adding tests to make sure said exploit remains plugged.

I suggest seeing how software houses not mired in confusion and ineptitude do these things. It's not a bunch of shoot-from-the-hip cargo-cult web monkeys banging their heads against PHP until something works and then manually running it a million different ways to make sure everything still works.

Re:Sounds like a huge risk (1)

RaceProUK (1137575) | about a year ago | (#43861625)

No, it doesn't. It assumes you're going to be wracking your collective brains and drinking lots of coffee for about 16 hours diving into unfamiliar third-party source to debug a problem that's not really yours but you have to fix anyway. If you can't find it in this amount of time, as someone else said, fire your developers, they suck.

I've had the occasional bug that took a week to track down, purely because it was so difficult to reproduce. And let's not get started on the poor quality of your average bug report...

Don't you have automated regression testing? If not, again, you suck.

How do you do automated regression testing of telematics firmware?

25-60% of the total man-hours is coding depending on the project, which includes code for automated testing and similar.

That I'll let you have :P

Re:Sounds like a huge risk (0)

Anonymous Coward | about a year ago | (#43861791)

I've had the occasional bug that took a week to track down, purely because it was so difficult to reproduce. And let's not get started on the poor quality of your average bug report...

This is not about fixing random bugs. This is about fixing exploits which are not poorly-reported, but rather "You can {get root,access any data,modify any data,...} on SYSTEM when you do ACTION". If this wasn't eminently reproducible it probably wouldn't be an exploit to begin with.

How do you do automated regression testing of telematics firmware?

The same way you do regression testing of any other piece of software. There is nothing even slightly unusual in this case. This is always one of the first questions "testing newbies" ask, though: "but how do you test *something-that-seems-hard*". It's not hard. There are tools that let you do almost any kind of testing these days, and common practices for testing things that may at first seem difficult. They aren't. Telematics especially should be very straightforward.

(Poorly-written software can be harder to test, but then crap is crap. Don't make a crappy plane and complain it's hard to keep in the air.)

Re:Sounds like a huge risk (1)

RaceProUK (1137575) | about a year ago | (#43862333)

You make yourself sound like such an expert, yet you post anonymously. I do wonder if you actually know what you're talking about...

Re:Sounds like a huge risk (0)

Anonymous Coward | about a year ago | (#43862679)

You make yourself sound like such an expert, yet you post anonymously. I do wonder if you actually know what you're talking about...

Ah, if all else fails, the ad hominem. (Because clearly posting non-anonymously would make all the difference .. then you would question where I worked, what I did, how long, etc, demanding more and more extensive proof, all in an effort to avoid addressing the actual issue.)

Suffice to say that yes, I am a professional in the field, I do know what I'm talking about, and some of us have some concern about NDAs relating to specific practices of employers. However, anyone familiar with automated testing etc will tell you similar things; go ask, or better yet, learn for yourself and apply the techniques. They're not hard, and once you have tests start preventing bugs from slipping into production, you'll wish you'd done it sooner.

Re:Sounds like a huge risk (1)

RaceProUK (1137575) | about a year ago | (#43862799)

I've worked at places with automated testing, and bugs still made it to production. So stop banging on about it as if it's some sort of miracle cure.

Re:Sounds like a huge risk (0)

Anonymous Coward | about a year ago | (#43864355)

Again, we are NOT talking about general bugs. Bugs happen. Testing is not perfect: it's only as good as your tests, and you're almost certainly going to be adding more regularly.

Critical vulnerability fixes, however, are entirely different: a very specific bug, where you want reasonable assurance that the fix doesn't break anything else. It certainly still might, but your tests will at least help quite a bit. This is not a perfect process, but it can be pretty robust if you do it right, and you can follow up with other fixes as necessary. Making sure critical data is not compromised, however, should be the top priority.

Re:Sounds like a huge risk (1)

RaceProUK (1137575) | about a year ago | (#43864465)

So now you admit the process isn't perfect, you still think you could get a critical security fix out in 48 hours without significant risk? I know I'd be extremely nervous about such a short timescale.

Re:Sounds like a huge risk (1)

cbhacking (979169) | about a year ago | (#43864515)

I do wonder if you actually know what you're talking about...

I don't. He (or she) doesn't have a clue what he's talking about, not when it comes to security.

If this wasn't eminently reproducible it probably wouldn't be an exploit to begin with.

That's a dead giveaway, even if it wasn't already obvious. Many, many security bugs repro under specific conditions that may be common (or not; it really doesn't matter) on real-world deployments, but don't closely match developer/tester machines (for example, the POC requires having some software installed that the person reporting the issue has but the developers don't, so it never repros for the devs and the researcher doesn't know why. Or, something that only repros on single-core computers would still hit a non-trivial portion of the world, and would have hit a lot more of it a few years ago, but no dev in the last five years would tolerate working on such a crippled box). Or they may be due to a 1-in-65000 chance, which sounds small (and is, when it's one person trying to reproduce it) but when each infected machine is repeatedly attacking every potential target it can find, that's still plenty dangerous. There's lots of other eye-roll-worthy material in that line, too (for example, the suggestion that vulnerability reports are likely to correctly assess the impact of exploitation is not well borne out in reality).

As for automated regression testing, that will catch a lot of the potential issues, but it won't catch all of them and (most tellingly, here) it is very unlikely to catch security issues. Security testing is very different from functionality testing; the difference between "does it work correctly?" and "can I make it work incorrectly?" is huge. Many of the types of security testing that can practically be automated take a very long time to run; a web scanner looking for XSS might finish in minutes or hours (and I've yet to see one that can find as many issues as manual testing can), but a serious fuzz testing pass can easily take at least a week all by itself, and that's assuming that you are ready to start running it (fuzzer is configured, templates are available, infrastructure is ready) as soon as the fix is checked in. Some types of security issue, like TOCTOU, aren't going to be tested for at all unless the test is explicitly designed to check for the possibility (generally speaking, TOCTOU-vulnerable code functions perfectly unless an outside actor - the attacker - intentionally gets into a race with it). Quick and dirty fixes, even if "correct", also tend to introduce lots of side-channel attacks that may only be possible to spot through code review, such as a fix to an authentication system introducing the possibility of a timing attack.
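
To make the fuzzing point concrete, here is a toy Python harness of the kind described; parse_record is a hypothetical stand-in for the code under test, and real fuzz passes run millions of mutated cases through real parsers (tools like AFL or libFuzzer do this properly), which is exactly why a serious pass takes so long.

    import random

    def parse_record(data: bytes) -> str:
        """Stand-in for the code under test; rejects malformed input by raising."""
        if not data.startswith(b"REC:"):
            raise ValueError("not a record")
        return data[4:].decode("utf-8")

    def mutate(sample: bytes, rng: random.Random) -> bytes:
        buf = bytearray(sample)
        for _ in range(rng.randint(1, 4)):
            buf[rng.randrange(len(buf))] = rng.randrange(256)
        return bytes(buf)

    def fuzz(sample: bytes, iterations: int = 10000, seed: int = 0) -> None:
        rng = random.Random(seed)
        for i in range(iterations):
            case = mutate(sample, rng)
            try:
                parse_record(case)
            except (ValueError, UnicodeDecodeError):
                pass  # expected rejection of bad input
            except Exception:
                print("unexpected failure on iteration", i, repr(case))
                raise  # anything else is a finding worth triaging

    if __name__ == "__main__":
        fuzz(b"REC:hello world")

Ten thousand iterations against a toy parser finish in seconds; real targets, real templates and real coverage goals are what stretch a pass into days.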

Security is a messy, nasty, and time-consuming business, and the attackers are always the ones with all the time. Writing secure code requires training/domain-specific knowledge that most developers don't have, accepting costs (both in development time and execution speed) that most developers try to optimize away, and avoiding assumptions (one of my favorites is "but the user would never do that!") that are otherwise accurate enough for most development. Security testing requires thinking like an attacker (a skill relatively few people seem to have), writing test code that explicitly stresses the unlikely scenarios (the things that normal testing didn't cover), and patience, and at the end of the day all you can do is hope it's good enough.
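
The timing-attack example mentioned above is also easy to sketch. Below, a hypothetical token check written the obvious way and then with Python's constant-time comparison; the naive version can leak, through response timing, how much of a guess matched:

    import hmac

    def check_token_naive(supplied: str, expected: str) -> bool:
        # String == may return as soon as the first differing character is
        # found, so timing can reveal how long a matching prefix was.
        return supplied == expected

    def check_token_constant_time(supplied: str, expected: str) -> bool:
        # hmac.compare_digest takes (roughly) the same time regardless of
        # where the inputs differ, closing that side channel.
        return hmac.compare_digest(supplied.encode(), expected.encode())

Nothing about the naive version fails a functional test, which is the point: this class of bug gets caught by review, or by tests written specifically to hunt for it, not by an ordinary regression suite.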

Re:Sounds like a huge risk (1)

ebno-10db (1459097) | about a year ago | (#43864775)

You lose. Don't you know that on Slashdot being a bombastic ass trumps actually knowing what you're talking about?

Re:Sounds like a huge risk (1)

wierd_w (1375923) | about a year ago | (#43864825)

Re: poor quality bug reports

A good deal of the problem there could be solved with a more structured form. You know, one that isn't just a "short description of the problem" with a submit button, and instead one that has sections for "version of software impacted", "activity performed when the error occurred", and "process to reproduce the bug", as well as some other data that the reporting form automatically pulls, like the current OS, what versions of standard runtime DLLs are installed, etc.

People who lack the vocabulary to describe a problem will of course be unable to accurately describe the problem. You will invariably get reports that are either little more than long screeds of profanites, or extremely vague about what happened, and neither is terribly useful.

Asking "what were you doing when the problem happened?" is more likely to get useful feedback, as the user likely does have the needed vocabulary to describe that aspect of the problem. At the very least, this helps you to reproduce the bug yourself and get the needed information yourself. You can't get that from a report that basically says "your fucking program broke, asshole!"

Another thing to consider, from the programming end, is to make your error windows a little more intuitive than just "error occurred! Click OK to terminate!" Try putting something a little more useful in the error dialog that describes what kind of error happened. Yes, this means you can't be lazy and call a general error-reporting dialog sub that kills the process as a local function, and it makes the code more complex, but how can you expect your users to know what your program is doing if they don't have the source code, are forbidden by copyright law from disassembly, and are fed BS generic error messages as output? They aren't psychic any more than you are, after all. That convenient catch-all error dialog routine will end up shooting you in the foot. Better to fire a more useful dialog, then fire a generalized exit-and-cleanup emergency termination sub immediately afterward instead.

What I am getting at here is that a fair portion of the blame for bad bug reports is on the developer's end intrinsically, for expecting the user to be aware of things that they simply can't be aware of, and then getting frustrated when the user is unable to eruditely explain the full nature of the bug afterwards. Don't commit the dire sin of believing that because you know how it works, everyone else will have that knowledge as well. (Likewise for using debugging tools, and getting at the root of the problem.)

Not trying to be rude or adversarial or anything, just pointing out the obvious causes for this particular problem. (Bad bug reports.)
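
For illustration only, here is roughly what that structured report might look like as data, sketched in Python; the field names are invented, and the point is that the environment details are collected automatically rather than typed in by the user:

    import json
    import platform
    import sys
    from datetime import datetime, timezone

    def build_bug_report(summary: str, activity: str, steps: str, app_version: str) -> str:
        report = {
            # Sections the user fills in:
            "summary": summary,
            "activity_when_error_occurred": activity,
            "steps_to_reproduce": steps,
            "app_version": app_version,
            # Data the form pulls automatically:
            "os": platform.platform(),
            "machine": platform.machine(),
            "runtime": sys.version,
            "timestamp_utc": datetime.now(timezone.utc).isoformat(),
        }
        return json.dumps(report, indent=2)

Even when the free-text sections amount to "your program broke", the automatic fields give the developer something concrete to start from.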

Re:Sounds like a huge risk (2)

Todd Knarr (15451) | about a year ago | (#43861267)

The response isn't necessarily to fix the bug. The response is to mitigate the risk due to the vulnerability. One way is to fix the bug that's behind it. Another is to change configurations or add additional layers to remove exposure due to the bug. For instance there was once a vulnerability in SSH caused by one particular authentication method. Since that method was rarely used and there were alternative ways of doing the same kind of authentication, the most popular immediate solution was to just disable that authentication method. 5 minutes to change a config file and restart sshd and you're done. I'm not sure they ever did fix the bug, and if they did it took at least weeks, but that didn't stop people from protecting themselves within a matter of hours after the problem was made public.
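
To show the shape of that kind of mitigation (the specific directive below is illustrative, not a claim about which option was actually vulnerable in that old SSH case), the change can be as small as one line in sshd_config plus a restart:

    # /etc/ssh/sshd_config -- hypothetical mitigation: disable the affected
    # authentication method while leaving the others enabled.
    ChallengeResponseAuthentication no

    # Then restart the daemon so the change takes effect; the exact command
    # varies by system, e.g. on a systemd-based distro:
    #   systemctl restart sshd

That is the sense in which responding within seven days doesn't have to mean shipping a fully tested patch within seven days: a documented workaround buys the real fix time.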

Re:Sounds like a huge risk (1)

AmiMoJo (196126) | about a year ago | (#43861275)

You have to assume that someone else already discovered the problem and is selling it on the exploit market.

Re:Sounds like a huge risk (1)

Sqr(twg) (2126054) | about a year ago | (#43861295)

This is not a deadline for issuing a fix. What TFA is talking about is the delay before you inform the public about a bug that is being actively exploited, i.e. one that the bad guys already know about. This gives end-users the option of not using the buggy software at all until a patch is available.

Re:Sounds like a huge risk (1)

fuzzyfuzzyfungus (1223518) | about a year ago | (#43861323)

What if a bug can't be fixed and systems patched in 7 days' time? Are they going to cut corners on something like testing?

Going from a bug report, to designing and coding a fix, to testing, to rolling it out to the infrastructure in 5 working days seems like an impossible benchmark to sustain, even with the super brainiacs working at Google.

There isn't a good alternative: If a bug is already being actively exploited, the clock started ticking before Google even knew about it, you just didn't know it yet. The secret is already out, at least one attack system is in the wild, etc. If nobody tells the customers, they risk getting owned and don't know to take precautionary measures above and beyond the usual. If somebody tells the customers, at least some of them might be able to mitigate the risk.

There's room for risk-acceptance bargaining in situations where a bug isn't believed to have gone wild (and so you can balance 'risk of it going wild before we fix' with 'quality and adoption of the fix we have time to build' when deciding how much time to grant); but with bugs already in exploitation, the 'risk of it going wild' is already 100%, starting even before the conversation begins.

Re:Sounds like a huge risk (1)

bill_mcgonigle (4333) | about a year ago | (#43862009)

If nobody tells the customers, they risk getting owned and don't know to take precautionary measures above and beyond the usual.

Exactly. Here's a proposal I made here last year on something called Informed Disclosure [slashdot.org] . Leaving customers in the dark when a workaround that will protect them exists - that's not 'Responsible'. And if it's critical enough, there's always the workaround of disconnecting affected systems. Whether it's 60 days or longer or shorter, customers deserve to know and many vendors will abuse the grace period given the chance.

Re:Sounds like a huge risk (0)

Anonymous Coward | about a year ago | (#43862471)

You have to remember that vendors won't like you claiming there are major defects in their products (whether it's true or not is irrelevant). They will threaten you if you disobey them, and can sometimes follow through with those threats.

This can be solved with anonymity, but then you can't rely on your reputation to prove that people should take you seriously. Full disclosure still works even if you're anonymous, because your claims can be verified. Without full disclosure, there's no way to determine if you're telling the truth, whether your workaround works, or whether configurations you haven't tested are affected.

Vendors will also deny your claims. People believe big companies over individuals, even if they're lying.

Re:Sounds like a huge risk (1)

epine (68316) | about a year ago | (#43867979)

I totally agree. Seven days is long enough for a vendor to formulate a sober verbal response and run it through channels when their customers are already being rooted due to egregious failings in their software products.

At the very least the customers can increase vigilance around the disclosed vulnerability.

Sure wouldn't hurt if this policy leads to fewer egregious and embarrassing software flaws in the first place.

Re:Sounds like a huge risk (0)

Anonymous Coward | about a year ago | (#43861621)

If a company isn't able to come up with a straightforward solution, they can contact the researchers and ask for time and help in patching the security hole prior to disclosure. A responsible whitehat is going to be willing to work with a company that is responsive and proactive.

Re:Sounds like a huge risk (0)

Anonymous Coward | about a year ago | (#43861643)

You imply that nobody will exploit the security hole before the researcher discloses it. This is false, as black hats are actively searching for the same vulnerabilities and will independently find them, sometimes sooner, sometimes later. Pandora's box doesn't open in seven days, it was open when the code was released. Sixty days of non-disclosure gives companies a false sense of security (by obscurity) that the week deadline hopes to correct.

This also empowers the user. A company can release a quick and dirty patch for (sane?) users who value security over stability and later release a stable one. After all, who wants their e-mail to be publicly accessible for 53 more days while the company tests the patch to ensure there isn't a second of downtime?

Re:Sounds like a huge risk (0)

Anonymous Coward | about a year ago | (#43862977)

I agree, but we've lost this battle. The majority of users have been convinced that they don't want to know about vulnerabilities, and have been trained by Microsoft/Cisco/Oracle to shoot the messenger.

Talking about product defects without permission is professional suicide now, it's sad.

Re:Sounds like a huge risk (1)

Synerg1y (2169962) | about a year ago | (#43861645)

That's what I was thinking... 60 days is a bit long; it's more than enough to scope out a network, gain access, and exploit the vulnerability. 7 days is a bit short: not enough time to test, validate, or run through QC. Not sure why Google's leaning toward the other extreme, but why not compromise at something like 21 days, with some allowance for longer development cycles?

Re:Sounds like a huge risk (0)

Anonymous Coward | about a year ago | (#43862675)

If you get pwned during those 7 days, you're going to complain that even that was a bit long to go without a heads-up.

Re:Sounds like a huge risk (1)

silviuc (676999) | about a year ago | (#43881451)

If they can't fix it, they should have mitigating measures in place and at least inform their customers of the problem... This usually does not happen and people get hacked.

Change controlled environments? (1)

countach44 (790998) | about a year ago | (#43861095)

What about corporate environments that are strictly change controlled? The extra visibility may pose significant risk to systems that cannot be patched in such short order...

Re:Change controlled environments? (1)

SJHillman (1966756) | about a year ago | (#43861179)

Every company I've worked with that has any sort of change control procedures generally has a specific policy for critical/emergency updates. Some of those policies are "apply now, ask questions later" whereas some have a specific policy of "it doesn't matter, ALL changes go the normal route and we'll take the risk." The key is having a policy that at least acknowledges the risk of delaying.

Re:Change controlled environments? (1)

Todd Knarr (15451) | about a year ago | (#43861211)

They're already at significant risk due to the vulnerability. The only difference is that now they have to acknowledge and mitigate that risk instead of pretending it isn't there.

Re:Change controlled environments? (1)

anthony_greer (2623521) | about a year ago | (#43861257)

There should be protocols in place for urgent or emergency out-of-cycle changes. It usually involves the two or three key technical people agreeing with a manager and a key business decision maker on a course of action and executing it; any paperwork is done by the manager(s) while the technical people fix the issue right then and there.

Re:Change controlled environments? (1)

taviso (566920) | about a year ago | (#43863037)

Hackers don't give a shit about your change control, they're not going to give you a head start because you're slow to respond to threats.

How does not telling anyone that people are actively exploiting this change that?

Active exploits (1)

Archangel Michael (180766) | about a year ago | (#43861387)

Actively exploited (in-the-wild) security flaws should have ZERO-day disclosure. And companies should be required to offer up mitigation tips for people who have software that isn't patched.

Re:Active exploits (1)

gmuslera (3436) | about a year ago | (#43861765)

The problem must be solved as soon as possible, but since it can take a bit of time to find the exact problem and test the solution, it's better to give a few days. Anyway, once the cause is clear, warning users (without disclosing the problem in enough detail to let more people exploit it) so they can take measures to mitigate it should be the priority. And putting a standard time limit of a week before full disclosure keeps companies from sitting on vulnerabilities without doing anything about them for months, or worse, suing under the DMCA or similar whoever tries to warn about the problem.

Re:Active exploits (0)

Anonymous Coward | about a year ago | (#43862719)

There is always an emergency solution: pull the power cord. I really really really don't want to get pwned, why not give me the opportunity to do that?

Failed (0)

Anonymous Coward | about a year ago | (#43861661)

If I didn't receive any response from a vendor within 7 days of a report, then it might be worth it.

But they can't honestly expect a company to receive, process, and understand the issue, create a patch, QA the patch, and deploy it all within 7 days. Forcing that kind of response time would introduce more problems than it would fix.

7 days? (1)

TheSkepticalOptimist (898384) | about a year ago | (#43861749)

Google can push out 20 versions of Chrome in 7 days.

What good is it (0)

Anonymous Coward | about a year ago | (#43861783)

What good is it to protect the software from vulnerabilities when the Android base is running 2.3 with no hope of an upgrade from the carriers? Sure, we can secure PCs, but not most phones.

They don't expect 7 days...they want less than 60 (1)

Anonymous Coward | about a year ago | (#43861955)

They're not expecting to get 7 days, but they'll reach a compromise close to what they actually want, which is probably a couple of weeks, maybe 30 days.

Personally I think that 2 weeks is reasonable.

You could get into trouble if the guy who knows the intricacies of that area is on holiday/leave for those two weeks but that's an education/complexity problem that you should never place yourself in.

It all relies on having good testability so that you're confident that the changes have no side effects.

Who can afford this policy? (-1)

Anonymous Coward | about a year ago | (#43862013)

Google for one!

Always knew the company was just a self-serving, evil pit.

By the way, everyone, it's time to release your medical records to the public... or at least to Google.

Insecure throughout the year (1)

A beautiful mind (821714) | about a year ago | (#43862269)

If we ask the question: "for how many days in a year is a specific browser/application vulnerable to an unpatched exploit?", then we get awful numbers. There are plenty of applications used by millions of people where that number is more than half of the year.

The 7 day limit is probably a compromise between trying to get the vendor to fix the vulnerability that is actively being exploited and disclosing the information and thus increasing the pool of people who'd use the exploit.

For vulnerabilities where there is no known active exploitation, we should assume that there is. 30/60-day delays are unforgivable.

App approval (3, Insightful)

EmperorOfCanada (1332175) | about a year ago | (#43862563)

If one hour ago I was notified of a flaw in my app, 59 minutes ago I fixed it, and 58 minutes ago I submitted it for approval, it could easily be a week before it gets approved.

I would say that after a week they should announce that there is a flaw, but not what the flaw is. Then maybe after 30 days release the kraken (the exploitable flaw, that is).

Let's say they discover a pacemaker flaw where a simple android app could be cobbled together to give pacemaker people nearby fatal heart attacks. If they release that in a week then they are vile human beings.

Most companies do seem pretty slothful in fixing these things but pushing for a company to process the flaw, analyze the flaw, find a solution, assign the workers, fix it, test it, and deploy it in under a week seems pretty extreme.

Re:App approval (0)

Anonymous Coward | about a year ago | (#43864175)

Because we know that it's impossible for two people to discover the same flaw, so they should keep it hush-hush? That's naive.

The purpose of releasing it in a week is to warn the people that have defective pacemakers that they need to take extra precautions. If someone disrupts my pacemaker with a bug you knew about months ago but kept quiet, then you are the vile human being.

Re:App approval (1)

phantomfive (622387) | about a year ago | (#43864415)

Let's say they discover a pacemaker flaw where a simple android app could be cobbled together to give pacemaker people nearby fatal heart attacks. If they release that in a week then they are vile human beings.

Remember, they're talking about vulns that are actively being exploited, which means people are already dropping dead because of pacemaker problems.

The correct thing to do is to let people know so they can stay at home and reduce exposure to attackers until the flaw is fixed.

Re:App approval (1)

EmperorOfCanada (1332175) | about a year ago | (#43865665)

Very good point, but I am thinking about the human waste products that actively look for known exploits to exploit. For example, there are a whole lot of people who wait for OS updates to see what has changed so they can run out and make exploits, knowing that a huge number of people don't upgrade very quickly.

But yes, giving out information so that people can run for the hills can be useful.

It all boils down to information being power. So who will best use that power should be the key question before releasing the information. Not paternalistically, but realistically: who will, in actuality, use the information for good?