
Stop Fixing All Security Vulnerabilities, Say B-Sides Security Presenters

timothy posted about a year ago | from the choosing-battles dept.


PMcGovern writes "At BSidesLV in Las Vegas, Ed Bellis and Data Scientist Michael Roytman gave a talk explaining how security vulnerability statistics should be done: 'Don't fix all security issues. Fix the security issues that matter, based on statistical relevance.' They looked at 23,000,000 live vulnerabilities across 1,000,000 real assets, which belonged to 9,500 clients, to explain their thesis."


Fuck Obummer!! (-1, Troll)

Anonymous Coward | about a year ago | (#44511259)

Translation: Don't fix the ones the NSA had inserted!

Obummer's "hope and change" at work.

Re:Fuck Obummer!! (0)

NoNonAlphaCharsHere (2201864) | about a year ago | (#44511575)

Obvious troll is oblivious.

Re:Fuck Obummer!! (-1)

Anonymous Coward | about a year ago | (#44512013)

Obvious bootlicking shill is obvious.

erm, no? (0)

X0563511 (793323) | about a year ago | (#44511265)

How about you fix what you can?

Re:erm, no? (4, Informative)

MacTO (1161105) | about a year ago | (#44511351)

The article is talking about fixing what you can. It simply outlines how to prioritize the issues in order to figure out what you can fix with limited resources.

Re: erm, no? (2, Insightful)

Anonymous Coward | about a year ago | (#44511425)

They say stop when they mean prioritize. Theoretically, there should be some computer scientists who know how to use English.

Re: erm, no? (4, Funny)

AliasMarlowe (1042386) | about a year ago | (#44512169)

Theoretically, there should be some computer scientists who know how to use English.

Theory and reality are the same, in theory. In reality, however...

Re:erm, no? (5, Interesting)

Anonymous Coward | about a year ago | (#44511485)

The article is talking about fixing what you can. It simply outlines how to prioritize the issues in order to figure out what you can fix with limited resources.

That's a pretty damn weak model. It doesn't take a genius to understand that if you use statistics to prioritize security issues to address (or more to your point, cull out ones you won't address due to limited resources), then it's only a matter of time before attackers start figuring out ways to use those statistical models against you, ultimately learning about the "can't-get-to-it" threat list and focusing attack vectors there.

Not to mention management being "sold" on this model and cutting 20% of your IT support staff next year due to the "increased efficiencies of patch management". Have fun doing more work.

Re:erm, no? (1)

bberens (965711) | about a year ago | (#44511917)

This model is already largely in place. Companies focus on patching the vulnerabilities that are already being exploited in the wild. After that, they focus on some amalgamation of the lowest-hanging fruit and whatever is most likely to be exploited.

Re:erm, no? (2)

KingMotley (944240) | about a year ago | (#44512583)

it's only a matter of time before attackers start figuring out ways to use those statistical models

They already do. They use attacks that hit the largest number of targets. Using uncommon vulnerabilities would be wasteful when you could attack more common ones.

Re:erm, no? (2)

ediron2 (246908) | about a year ago | (#44514197)

No, GP is right. It's a different scenario, but it's valid:

If 1 in a thousand users installs X, find a way to target X across a corporation. One hit gets you in. Beachhead there, figure out where to go next or what you can collect.

**THAT** is how to discreetly pwn a corporate net.

Some attackers go big, because their payout is # of machines taken. Some attackers are after a narrow niche: what'll company X be announcing, how their stock is likely to perform, data of value to competitors, etc. Their payout is the same if they own one or a thousand machines in a corporation.

Re:erm, no? (3, Interesting)

martas (1439879) | about a year ago | (#44512631)

That's why you need a game-theoretic, adversarial model instead of a simple statistical model based on past observations. Regret minimization, multi-armed bandits, etc.
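For the curious, a minimal sketch (not from the talk) of one such adversarial approach, EXP3, framed as repeatedly choosing which vulnerability class to spend remediation effort on; the reward function is a made-up stand-in for "attacks blocked this cycle":

    import math, random

    def exp3(reward_fn, n_arms: int, rounds: int, gamma: float = 0.1):
        """EXP3: adversarial multi-armed bandit. Each round, pick an 'arm'
        (here: which vulnerability class to spend remediation effort on),
        observe a reward in [0, 1], and reweight."""
        weights = [1.0] * n_arms
        for t in range(rounds):
            total = sum(weights)
            probs = [(1 - gamma) * w / total + gamma / n_arms for w in weights]
            arm = random.choices(range(n_arms), weights=probs)[0]
            reward = reward_fn(t, arm)        # e.g. attacks blocked, scaled to [0, 1]
            est = reward / probs[arm]         # importance-weighted reward estimate
            weights[arm] *= math.exp(gamma * est / n_arms)
        return weights

    # Hypothetical usage: 3 classes of work (remote-exploitable, local, DoS-only),
    # against an adversary that rotates its focus every 50 cycles.
    def simulated_reward(t, arm):
        shifting_focus = (t // 50) % 3
        return 0.9 if arm == shifting_focus else 0.1

    print(exp3(simulated_reward, n_arms=3, rounds=300))

The point is that the policy keeps adapting as the adversary shifts, rather than locking in on last year's statistics.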

Re:erm, no? (0)

Anonymous Coward | about a year ago | (#44513617)

First terrorists, and now bandits?! I'm calling Clint Eastwood.

Re:erm, no? (1)

chuckinator (2409512) | about a year ago | (#44513775)

You should read Mark Twain's The McWilliamses and The Burglar Alarm. Your suggestion peddles an overly complex burglar alarm that will take more time, effort, and resources than just fixing the bugs as they come in.

Re:erm, no? (0)

Anonymous Coward | about a year ago | (#44512903)

Anyone remember the Star Trek example? The "sleep" command was a "low priority" bug for the Borg, but it did take out one of their larger carriers.

Re:erm, no? (0)

Anonymous Coward | about a year ago | (#44513153)

Real-world example: one officially-catalogued exploit involves web servers that display the requested URL on a 404 page after filtering out anything that could be interpreted as HTML. In theory, someone could redirect you to a URL like,

http://megacorp.com/error-compromised_account,%20call_999-999-9999_for_assistance [megacorp.com]

And have a team of phishing agents at that number to try and coax your username & password out of you. The thing is, absent a real cross-site scripting vulnerability, you'd have to be a total complete fucking IDIOT to fall for something like,

404 Not Found

The following URL could not be loaded: http://megacorp.com/error-compromised_account [megacorp.com] call_999-999-9999_for_assistance

Yet some companies would treat that as a serious vulnerability, equal in importance to one that allows you to frame a login page and capture the user's login credentials, or a cross-site request forgery exploit where attackers can trigger drive-by, unintended actions in your name by embedding GET-encoded URLs into IMG tags elsewhere.

Re:erm, no? (3, Insightful)

fustakrakich (1673220) | about a year ago | (#44511489)

I believe the word is 'triage'.

Re:erm, no? (1)

amorsen (7485) | about a year ago | (#44514561)

Yes the article is wrong. It assumes that software vendors would leave security vulnerabilities with entries in Metasploit unfixed for days or even weeks. Surely no vendor would be that irresponsible. Right? Right???

Re:erm, no? (0)

Anonymous Coward | about a year ago | (#44525755)

I am the author of the post. Look at the NSS Labs preso at the same BSidesLV on irongeek. They test every IDS/firewall-2.0 configuration against Metasploit. The best, most expensive, dual-layer systems still can't detect 26% of Metasploit. That's why a complex game-theoretic model (in which I have actually published) won't work: not enough information on the attacker to make disjoint strategy sets.

Re:erm, no? (0)

Anonymous Coward | about a year ago | (#44514767)

To be more precise, the article says that you get better results, in terms of having a higher chance of remediating a vulnerability with observed breaches, if you focus on vulnerabilities for which exploits are available in both of the widely known exploit databases, Metasploit and ExploitDB. That makes intuitive sense -- most attackers don't develop their own attacks; they wait for published exploit toolkits. If you focus on fixing vulnerabilities for which attacks have been made widely available, you will block more attacks than if you simply choose vulnerabilities at random.

According to the article, if you patch a random vulnerability for which exploits are available in both Metasploit and ExploitDB, you have a 30% chance of patching one of the vulnerabilities for which breaches were observed. I.e., 30% of the vulnerabilities with exploits in both databases have had actual, observed attacks in the wild. By comparison, a random vulnerability with the most severe CVSS score (level 10) has only a 3.5% chance of being in the observed-breach set.
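To put the comparison in code, a minimal sketch (not the authors' analysis), with group counts made up purely to reproduce the percentages quoted above:

    # Of the vulnerabilities you could patch in a group, what fraction belongs
    # to the set with observed breaches?  All counts below are hypothetical.

    def breach_hit_rate(total_in_group: int, breached_in_group: int) -> float:
        """Chance that patching a random vulnerability from this group hits
        one that has an observed breach."""
        return breached_in_group / total_in_group

    in_both_exploit_dbs = breach_hit_rate(total_in_group=1_000, breached_in_group=300)
    cvss_10_only        = breach_hit_rate(total_in_group=10_000, breached_in_group=350)

    print(f"patch random vuln in Metasploit AND ExploitDB: {in_both_exploit_dbs:.1%} hit rate")
    print(f"patch random CVSS 10 vuln:                     {cvss_10_only:.1%} hit rate")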

Re:erm, no? (2)

bzipitidoo (647217) | about a year ago | (#44514969)

Sorting bugs into "security vulnerabilities" and "other" is prioritizing.

Security people talk as if the starting point is security vulnerabilities. It's not. From a functional view, there's not much difference between a bug that breaks crucial functionality and a DoS attack.

It's amazing how many security vulnerabilities rely on age-old bugs such as buffer overruns and dirty memory that are easily fixed if we're willing to live with slightly slower computers. We can program the OS to blank memory whenever it is allocated and, for extra safety, whenever it is freed. We can add bounds checking to library routines such as C's infamous gets(). But all that hurts performance. It's a tradeoff we've been unwilling to make. How many people run SELinux? For most, the time it takes to learn and administer an SELinux system is just not worth the scanty benefits. For instance, SELinux does very little to protect you from NSA snooping. Another approach is the microkernel. Why haven't we pursued microkernels? A microkernel architecture might reap ten times the security benefits we could ever hope to net by constantly patching drivers, and it gets rid of a whole class of vulnerabilities involving the escalation of flaws in drivers. What we can do cheaply is punt illegal-access problems to the OS and hardware, leaning on virtual memory paging to protect programs from each other and No Execute bits to protect a program's code from itself.

The entire approach of zeroing in on vulnerabilities is fire fighting. And saying that we should choose which fires to fight is not particularly insightful or helpful. Better to build systems that don't catch on fire so easily.

Re:erm, no? (0)

Anonymous Coward | about a year ago | (#44515723)

Great, so to effectively deep-penetrate systems all I need is a catalog of infrequently used vulns while I let the script kiddies spam the big ones.

Really fucking brilliant plan.

Security is a chain. Don't trust anyone advocating it's OK to have a few weak links.

Re:erm, no? (4, Insightful)

ackthpt (218170) | about a year ago | (#44511607)

How about you fix what you can?

That's the fly-swatter approach - you hit the flies you can and ignore those you can't get to.

'Don't fix all security issues. Fix the security issues that matter, based on statistical relevance.'

That line reminds me of the old TQM that was run past us decades ago (and then promptly forgotten by 90% of the Franklin Planner-toting crowd): fix what really needs fixing first. I'm sure this bit of wisdom didn't require TQM to come along (you can probably find it in Hamlet if you know where to look): you fix your most grievous wound first and worry about your bruises later. But we (in my department) felt rather put-upon when these TQM zombies came around and told us what a sea change it would be for our practices and productivity when we embraced what we already knew.

Re:erm, no? (0)

Anonymous Coward | about a year ago | (#44512739)

you can probably find it in Hamlet

Just to jump start the ad absurdum crowd, you can find it in Beowulf with the prioritization of killing Grendel, then his mother.

Re:erm, no? (1)

sjames (1099) | about a year ago | (#44512467)

because some things just aren't worth fixing, even if you can.

Really? (0)

Anonymous Coward | about a year ago | (#44511277)

Did we really need a thesis to figure out severity/frequency for our priority?

Re:Really? (1)

ogar572 (531320) | about a year ago | (#44511303)

If they received government funding to do this research, then yes we do.

Re:Really? (4, Funny)

robot256 (1635039) | about a year ago | (#44511889)

If you line up all of your straw men in a row, they will look like an army and scare your opponent away.

Re:Really? (1)

chuckinator (2409512) | about a year ago | (#44513839)

No, you don't. You just need a paragraph in your statement of work specifying that defects are prioritized by severity according to their impact on the operation of the system. You can describe how you assign priorities, and then you're done. Government contract disclosure requirements are met, and you didn't have to contract an armchair pontificator (i.e., most PhDs) to write a thesis for you. Of course, since you can bill all of those hours back to the client, it's really profitable to charge extra for an overhead task that does absolutely nothing to move a project forward.

A better way to phrase it: (4, Insightful)

intermodal (534361) | about a year ago | (#44511297)

Prioritize the important vulnerabilities. But that should in no way discourage people from fixing the less important ones.

Don't let perfect become the enemy of good.

Re:A better way to phrase it: (5, Funny)

Joce640k (829181) | about a year ago | (#44511385)

Everybody knows hackers will just shrug and give up after you fix 90% of your vulnerabilities.

Re:A better way to phrase it: (4, Insightful)

SirGarlon (845873) | about a year ago | (#44511743)

If the attacker's objective is something fungible like credit-card data, then he may, indeed, shrug and move on to an easier target after his first several attacks fail. Why would he waste time on a locked door when there is probably an unlocked house next door? (Figuratively speaking, of course.)

If the attacker's motivation is specifically against *you*, say politically motivated attacks like those Anonymous makes, or industrial espionage, then the bar for the defender is a lot higher, because the attacker can't make progress toward his goals by attacking someone else.

So how much effort you should expend on defense depends on your threat model.

Re:A better way to phrase it: (0)

Anonymous Coward | about a year ago | (#44512361)

I don't agree. Yes, your scenario might happen, but what also might happen is the attacker just adjusts his tool to look for the unlocked window on the second floor that is left open on every house.

Re:A better way to phrase it: (1)

The Moof (859402) | about a year ago | (#44513167)

move on to an easier target after his first several attacks fail

Of course, it's simply a matter of a lucky attacker choosing one of the "low priority fix" vulnerabilities as an attack vector and figuring out how to use it. Suddenly, that unfixed vulnerability made that difficult target into an easy one.

In terms of your analogy, the lock may be exceedingly difficult to pick until the thief realizes they can crawl into the open window on the second story. They just needed a ladder.

Re:A better way to phrase it: (1)

SlashV (1069110) | about a year ago | (#44513567)

Why get a ladder when you can just walk in next door without one?

The argument is moot.

It's like it is with bike locks. Getting a better one does not guarantee that your bike won't get stolen, but it does help! And 100% security is always unattainable.

Feynman got it wrong. (1)

GPS Pilot (3683) | about a year ago | (#44514655)

Feynman went on to say something disparaging about religion:

It doesn't seem to me that this fantastically marvelous universe, this tremendous range of time and space and different kinds of animals, and all the different planets, and all these atoms with all their motions, and so on, all this complicated thing can merely be a stage so that God can watch human beings struggle for good and evil — which is the view that religion has. The stage is too big for the drama.

The problem here is, Feynman was thinking too small. Maybe the universe is a facility in which a deity can watch the evolution of a trillion different intelligent species play out. In that case, the stage is just right for the drama.

Re:A better way to phrase it: (2)

SCHecklerX (229973) | about a year ago | (#44511955)

There's also priority based on ease of fix or mitigation. If you can mitigate a problem and then fix the core of it later, that should be done. This is nothing but basic risk management that any security or system administration professional already does.

Re:A better way to phrase it: (1)

intermodal (534361) | about a year ago | (#44512219)

yes, exactly.

Re:A better way to phrase it: (1)

fermion (181285) | about a year ago | (#44512677)

Before profilers became common, developers would waste time optimizing functions that were only run once, instead of leaving seldom-used functions alone and spending most of their time on the functions that actually took up 80% of the run time. Cost-benefit.
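For reference, a minimal profiling sketch (the functions here are hypothetical) showing how a profiler points at the code that actually consumes the run time:

    import cProfile
    import pstats

    def rarely_called():          # runs once; not worth hand-optimizing
        return sum(range(1_000))

    def hot_loop():               # dominates the run time; optimize this one
        total = 0
        for _ in range(1_000):
            total += sum(range(10_000))
        return total

    def main():
        rarely_called()
        hot_loop()

    # Profile, then print the functions that actually consume the run time.
    cProfile.run("main()", "prof.out")
    pstats.Stats("prof.out").sort_stats("cumulative").print_stats(5)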

Re:A better way to phrase it: (1)

oGMo (379) | about a year ago | (#44512981)

Don't let perfect become the enemy of good.

How is ignoring the lesser issues in favor of the glaring issues "perfect" over "good"? This is not about twiddling with the colors of the buttons and the size of fonts. Those aren't the big issues, unless you're a bad manager. This is about fixing the critical vulnerabilities and terrible bugs and ignoring the trivial, perfectionist stuff.

Re:A better way to phrase it: (0)

Anonymous Coward | about a year ago | (#44513265)

Did you even read the post you responded to? I'm pretty sure that post was talking about prioritization, not a total neglect of anything but the top priority.

Re:A better way to phrase it: (0)

Anonymous Coward | about a year ago | (#44517303)

It's like you said nothing at all!
In this world, "top priority" gets worked and "other" gets left for tomorrow. Except tomorrow more "top priority" crap shows up and the "other" stays in "other" forever.

How about (4, Insightful)

Monoman (8745) | about a year ago | (#44511337)

Important items get fixed first. Easy items usually come next. Everything else gets fixed after that.

Re:How about (1)

Dynedain (141758) | about a year ago | (#44511373)

That's exactly what they're saying, and providing a method for rating importance.

I know this is /., but did you at least read the summary?

Re:How about (3, Funny)

Anonymous Coward | about a year ago | (#44511513)

That's exactly what

Sorry, but maybe you should know by now, this is /., so that's all I had time to read before my self-centered attention span waned and drifted back to myself. Now, since I'm more important than you, I'm going to lecture you on why my opinion is better than yours, based on the amount of your post I was able to read before I got bored looking at something that isn't me. First...

Oh, wait, I found something more important. Someone's being WRONG about my favoritest cartoon in the whole wide world evar, so I need to go insult the lesser beings! Bye!

Re:How about (0)

Anonymous Coward | about a year ago | (#44511581)

If you weren't doing what they're talking about already you should probably consider another career.

More self-masturbatory "research" nonsense that insists on explaining the perfectly obvious.

Re:How about (0)

Anonymous Coward | about a year ago | (#44512501)

I know this is /., but did you at least read the summary?

I know this is Slashdot, but does the headline really need to be wrong?

Re:How about (1)

Monoman (8745) | about a year ago | (#44512817)

Yes I read the article. The hard part is defining what is important. The authors felt the likelihood of something happening should be given more weight when determining importance. Not everyone is going to agree if you have a group of people deciding which things should get fixed first.

Re:How about (2)

davidwr (791652) | about a year ago | (#44511457)

How about everything else being equal, important items get fixed first. Easy items usually come next. Everything else gets fixed after that.

If I have an important item that will take 2 weeks and a team of 2 developers to fix, or 5 items that are only half as important but which take 1 developer 1 day to fix, well, you do the math.

If I have a defect that's affecting 100M customers of an end-of-life, low-revenue product used only by relatively unimportant customers, but it's hurting them in a pretty bad way, and another defect that's affecting 50M end users, 80% of whom are in relatively important customers, but it's impacting them less severely, well, that's not going to be easy to prioritize.

The real judgement call here is deciding how "important" important really is.
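One back-of-the-envelope way to "do the math" above is to rank items by importance delivered per developer-day. The numbers below just mirror the hypotheticals in the comment (and assume 5-day weeks):

    # Sketch: rank work items by importance per developer-day of effort.
    # Importance scores are arbitrary units, echoing the examples above.

    items = [
        # (name, importance, developer_days)
        ("big important fix", 10.0, 2 * 5 * 2),   # 2 devs for 2 weeks = 20 dev-days
        ("small fix 1", 5.0, 1), ("small fix 2", 5.0, 1), ("small fix 3", 5.0, 1),
        ("small fix 4", 5.0, 1), ("small fix 5", 5.0, 1),
    ]

    for name, importance, days in sorted(items, key=lambda x: x[1] / x[2], reverse=True):
        print(f"{name}: {importance / days:.2f} importance per dev-day")

Under that sort key, the five half-as-important one-day fixes come out ahead of the big two-week item, which is exactly the judgement call the comment describes.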

Re:How about (0)

Anonymous Coward | about a year ago | (#44511501)

Important items get fixed first. Easy items usually come next. Everything else gets fixed after that.

Allow me (if I understand you correctly) to phrase it a little better:
Important items that are easy to fix get fixed first, important items that are hard to fix get fixed next, and unimportant items last (of those unimportant items, first the easy to fix and last the hard to fix).

Re:How about (0)

Anonymous Coward | about a year ago | (#44511527)

Important items get fixed first. Easy items usually come next. Everything else gets fixed after that.

I'm guessing you're still in college, and have never worked at a real company where the budget stops after "fixed first".

This is the inherent problem with trying to put security issues into buckets. Security as a whole is a priority. Period. Viewing security as anything less than that only welcomes opportunities for budget cuts.

Misleading titles all around (4, Informative)

Samantha Wright (1324923) | about a year ago | (#44511367)

Their real point is, if you have limited resources, prioritize the vulnerabilities that are (a) currently being exploited and (b) most likely to be exploited given the habits of your favourite boogeyman. Sometimes that means not starting on vulnerabilities as soon as they come in, because you're saving your resources for the chance there's a bigger problem later. Their thesis is about saving your money and time for the most important stuff, and assumes that threats only come from lazy blackhats who prefer certain classes/types of vulnerabilities. Buried in this is the assumption that a given piece of software has an infinite number of vulnerabilities that are discovered at random.

Statistically, what they're saying is sound if organized crime is your biggest enemy, assuming organized crime's habits don't change any time soon. It's obviously not good enough if you're concerned about, say, a malicious government organization with an absurd budget.

Re:Misleading titles all around (2, Funny)

Anonymous Coward | about a year ago | (#44511469)

Their real point is, if you have limited resources, prioritize the vulnerabilities that are (a) currently being exploited and (b) most likely to be exploited given the habits of your favourite boogeyman.

Sounds good! So, everyone who has UNLIMITED resources can ignore this article. It only applies to the VERY SMALL NUMBER of people who have limited resources.

Re:Misleading titles all around (1)

postbigbang (761081) | about a year ago | (#44511577)

Yeah, the road to hell is paved with statistical good intentions.

Re:Misleading titles all around (2)

Hognoxious (631665) | about a year ago | (#44511673)

the road to hell has a 97.3% chance of being paved with statistical good intentions.

FTFY.

Re:Misleading titles all around (1)

msobkow (48369) | about a year ago | (#44511653)

As most assaults are performed by mindless script kiddies running the hack tools of the week, it's sound advice. There are very few actual black hats creating those tools, but thousands upon thousands of ignorant kids who think themselves l33t because they can download something and click "run".

Re:Misleading titles all around (1)

khasim (1285) | about a year ago | (#44511677)

Buried in this is the assumption that a given piece of software has an infinite number of vulnerabilities that are discovered at random.

That's the part that I found to be the weirdest bit in there. And then they put a sensationalistic title on it.

Instead, I'd prioritize work based on my own categorization (sketched in code after the list):

1. A remote attack that gains root access that does NOT require human intervention or other app running.

2. A remote attack that gains non-root access that does NOT require human intervention or other app running.

3. A local attack that gains root access that does NOT require human intervention or other app running.

4. A local attack that gains non-root access that does NOT require human intervention or other app running.

5. A remote attack that gains root access that requires some human interaction or some combination of apps.

6. A remote attack that gains non-root access that requires some human interaction or some combination of apps.

7. A local attack that gains root access that requires some human interaction or some combination of apps.

8. A local attack that gains non-root access that requires some human interaction or some combination of apps.

9. Remote OS crash.

10. Remote app crash.

11. Local OS crash.

12. Local app crash.
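A rough sketch (nobody's official scheme) of that categorization expressed as a sort key; it covers the main dimensions of the list above and leaves out the OS-vs-app distinction for the crash-only items:

    from dataclasses import dataclass

    @dataclass
    class Vuln:
        name: str
        remote: bool          # exploitable over the network
        gains_root: bool      # yields root rather than user-level access
        needs_user: bool      # requires human interaction or another app
        crash_only: bool      # only crashes an OS/app, no access gained

    def priority_key(v: Vuln):
        # Lower tuple sorts first: access-gaining before crash-only,
        # no-interaction before interaction, remote before local, root before non-root.
        return (v.crash_only, v.needs_user, not v.remote, not v.gains_root)

    vulns = [
        Vuln("local app crash", remote=False, gains_root=False, needs_user=False, crash_only=True),
        Vuln("remote root, no interaction", remote=True, gains_root=True, needs_user=False, crash_only=False),
        Vuln("local root via user click", remote=False, gains_root=True, needs_user=True, crash_only=False),
    ]
    for v in sorted(vulns, key=priority_key):
        print(v.name)

The reply below, which ranks remote crashes above local access bugs, amounts to nothing more than reordering the fields in that tuple.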

Re:Misleading titles all around (1)

Todd Knarr (15451) | about a year ago | (#44511809)

I'd adjust that. 9 and 10 in particular can be used in a DoS attack, which can be just as damaging as an attack that gains access to the data behind an application. I'd tend to prioritize it 1, 2, 5, 6, 9, 10, 3, 4, 7, 8, 11, 12. And the first 6 would all be high-priority items that need to be fixed soonest, the prioritization would be strictly relative (if I don't have enough people I need to decide which to put people on first, but all of them take priority over anything else).

Re:Misleading titles all around (1)

khasim (1285) | about a year ago | (#44512111)

I'd adjust that.

No problem. Everyone will have their own idea of which are most important.

My rationale for that order is the possibility that other apps with similar exploit levels (or even lower in some cases) can be "chained" together to get root access (whether local or remote).

Looking at the order you placed them in, I'd guess that you prioritized exploits for remote access over root access.

23 million vulnerabilities? (0)

Anonymous Coward | about a year ago | (#44511389)

I think you'd better start with what constitutes a security vulnerability first.

Re:23 million vulnerabilities? (-1)

Anonymous Coward | about a year ago | (#44511411)

It means I can stick my big dongle into your rear port.

Here's a better approach... (0)

Anonymous Coward | about a year ago | (#44511429)

Fix the bloody tools that create the vulnerabilities.

Does C/C++ lead to code which can be compromised because a programmer does not know enough?

Then abandon the languages as being broken.

Start at the beginning and everything else will fall into place.

Re:Here's a better approach... (0)

Anonymous Coward | about a year ago | (#44511587)

Or actually teach programmers to be competent like we used to instead of putting mittens on everyone to appeal to the lowest common denominator?

Re:Here's a better approach... (0)

Anonymous Coward | about a year ago | (#44511761)

It's not a viable approach; shit happens.

djbdns (2)

Joining Yet Again (2992179) | about a year ago | (#44511453)

How does software like djbdns manage to be nearly free of discovered vulnerabilities? Is it a popularity/type-of-user thing? Or has the code genuinely been written to be almost impenetrable?

tl;dr Why do so many things need fixing in popular pieces of software which could easily command the most competent developers?

Re:djbdns (4, Informative)

Todd Knarr (15451) | about a year ago | (#44511555)

Attitude. Some software is written by anal-retentive paranoid cynical bastards who make sure every bit of code is iron-clad and air-tight, who take any flaw as a personal insult to be exterminated. Flaw? Forget flaw, even a slight deviation from what they've determined to be correct operation is hunted down mercilessly no matter how long it takes. Any cruft in the design, anything that's not clean and perfect, is lopped off and re-done until everything fits together correctly. If that results in a delay, so be it. The only work that's discarded is work that doesn't contribute to the correctness of the result.

Other code is produced by people who're fine with leaving cruft and ugly bits in as long as they don't detect any errors coming from it. Rework and clean-up is fine, as long as it doesn't impact the delivery schedule.

3 guesses which kind of developer produces which kind of software.

Re:djbdns (0)

Anonymous Coward | about a year ago | (#44511987)

Well, that and finite resources, especially with projects considerably more complex than dns, and which evolve over time. Something which works well enough now is often better than something perfect in 10 years, and so we have imperfect software.
One can hope that at least basic components over time will become as bug free as djbdns, but new, large projects will always have issues.

Re:djbdns (1)

skovnymfe (1671822) | about a year ago | (#44512081)

One gets some or no money for their code. Others get money to ignore errors and add features. That about sums it up?

Re:djbdns (0)

Anonymous Coward | about a year ago | (#44511599)

tl;dr Why do so many things need fixing in popular pieces of software which could easily command the most competent developers?

Maybe the popularity of the software is not correlated at all to the competence of the developers?

Re:djbdns (1)

Keruo (771880) | about a year ago | (#44511657)

DNS is something that should be easy to document by providing a bunch of examples.
There aren't that many ways to configure it once you consider the variations you can do.
For some reason djbdns does not do this; it gives vague hints and makes you read 50 man pages, followed by 100 blog posts and 200 websites with obsolete or only slightly relevant info on what you're trying to accomplish, and if the position of the moon is decent, your tinkering will eventually work.
When you reach the "oh, it works" phase, you follow the "if it works, don't f**king touch it!" mantra and you're good.

I've tried going through the djbdns code to implement some changes, and it's really well written in the sense that you can get a grasp of what's going on in there quite fast.
The code is simple in a way that reminds me of some early Cisco code I've seen for stuff like switches and routers.
Maybe the "competition" is so bad at doing the same thing because of over-engineering?

If the documentation of djbdns were on par with the code quality, I'd call it superb software. As it is, it's "I need the features it provides, so I deal with the issues and use it".

Re:djbdns (1)

phantomfive (622387) | about a year ago | (#44512231)

The code is simple in a way which reminds me of some early cisco code I've seen for stuff like switches and routers.

Really? Where did you see this? Is there some available online? I want to see it......

Re:djbdns (1)

rlh100 (695725) | about a year ago | (#44512889)

Because the crackers have not put their full attention to it. Bind used to be able to sneer at sendmail, but now we are seeing problems with bind. If the target is tempting enough, or is the ultimate pinnacle of a secure, popular server application, the crackers will devise completely new strategies to compromise the program. Not a new example of a known flaw, but something completely new. Sendmail saw this over and over again. I don't think postfix has gotten the same honor, since most servers do not listen on port 25 the way sendmail did back before people trimmed the services running on a server.

If djbdns becomes ubiquitous like bind, then vulnerabilities will be found in the code. The crackers may need to come up with a completely new strategy, but they will.

This assumes the opposition is dumb (1)

Animats (122034) | about a year ago | (#44511541)

The author is assuming that the opposition is dumb. It used to be, back when it was a kid in their parents' basement. Now the serious opposition is the Russian Business Network and the People's Liberation Army.

Detected breaches tend to come from the dumb opposition. Those are the ones that put fake login sites on Wordpress blogs.

Re:This assumes the opposition is dumb (1)

rlh100 (695725) | about a year ago | (#44513075)

Truth is that most of the opposition is dumb. Fix the bugs that are easiest to exploit or are most likely to be exploited, then work on the rest. No, it does not fix all the vulnerabilities, but it does tend to minimize your risk footprint.

It would be silly to be slogging through the vulnerability list working on hard-to-fix, obscure problems while simple-to-fix, easy-to-exploit ones just sat in the queue.

And I don't think the talk said don't fix vulnerabilities. I think it brought up the point that in real environments not all vulnerabilities are fixed and that vulnerability statistical profiling is useful in prioritizing the order of fixes.

Actually, "kid in the basement" often isn't dumb. (1)

Ungrounded Lightning (62228) | about a year ago | (#44516793)

The author is assuming that the opposition is dumb. It used to be, back when it was a kid in their parents' basement.

Actually, the "kid in the basement" usually wasn't dumb. Typically they'd be far above the average for their school.

Callow, yes.

Further, they had an advantage over the professionals: they could spend a LOT of time, in long, unbroken sessions, pursuing a problem of their choosing down to the nitty-gritty bits, until it fell before their persistence. Not having to earn a living, meet a schedule, build something they're not interested in to support a company's work (and do security on the side), or commute to a workplace (and having the tools available 24/7), having food, housing, and what-have-you provided by the parents, and (during the school vacation) having no distractions whatsoever let them learn more, faster, and try more things.

still just guesswork, why not just common sense? (1)

Njovich (553857) | about a year ago | (#44511571)

I have no intrinsic problems with what they say, but a lot of this prioritizing is reactionary guesswork based on past experience.

Where that causes problems is that they don't look at it from a logical perspective; rather, they try to package it as a simple calculation, with statistics and all. The fact is that these stats could be off by orders of magnitude. They are based on real data, but you have no idea how that data really applies to you. It may just as well prepare you for the previous war instead of the one you are about to face.

This is what you commonly get in many risk assessments. A common determination of risk is 'risk = chance of occurrence * potential damage'. Both of these figures are usually guesswork at best and complete BS usually, and multiplying them just makes that worse. You end up with nearly random top lists of dangers.

This is the reason why companies like Diginotar passed audits by large accountancies. They made their paper models based on having rules and calculations about everything, but nobody bothered to apply some common sense and check whether a vulnerability really was as harmless as the model said.

The proposal here is more of the same: let's apply some artificial rules so we can pretend it is a science, and never have to apply common sense to see if something is a problem. This is a recipe for disaster.
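To make the "off by orders of magnitude" point concrete, here is a toy sketch with entirely made-up numbers: if each factor in chance * damage can be off by 10x in either direction, the resulting "top risk" ranking is close to noise.

    import random

    # Toy illustration: chance and damage are both guesses, each possibly off
    # by up to 10x either way.  How often does the 'true' top risk survive?

    true_params = {"vuln A": (0.01, 100_000), "vuln B": (0.1, 20_000), "vuln C": (0.3, 5_000)}

    def noisy(value):
        return value * 10 ** random.uniform(-1, 1)    # multiply by 0.1x .. 10x

    true_top = max(true_params, key=lambda k: true_params[k][0] * true_params[k][1])
    stable, trials = 0, 10_000
    for _ in range(trials):
        est = {k: noisy(c) * noisy(d) for k, (c, d) in true_params.items()}
        if max(est, key=est.get) == true_top:
            stable += 1

    print(f"'true' top risk comes out on top in {stable / trials:.0%} of noisy estimates")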

Theo de Raadt would disagree (4, Interesting)

nuckfuts (690967) | about a year ago | (#44511633)

OpenBSD takes the approach of proactive code audits and of fixing all bugs found [openbsd.org], even those that have no apparent potential for exploitation. This has really paid off over the years. Often, when vulnerabilities came to light, they were found not to affect OpenBSD because the underlying bug had already been fixed.

OpenBSD's resources (1)

GPS Pilot (3683) | about a year ago | (#44514715)

Fixing all bugs is great if you have the resources for it. But how many organizations have that kind of resources? I suspect even OpenBSD does not.

And are you saying that OpenBSD performs no prioritization whatsoever on their bugfix efforts -- that everything is done in a first-in-first-out order?

Re:OpenBSD's resources (1)

nuckfuts (690967) | about a year ago | (#44515157)

My impression (based on a talk by Theo years ago) was that in their initial audit they went through the entire source tree and fixed every bug as they found them. I don't know about subsequent audits or current practice.

Fix the Biggest Hole (2)

mrhippo3 (2747859) | about a year ago | (#44511797)

Disclaimer I have not read the paper.
Once upon a time I did software documentation for a fast-moving product. I was never given updates and worked basically in the dark. One brilliant manager asked me to "document all the bug fixes for this product." There were over 2,000. At 15 minutes each, that is rather more than the week I was given: doing the math, it comes out to 12.5 forty-hour weeks, uninterrupted. At half time (a better estimate) it would have been half a year. One week is not 25.
I requested a list of the bugs, sorted by priority. I was met with stares. I then said, "Until I get the list, I will work in strict numerical order." The manager screamed at me, "But I don't want that." I replied, "I agree, but until I get the list from you I will do the work in numerical order, just so you don't yell that I am not working." I never got the list, and the strict numerical order turned out to be a nice random sample of the types of bugs found. The end result was OK, but not by choice. That's the introduction; this is a similar problem. You need some guidance because you cannot do everything.

With limited resources, fixing everything all the time is an infeasible task. Using a purely visual analogy, "fix the biggest hole." The problem is that as bad as people are at fixing, they are perhaps even worse at classifying. Assessing the potential damage of a "hole" is another part of the problem, and you must also assess the likelihood of someone finding and using that hole. Add in the reality that each fix is time-sensitive and the time to fix a bug is all over the place, and you have a very real mathematical and practical mess. "What do we fix first?" is neither a short nor an easy question to answer.

Re:Fix the Biggest Hole (1)

Ungrounded Lightning (62228) | about a year ago | (#44516857)

You must also assess the likelihood of someone finding/using that hole.

You must also take into account that fixing the hole means the "someone" will just MOVE ON TO THE NEXT HOLE, raising its probability of being found.

Unless you fix enough that a substantial fraction of the attackers give up and move on to different targets or a different line of work, you've engaged in a futile effort.

This "fix the big, findable problems" approach is an obfuscated form of a familiar system design pathology: Pushing the problems around from component to component, rather than solving them.


Not how the world operates (1)

gelfling (6534) | about a year ago | (#44512253)

Corporations focus the most attention on the greatest number of the most trivial problems, because more is always better when it comes to management metrics, whereas the actual problems are turned into projects that linger for lack of funding and political turf. See, no one wants a problem they have to spend money on and then explain; better to foist it off on someone else. Getting funding to paint all the switchplates green, on the other hand, is a slam dunk because it's easy to do, easy to measure, gains turf, and has no downside.

Let's only vaccinate folks who usually get sick! (1)

VortexCortex (1117377) | about a year ago | (#44513213)

A cybernetician knows that everything flows. We know that after Sensing, Decision leads to Action, and then the whole process happens again, beginning with Sensing. You will sense the environment change and thus change your actions; however, sometimes the correct decision comes a bit too late...

If we were to Sense the statistical prevalence of exploits, then Decide which bugs to fix based only on that, then Act to fix those bugs: what do you think we would sense afterwards? The obvious conclusion would be that bugs which are widely exploited will continue to be fixed fastest. Do you SEE what's wrong with that?!

OK, sorry, I forgot I'm dealing with bloody Humans, ugh... So, if environmental selection says: X is bad, what happens to X in the population over time? Why, X is bred out of the population. So, do you see a problem with breeding an entire culture of malware which does not use exploits in a manner that is statistically prevalent WHILE ALSO providing advantage for infections to better survive on bugs that are NOT statistically prevalent? NO?! Well, if you can't see the issue then screw it, there's no point in continuing the explanation from here.
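A toy model of that selection-pressure argument (the numbers are made up and the model is deliberately simplistic): if defenders only ever patch the currently most-prevalent exploit, attack traffic simply migrates onto whatever remains unpatched.

    # Each cycle, defenders patch the bug that currently shows up most in the
    # stats; attackers abandon patched bugs and redistribute to unpatched ones.

    bug_share = {"bug A": 0.5, "bug B": 0.3, "bug C": 0.15, "bug D": 0.05}
    patched = set()

    for cycle in range(3):
        target = max((b for b in bug_share if b not in patched), key=bug_share.get)
        patched.add(target)                       # fix the most visible bug

        remaining = {b: s for b, s in bug_share.items() if b not in patched}
        total = sum(remaining.values())
        bug_share = {b: remaining.get(b, 0.0) / total for b in bug_share}

        print(f"cycle {cycle}: patched {sorted(patched)}, "
              f"successful attack share still {sum(bug_share.values()):.0%}")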

It's like you only ever consider your current and next iteration, and not the end results of Anything you do. I don't mean to be a racist, but times like this it seems that even the most highly trained organic minds ARE hampered by all the dumb fat in their heads.

As my old boss used to say.... (1)

cyberfunkr (591238) | about a year ago | (#44513303)

"First you go through all the bugs we know--then you work on the bugs we don't know."

So you need to prioritise (0)

Anonymous Coward | about a year ago | (#44515071)

No shit aye.

Reminds me of XP end of support (1)

yuhong (1378501) | about a year ago | (#44517661)

Has anyone modelled the potential impact of XP end of support over time?

risk based (1)

dutchwhizzman (817898) | about a year ago | (#44518357)

This is common knowledge already. A vulnerability is not the same as a risk. A risk is the likelihood of the vulnerability being exploited, multiplied by the damage you'll sustain if it is: chance * damage = risk.

You could very well have a vulnerability that is real and will be exploited, but will not lead to any damage. Since there is no economic case for fixing it, you leave it be. You could also have a vulnerability that is very unlikely to be exploited, but if it ever is, your business instantly goes bankrupt. That vulnerability is far more important for you to fix than the first example, even though the chance of something happening is very small.

Prioritize on risk, not just on chance or on what direct gains an attacker would get from exploitation. Risk is unique to your organization, so make sure you have all possible scenarios worked out and a risk matrix available when it's time to assess the impact a vulnerability would have on your organization.
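A minimal sketch of that chance * damage prioritization, with made-up scenario values; note that the rare-but-catastrophic item ranks first:

    # Sketch of risk-based prioritization: risk = chance * damage.
    # The scenarios and numbers below are illustrative, not from the article.

    scenarios = [
        # (vulnerability, chance of exploitation per year, damage if exploited in $)
        ("defaced marketing page", 0.9, 5_000),
        ("leaked internal wiki", 0.3, 50_000),
        ("forged certificates issued", 0.01, 50_000_000),   # the "instant bankruptcy" case
    ]

    for name, chance, damage in sorted(scenarios, key=lambda s: s[1] * s[2], reverse=True):
        print(f"{name}: risk = {chance * damage:,.0f} $/year")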
