
Web App Scanners Miss Half of Vulnerabilities

kdawson posted more than 4 years ago | from the hey-we-caught-half-too dept.

Security

seek3r sends news of a recent test of six web application security scanning products, in which the scanners missed an average of 49% of the vulnerabilities known to be on the test sites. Here is a PDF of the report. The irony is that the test pitted each scanner against the public test files of all the scanners. This reader adds, "Is it any wonder that being PCI compliant is meaningless from a security point of view? You can perform a Web app scan, check the box on your PCI audit, and still have the security posture of Swiss cheese on your Web app!" "NTOSpider found over twice as many vulnerabilities as the average competitor, with a 94% accuracy rating; Hailstorm had the second-best rating at 62%, but only after extensive training by an expert. AppScan had the second-best 'Point and Shoot' rating at 55%, and the rest averaged 39%."


68 comments

Not a surprise to me. (4, Insightful)

ls671 (1122017) | more than 4 years ago | (#31047860)

> Web App Scanners Miss Half of Vulnerabilities

Well this is no surprise to me. Designing/testing secure systems is much more than scanning for vulnerabilities.

Scanning is only one of the tools to use to accomplish the goal.

Web apps make it so easy to be insecure. (3, Insightful)

Anonymous Coward | more than 4 years ago | (#31048124)

The web was clearly never designed to do even a fraction of what it is expected to do today. Now, neither were computers. But at least when it comes to hardware, we're willing to throw everything away and start from scratch. We don't seem able to do that with the web.

Basically everything about the web today is just one dirty hack upon another bunch of dirty hacks. SSL and TLS are a good example. JavaScript is another. Everything built on top of JavaScript, such as AJAX, is a huge hack. So it's no wonder that it's so damn easy to write insecure web apps.

Furthermore, it doesn't help that the languages and frameworks commonly used to develop web apps are full of holes themselves. PHP is a very good example of this. Even in the hands of a talented and very experienced developer, it's damn near impossible to develop a site that isn't flawed in some obvious way.

We need to throw it all away. Companies like IBM, Sun, SGI and HP used to routinely do this with their computer hardware. We now need to extend that practice to our software systems. We need to start again. But will we? Probably not, and that's quite unfortunate.

Re:Web apps make it so easy to be insecure. (1)

ls671 (1122017) | more than 4 years ago | (#31048178)

Slow down, man! ;-) One step at a time. I suggest that you start with this:

http://news.slashdot.org/comments.pl?sid=1540138&cid=31047988 [slashdot.org]

Re:Web apps make it so easy to be insecure. (-1, Troll)

Anonymous Coward | more than 4 years ago | (#31048500)

A lot of PHP "developers" won't admit it, but Microsoft's .NET platform actually does a far better job of allowing for the development of secure web applications than PHP ever will.

Even inexperienced ADO.NET developers know enough to use parameterized queries. But I'm not fucking kidding you, I still see PHP code even today where the SQL is generated via string concatenation, without properly escaping input from the user. Whenever I see this sort of code, I immediately send a written letter to whoever owns the code, warning them of the hazard.
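
For illustration, here is a minimal ADO.NET sketch of that difference; the table, column, and parameter names are made up:

    using System.Data.SqlClient;

    static class UserLookup
    {
        // Vulnerable: user input is concatenated straight into the SQL text,
        // so input like  ' OR '1'='1  changes the meaning of the query.
        public static string BuildUnsafeSql(string userName)
        {
            return "SELECT Id FROM Users WHERE Name = '" + userName + "'";
        }

        // Parameterized: the value travels as data, never as SQL text.
        public static object LookUpUserId(string connectionString, string userName)
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand("SELECT Id FROM Users WHERE Name = @name", conn))
            {
                cmd.Parameters.AddWithValue("@name", userName);
                conn.Open();
                return cmd.ExecuteScalar(); // null if no row matched
            }
        }
    }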

ASP.NET also helps prevent security exploits. By default, it prevents the inputting of HTML via posted fields or query string parameters, which helps prevent several common attacks.

Even after well over a decade, PHP still makes it too damn easy to write code that is full of security flaws.

Re:Web apps make it so easy to be insecure. (2, Funny)

madddddddddd (1710534) | more than 4 years ago | (#31048546)

Even after well over a decade, PHP still makes it too damn easy to write code.

fixed that for you....

Re:Web apps make it so easy to be insecure. (3, Insightful)

Anonymous Coward | more than 4 years ago | (#31049188)

Even inexperienced ADO.NET developers know enough to use parameterized queries. But I'm not fucking kidding you, I still see PHP code even today where the SQL is generated via string concatenation, without properly escaping input from the user.

It gets really annoying listening to people bash php when the primary attack vector is "programmers" who don't know how to code secure applications. SQL queries written via concatenation are 100%, absolutely 100%, the fault of shitty coders, and have nothing, *nothing*, to do with the language. Sure, the language could force the use of parameterized queries, but that wouldn't prevent ignorant coders from doing other horrible things. The only reason php "appears" to be really bad is because all the people who *are not server-side coders*, who couldn't code in another language if they tried, wind up using php because it's the easiest language to play with and get some code that apparently "does the job".

The reason most other languages don't look so bad is that a larger percentage of the coders who use them are more knowledgeable to begin with, and come from a background of writing backend code. It could simply be that 80% of coders who use ADO.NET are actually "programmers", whereas only 30% of php coders fall into that category. A huge percentage of people who use php aren't programmers in any real sense of the word - they just know a little bit of html and then start hacking (in the worst sense of the word) some php code together to shove things into a database.

You rarely find good programmers who know how to properly use php. But that's not because there aren't a lot of good php coders - it's because php is *the* language that attracts all the people who don't have any business writing server-side code. So simply put, php has a much higher ratio of bad coders vs good coders than any other language.

Re:Web apps make it so easy to be insecure. (0)

Anonymous Coward | more than 4 years ago | (#31049650)

The only reason php "appears" to be really bad is because all the people who *are not server-side coders*, who couldn't code in another language if they tried, wind up using php because it's the easiest language to play with and get some code that apparently "does the job".

So you've just proved the GP's claim. It is indeed PHP that allows for these people to write horribly insecure code. If it weren't for PHP, these people wouldn't be creating the messes they make.

Re:Web apps make it so easy to be insecure. (1)

Zapotek (1032314) | more than 4 years ago | (#31051216)

Oh for fuck's sake, man, is that what you understood from the guy's reply?
I too am sick of PHP getting bashed all the time...And yes I'm a web developer who writes in PHP...
It's easy to write secure code and get the job done and I prefer it because it reminds me of C (my second language of choice).

Yes, a lot of people write bad code in PHP, and I've seen my fair share of that... also a lot of people write freaking dangerous code in C.
Should we bash C for making buffer overflows such an easy pitfall?
Because I've never seen a developer make that argument...

What about Java?
A lot of universities teach only Java now; of course, someone who only knows Java isn't a real developer, is he? So let's ban Java.

Oh! oh! And guns...people shoot each other every day...let's ban guns also...and cars and those pesky doctors with their mistakes and bridges and tall buildings..after all they make it easy for people who want to kill themselves to do so...

Here, people... Reductio ad absurdum at its finest...

Personally I have no problem with bad coders as long as they are not in my team...The overabundance of bad coders means that I can negotiate a better salary...
It's just a matter of perspective... like, say... I don't have a problem with gay men... it just means that there are more women left for me...
Everybody just needs some perspective... and basic freaking reasoning skills....

Re:Web apps make it so easy to be insecure. (-1, Troll)

Anonymous Coward | more than 4 years ago | (#31051860)

I too am sick of PHP getting bashed all the time...And yes I'm a web developer who writes in PHP...

I got that far, then stopped reading.

First, you are not a "web developer who writes PHP". If you use PHP, you're nothing but a hack.

Second, by claiming to be a "developer who writes PHP", you've immediately destroyed whatever small degree of legitimacy you and your arguments (which I didn't bother to read) might have had.

Cry "ad hominem" all you want. The fact remains that you openly admit to using PHP. That basically makes you as smart as a dog turd.

Re:Web apps make it so easy to be insecure. (1)

DarkProphet (114727) | more than 4 years ago | (#31051346)

Meh, well, including myself, most ADO.NET programmers use parameterized queries because that's what Visual Studio spoon-feeds you. Nothing wrong with that. In PHP I'll concat to my heart's content in a text editor. Does PHP have an IDE that provides the same functionality? (Really, I don't know.)

In the hands of the capable, either is a worthwhile language. What really gets you is when you don't first sanitize your inputs. You can screw that up in any language :-)

Re:Web apps make it so easy to be insecure. (0)

Anonymous Coward | more than 4 years ago | (#31053016)

So you're saying that you don't actually know how to use parametrised queries, you can only do it when you have an IDE to babysit you?

Re:The way we learn languages is the problem (3, Interesting)

b4dc0d3r (1268512) | more than 4 years ago | (#31053272)

I'm going to take issue with this and say the problem is with the internet itself, RAD applications, businesses, and self-taught coders. Allow me to explain.

Half of the .NET code I write is copy/pasted from some other source, because the entire CLR is too complicated for a single person to understand. If I want to do a lookup table, there are a dozen ways to accomplish it, just using the objects provided by the runtime. I don't care how fast it is, unless it's called every page view, so I just google "C# lookup" and get piles of examples. Copy/paste, I'm done. Doesn't matter if it's from MSDN or a Microsoft blog or a random coder blog or wherever else, the code looks good and it works. I have no idea if the example failed to initialize some critical component.

My employer doesn't want to pay me to read, I am supposed to be providing output they can sell to clients/customers. So I don't get a lot of time set aside for training. The way I learned .NET was our tech lead opened up a team meeting and said "I think .NET is the way of the future, is there anyone opposed to going this way?" And the only real objection was it will take longer to produce the next version of our deliverables. Management was fine with that, so we took the leap.

We didn't sit down in a classroom and learn how things are supposed to be done. We didn't get a copy of something like Petzold's Windows bible, or Prosise MFC bible where it goes into depth about what you're doing and what things mean when the IDE puts junk in places for you. Visual Studio 2003 and above make it very easy for you to have no idea what you're doing, and still accomplish something. A quick google search can fill in all of the gaps so you have something functional.

The same with 'Learn X in 24 hours' or 'X for dummies', lots of code samples exclude error checking/handling. Oh yes, MSDN is full of these examples. Sometimes they suggest "error handling has been omitted for clarity", while sometimes it's just assumed. Other times the author has no idea they should be handling errors because it works for them.

So you have piles of coders learning on-the-fly, either because they can't afford the big book or because they have deadlines to meet. Copy/paste something without taking the time to fully understand what's happening, and you get potential problems. In short, easy access to code snippets makes you think you're able to do lots of cool stuff in a new language. Unless you take the time to understand everything you're running, every line of code, you're going to have problems at some point.

Why do you think people still make mistakes like putting form variables directly into SQL? The code snippets are out there, either in the corporate source control or on random blogs. Copy, paste, pwned.

An example, for those of you who wish to tl;dr me you can stop now.

I used MyGeneration templates to come up with database calls for our SQL database, which used Data Access Blocks or some kind of MS best practice to write functions which called stored procedures, so you could essentially call stored procs exactly like any other function. It generates a call for every stored proc in the database, so you can make fundamental changes to the data structure, re-generate the data access library in a few seconds, and then fix the few calls where the parameters changed.

Very handy, except that the 'execute non-query' template had a bug in it, where the data connection never closed. We never had any problems with this app in production for 3 years. Suddenly, in testing, we got a pooled-connections-exceeded timeout. Turns out the bug only shows up when the call happens on most page views, as it did when logging user visits in this case. Other non-query calls happened infrequently enough that they never exceeded the 100-connection default limit, live and in production for 3 years.

Our tech lead found MyGeneration, recommended it, and we used it ever since. Not until last month did I understand how data access happens, and how to use "using" blocks to protect data connections, or add extra try/catch/finally code to ensure closure. All of this despite learning the same lesson in ASP 3.0, where we created our own data access class so this never happened again.
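
In case anyone wants the concrete shape of that fix: wrapping the connection and command in using blocks disposes (and so closes/returns to the pool) the connection on every code path, including exceptions. A minimal sketch; the stored procedure name is made up:

    using System.Data;
    using System.Data.SqlClient;

    static class VisitLogger
    {
        public static void LogVisit(string connectionString, int userId)
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand("usp_LogVisit", conn))   // hypothetical stored proc
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.AddWithValue("@userId", userId);
                conn.Open();
                cmd.ExecuteNonQuery();
            } // Dispose() runs here no matter what happened above,
              // which is exactly what the buggy generated template never did.
        }
    }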

Most times I blame Microsoft, because it's convenient. But the fundamental problem is that so few people teach programming. They document objects and what they do - not when to use one or another, or how to properly initialize/deconstruct it. The typical learning pattern is, I need to access data. Google search. Find code snippet. Now I need to learn more about the object because this sample doesn't do everything I need it to do. Look at the "documentation". Find a method that seems to do what you want to do, and update your code to use it.

I haven't found a good programming reference for .NET yet that goes beyond "here are some objects, here's what they do". Most references document everything about what an object does, but not how to use it. And that's where the system fails. Difficulty: MSDN is getting *less* informational, and more like an object reference.

I assume the same problem happens in other languages, because I've found the same thing in C#, VB.NET, JavaScript (IE and Firefox and agnostic variants), JSON, AJAX, SQL, CSS. I learned C++ from a book by Rob McGregor, which started out very informative and quickly devolved into "Here's some STL headers you might use, the objects contained in them, and a few sentences about each method". One sample program per header I think. Obviously he ran out of time and space.

We need to convince Management that you can't simply hire someone who knows everything about the language. You can do that, but not simply, and definitely not cheaply. We have to set aside real time for real training. Hell, I'm currently re-reading my Assembly Language Step By Step book on the toilet to refresh my understanding of how stuff really works. I used to read my C++ reference in place of bedtime fiction. I read Petzold and Prosise in the car if someone else was driving.

But I grew up, and I don't have time for that. When I get paid to learn, I'll learn. Until then it's cobble things and pray.

Re:The way we learn languages is the problem (0)

Anonymous Coward | more than 4 years ago | (#31055914)

You flat out wouldn't get a job where I work, and if you did, you would be let go within the day. It is possible to hire intelligent, knowledgeable .net coders who know how to use the framework properly. Using statements are *basic* and entirely required knowledge for any self-respecting .net developer. You are plain and simply the type of coder that PHP so greatly suffers from as a whole (and from which it derives its poor reputation).

Re:The way we learn languages is the problem (1)

b4dc0d3r (1268512) | more than 4 years ago | (#31114734)

I wouldn't get a job there because they flat out wouldn't pay what I'm worth. You want someone who understands .NET completely, you're paying a lot more than for someone who "can code" in .NET. It is possible, but expensive, and that's my point.

The expense of learning has to be paid by one of:
The employee, raising the minimum salary the employee will accept
The new employer, raising expenses in the form of providing time set aside for training
The previous employer, by allowing the employee to fit in training, making the employee more valuable.

Someone has to pay for training. Ultimately, it will be the hiring company in the form of training materials, time, or pay for experience. Typical .NET or PHP coding-type jobs pay in the range of $25-$60k/year, which is not enough to make me learn the entire .NET CLR on my own time. Catch-22, what do you do? In an economic downturn you let the expensive people go, and get the lower-paid people to make up for it.

"Everything will now be done in .NET, but we're not allowing you training time. Here's the schedule we promised to the client, so fit your training in around that." That's how it works, unless you're into spending money on people.

Re:Web apps make it so easy to be insecure. (2, Informative)

francium de neobie (590783) | more than 4 years ago | (#31049328)

Basically everything about the web today is just one dirty hack upon another bunch of dirty hacks. SSL and TLS are a good example. JavaScript is another. Everything built on top of JavaScript, such as AJAX, is a huge hack. So it's no wonder that it's so damn easy to write insecure web apps.

<Sarcasm>
Basically everything about the Internet today is just one dirty hack upon another bunch of dirty hacks. Ethernet and IP are good examples. TCP is another. Everything that does not limit itself inside a single OSI layer, such as PPPoE, all kinds of VPN and NAT, are huge hacks. So it's no wonder it's so damn easy to exploit remote machines over the Internet.

...

We need to throw it all away. Everyone routinely does this with their plastic bags. We now need to extend that practice to all Internet protocols. We need to start again. But will we? Probably not, and that's quite unfortunate.
</Sarcasm>

No. The reality for our current computing technology is, for anything non-trivial, there's most probably no complete mathematical proof that the system is perfectly secure, and thus there's most probably at least one exploit to break any non-trivial system. Even the most basic of protocols like TCP have been shown to have numerous flaws over the years - most of those specific to implementation details or things that aren't clearly defined in the protocol itself. Even if your theory and system design is perfect, you can still have plenty of errors in the implementation. If you build a system that has 1 million lines of code, it only takes one mistaken line for someone to exploit it. That one mistaken line can happen in any programming language and any computing environment, no matter how rigorous it is. All it takes is 3 seconds of carelessness in the few years you took to implement the system.

Making the system architecture simpler can and does reduce the number of vulnerabilities - although it does not eliminate them. However, throwing it all away is usually not a good idea, unless the current solution is shown to be totally unworkable. The thing with throwing away and starting from scratch is, you're throwing away all the previous fixes embedded in the previous system, and humans are remarkably bad at making sure ALL the previous mistakes do not happen in the redesigned system. Remember, from the tiniest integer being read from the database, to the most grandiose thing you see on the UI, it only takes one single mistake in the billions of operations between your client and the server for a security vulnerability to happen. If you think you can write code for 3 years in a professional capacity and not a single mistake, not one single typo, ever happens in your code - fine. But I don't think I've ever seen such a person before.

Re:Web apps make it so easy to be insecure. (4, Interesting)

RzUpAnmsCwrds (262647) | more than 4 years ago | (#31050552)

But at least when it comes to hardware, we're willing to throw everything away and start from scratch.

Is that why I'm using a pipelined, out-of-order implementation of a 64-bit extension to a 32-bit extension of a 16-bit ISA?

I mean, shit, my Core 2 Duo supports everything from 128-bit vector instructions to segmented addressing. I have USB and PCI Express busses on my ThinkPad, but also CardBus/PCMCIA and a modem. I have Gigabit Ethernet but it is still compatible with 10Base-T. I have a DVI port (through the dock) but also a VGA port. My DVD-RW will read CDs which are 30 years old.

Re:Web apps make it so easy to be insecure. (1)

phy_si_kal (729421) | more than 4 years ago | (#31053342)

There have been some approaches to redoing web programming from scratch. I'm thinking about OPA at http://mlstate.com/ [mlstate.com] in particular.

Every system is different (1)

MichaelSmith (789609) | more than 4 years ago | (#31047880)

Take buffer overflows for example. You can build a generic tool to create buffer overflows by feeding in long messages but there is no generic way to exploit the overflow, because every system arranges its data differently.
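
That kind of "generic tool" can be as crude as a loop that throws ever-longer payloads at a port and watches what happens; a rough sketch, with a placeholder host, port, and payload format:

    using System;
    using System.Net.Sockets;
    using System.Text;

    class NaiveLengthFuzzer
    {
        static void Main()
        {
            const string host = "192.0.2.10";  // placeholder target
            const int port = 8080;             // placeholder service port

            // Send payloads of increasing length. A crash or hang only shows that the
            // input handling is broken; turning that into a working exploit still
            // depends on how that particular system lays out its data.
            for (int length = 64; length <= 65536; length *= 2)
            {
                byte[] payload = Encoding.ASCII.GetBytes(new string('A', length) + "\r\n");
                try
                {
                    using (var client = new TcpClient(host, port))
                    using (var stream = client.GetStream())
                    {
                        stream.Write(payload, 0, payload.Length);
                    }
                    Console.WriteLine(length + " bytes: accepted");
                }
                catch (SocketException ex)
                {
                    Console.WriteLine(length + " bytes: connection failed (" + ex.Message + ")");
                    break;
                }
            }
        }
    }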

BTW, there is a typo in the summary: "pitted eah scanner".

Re:Every system is different (2, Funny)

truthsearch (249536) | more than 4 years ago | (#31047924)

When I first quickly ran through the summary I read that as "I pity the scanner". After re-reading the summary it seems appropriate.

Re:Every system is different (1)

John Hasler (414242) | more than 4 years ago | (#31049612)

> I read that as "I pity the scanner".

Because they live in vain?

Re:Every system is different (1)

gandhi_2 (1108023) | more than 4 years ago | (#31047966)

So we should probably be thankful that "web app scanners catch half of vulnerabilities".

Re:Every system is different (1)

ls671 (1122017) | more than 4 years ago | (#31047988)

> buffer overflows for example

What do you mean? The platform we use checks for array/buffer bounds on any access to them. We also use a persistence tool that is pretty good at preventing SQL injections.

It sure beats scanning on the efficiency level ;-))

Re:Every system is different (1)

Arancaytar (966377) | more than 4 years ago | (#31048612)

SQL injection is only one half of web security, though. The other big threat is Cross-Site Scripting/Request Forgery.

Output filtering is a lot harder to keep track of than database queries (even with a good templating system), since data must be filtered in different ways: some content may contain any markup, including document-level elements and script/stylesheet inclusion; other data may only contain local formatting markup (emphasis, etc.); and still other content has to be escaped entirely...
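
A small illustration of that "different ways" point, in .NET terms (the hardest case, allowing a limited subset of markup, needs a whitelist HTML sanitizer rather than an encoder, so it is only noted in a comment):

    using System;
    using System.Net;

    class OutputEncodingDemo
    {
        static void Main()
        {
            string untrusted = "<script>alert('xss')</script> & \"friends\"";

            // HTML body context: escape everything.
            Console.WriteLine(WebUtility.HtmlEncode(untrusted));

            // URL / query-string context: a different encoder entirely.
            Console.WriteLine(Uri.EscapeDataString(untrusted));

            // "May contain emphasis but not scripts" is a third problem again:
            // that needs a whitelist HTML sanitizer, not a simple encoder.
        }
    }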

Re:Every system is different (-1, Troll)

Anonymous Coward | more than 4 years ago | (#31049016)

Are you fucking serious? No, really. Are you fucking kidding me? ARE YOU?

You truly think "web security" consists of nothing more than avoiding SQL and XSS injection attacks? ARE YOU FUCKING KIDDING ME?

What about SECURING YOUR GODDAMN WEB SERVER(S) ? HUH? What about that? Is that the third "half" that you forgot? And I'm not just talking about the HTTP daemon itself. I'm talking about whatever operating system you're running on those servers. I'm talking about whatever you use to control remote access and physical access to those servers. Get my drift? GET IT?

What about your web application itself? HAVE YOU THOUGHT OF THAT? Let me guess, you got it developed in India, didn't you? Did you do a code review? DID YOU DO A FUCKING CODE REVIEW OF THE WEB APP YOU GOT BUILT IN INDIA? DID YOU? No, you didn't. That's probably your biggest fucking security hole. How do you know that they're not siphoning off credit card information, or other private information belonging to your customers and web site users? YOU DON'T, BECAUSE YOU DIDN'T DO A CODE REVIEW!

Are you using PHP? I said, ARE YOU USING PHP? If you are, then your web site's security is fucked to Hell and back.

Have you ensured that you don't have anyone intercepting your network traffic? HAVE YOU? WHAT NETWORK AUDITING HAVE YOU DONE? Yeah, that's right. NONE.

Pathetic little worms like you sicken me. Your complete shortsightedness and lack of knowledge about security leads to all of the problems we have today.

Re:Every system is different (-1, Offtopic)

Anonymous Coward | more than 4 years ago | (#31050808)

Have you tasted chocolate or strawberry smoothies? They're awesome. Ponies are awesome. I have a pink pony. It's cute. He is my best friend.

The cat and mouse game. (2, Interesting)

nuckfuts (690967) | more than 4 years ago | (#31047996)

No vulnerability scanner will ever detect 100% of the vulnerabilities possible. They're still very useful, however, because no website is going to have 100% of all the vulnerabilities possible.

Think of it another way. If your website has only 1 vulnerability and the scanner detects it, then it's 100% effective.

If your website has only 1 vulnerability and no scanner detects it, score 1 for the bad guys. The cat and mouse game continues.

Re:The cat and mouse game. (1)

Dumnezeu (1673634) | more than 4 years ago | (#31048080)

If your website has only 1 vulnerability and no scanner detects it, then it's 100% ineffective. The cat and mouse game continues.

FTFY

Re:The cat and mouse game. (1)

francium de neobie (590783) | more than 4 years ago | (#31049466)

OK. I'll bite.

Let's say 50 years later RSA is found to be completely insecure. So mathematically anything based on RSA, depending on RSA or containing RSA has at least 1 vulnerability. Yet today it's impossible for any security scanner to tell whether somebody's SSL has that particular vulnerability or not. But as it stands, as long as that vulnerability is not known by anyone, nobody can exploit it. So you can't say the security scanner is n% ineffective because it doesn't detect a vulnerability.

Besides, for any non-trivial system, it's unlikely that the total number of vulnerabilities can be exactly determined in any practical timespan. Still-unsolved theoretical problems, like the hardness of integer factorization, are one good example. Then there are always surprises in things outside of your control - your code may be theoretically perfect, but your CPU has one vulnerability that becomes exploitable when you pass a certain -O flag to gcc. And it turns out there's a very devious way to remotely exploit that from your code. And then human mistakes come in so many strange forms that I don't think you can mathematically enumerate them completely.

Re:The cat and mouse game. (2, Insightful)

ls671 (1122017) | more than 4 years ago | (#31048092)

> If your website has only 1 vulnerability and no scanner detects, score 1 for the bad guys.

except that the "bad guys" mostly use scanners to discover holes ;-))

So, interestingly enough, holes detectable with scanners are exploited more often.

Re:The cat and mouse game. (3, Informative)

JWSmythe (446288) | more than 4 years ago | (#31048166)

    From what I recall doing this for sites that handled credit card processing (me being on the tested side), those tests are pretty much worthless.

    If you had 1 vulnerability, you'd get pages of false positives or irrelevant information. I recall a particular 10-page report we got back that we were advised to fix or we'd fail on. The only item to fix was that the web server version was just one release behind current. The changelog indicated that it was to fix a vulnerability on a different platform, so it was completely unrelated to us. We'd frequently have points marked off because we couldn't be pinged or portscanned. I'd have to open the firewall up to them, just to be scanned. Our security would identify an attempted port scan as a hostile action, and react by dropping all traffic from them. Sorry my security stopped your scanning, but that's the intention of it. {sigh}

    After opening the firewall to them, and changing the version number on the web server (there were reasons we couldn't do the trivial upgrade), we passed with flying colors.

    For them, they were interested in the version numbers handed off by the server, not what they actually were. For example, if it was Apache, we could have it report Apache version 9.9.9, and that would have made us pass on that part without fail for years.

Re:The cat and mouse game. (1)

merreborn (853723) | more than 4 years ago | (#31053592)

The only item to fix was the version of the web server was just one behind current. The changelog indicated that it was to fix a vulnerability on a different platform, so it was completely unrelated to us...

After opening the firewall to them, and changing the version number on the web server (there were reasons we couldn't do the trivial upgrade), we passed with flying colors.

        For them, they were interested in the version numbers handed off by the server, not what they actually were. For example, if it was Apache, we could have it report Apache version 9.9.9, and that would have made us pass on that part without fail for years.

For anyone who isn't familiar with this stuff, there are reasons beyond those stated by the OP that make this "apache version number must be current" policy moronic.

There are the obvious, stated ones:
  1. You can just change the version number apache reports
  2. The latest version may not fix anything meaningful
  3. The latest version may actually introduce problems

Another, less obvious reason this is stupid:
  4. Distributions like Debian and Redhat release a single version of apache, and then continue to use it for months or years, backporting security patches ASAP. So your version number may *say* you're 12 months behind on patches, but in reality, you're only 12 months behind on functional changes; you've basically got all the bleeding-edge security patches, assuming you're keeping current with the distro-provided packages.

And of course, if you build from source, you may be doing the exact same sort of backporting yourself.

Re:The cat and mouse game. (1)

JWSmythe (446288) | more than 4 years ago | (#31054348)

    You're very correct on that.

    For a while, I was doing that with a few things, including Apache and the Linux kernel. There were pieces I needed that didn't progress, so I handled my own backporting of various things. That was a long time ago, and those problems were resolved in more current versions, so it hasn't been necessary for years.

    But, if you're using say mod_ssl to handle your SSL on Apache, and you're still in the 1.3.x tree, you'd now be scored down. Apache just moved the 1.3.x tree to 1.3.42 [apache.org] (which was mentioned on here recently), but mod_ssl [modssl.org] only has their patch for 1.3.41. I haven't checked to see if they're compatible yet, but for the sake of argument, let's say that it isn't. If I had these in production, and I didn't upgrade to 1.3.42, I'd now score badly, even if I applied the security patch, which is what the difference between 1.3.41 and 1.3.42 is. All I'd have to do is ask it to say it's Apache 1.3.42, or even say something stupid like IIS 7.5, if I really wanted to throw off any attackers. Sometimes it's better to announce the wrong thing, just to distract potential attackers. By announcing IIS, they'd try their suite of Microsoft attacks, rather than Linux attacks.

    Oh, and god forbid you were to do a little honeypot action on your production machines. If you were to put a daemon listening to port 23 (Telnet), to automatically block potential intruders (Connected to port 23? Set an iptables rule immediately), they'd see that port 23 was open, and pitch a fit. That's actually a good security idea, although I don't see it used much in the real world.
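
    A sketch of how small that honeypot can be (assumes Linux, root, and that shelling out to iptables is acceptable; a real deployment would also want logging and some rate limiting):

        using System;
        using System.Diagnostics;
        using System.Net;
        using System.Net.Sockets;

        class TelnetTarpit
        {
            static void Main()
            {
                // Nothing legitimate should ever connect to port 23 here,
                // so any connection at all is treated as hostile.
                var listener = new TcpListener(IPAddress.Any, 23);
                listener.Start();

                while (true)
                {
                    using (var client = listener.AcceptTcpClient())
                    {
                        var remote = ((IPEndPoint)client.Client.RemoteEndPoint).Address;
                        Console.WriteLine("Blocking " + remote);

                        // Drop all further traffic from this address.
                        var psi = new ProcessStartInfo("iptables", "-I INPUT -s " + remote + " -j DROP")
                        {
                            UseShellExecute = false
                        };
                        using (var p = Process.Start(psi))
                        {
                            p.WaitForExit();
                        }
                    }
                }
            }
        }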

Re:The cat and mouse game. (3, Insightful)

ircmaxell (1117387) | more than 4 years ago | (#31048302)

To tell you the truth, the percentage of actual vulnerabilities that it finds means nothing to me. What matters to me is the rate of false positives. Even better would be the number of actual vulnerabilities found divided by the number of false issues found.

I had a chance to see the outputs of a few of these scanners run against a particular open source content management system. Not one of them found an actual, confirmable vulnerability. But one found over 9,000 false positives. All found a fair number of false positives. Even if it could find real vulnerabilities, digging through all those false positives to find a real one is a really daunting task.

What I find works better than these scanners is hand audits by someone who knows what they are looking for. It's most definitely an intensive task, but let me ask you: what's a better use of time, an expert doing a hand audit (who may find vulnerabilities that the scanner didn't), or the expert digging through all 9,000 of those "results" trying to figure out which, if any, are real? I assert that the best use is going to be a combination of the two. Just don't put your faith in either one...

Re:The cat and mouse game. (1)

nuckfuts (690967) | more than 4 years ago | (#31048916)

Any scanner that returned over 9,000 false positives is clearly a joke. So you ignore it and run a better scanner.

I've dealt with scans for credit card purposes and for the most part I thought they were pretty good. Yes, there was the occasional false positive. So you provide some explanatory information and then you get passed. False positives are preferable to false negatives.

It's simply unrealistic to expect these scans to yield perfect results. Despite that, I believe they're useful.

Whitehat Security (1)

MandoSKippy (708601) | more than 4 years ago | (#31048026)

I noticed Whitehat Security Declined to participate. I wonder why that is? We just purchased there service, I like there concept, especially as they sold it, we haven't gotten into full use of the product yet, but I can tell you some of the execution of there service could be improved. There seems to be a little bit of a disconnect between the sales force and the operations team. I would have been very interested to see how they fare in a test like this.

Re:Whitehat Security (4, Funny)

ls671 (1122017) | more than 4 years ago | (#31048102)

> There seems to be a little bit of a disconnect between the sales force and the operations team

No kidding, I can't believe that such a company exists ;-)))

Re:Whitehat Security (1)

icepick72 (834363) | more than 4 years ago | (#31049004)

"their", "Their", "THEIR"!
Sorry, my annoyance level kept rising each time I saw it. Had to scream it in CAPS.

Re:Whitehat Security (0)

ColdWetDog (752185) | more than 4 years ago | (#31051232)

It's best if you loose your anger.

Re:Whitehat Security (1)

yahwotqa (817672) | more than 4 years ago | (#31059384)

Its best if you loose you're anger.

PCI compliant is meaningless? (1)

CVD1979 (718352) | more than 4 years ago | (#31048056)

"Is it any wonder that being PCI compliant is meaningless from a security point of view?"

Where's that quote from? I can't find it on either the page or in the PDF...

Re:PCI compliant is meaningless? (2, Informative)

julesh (229690) | more than 4 years ago | (#31048258)

Where's that quote from? I can't find it on either the page or in the PDF...

It's the submitter's opinion. And it's quite accurate: no such standardized set of requirements can guarantee security, because security is much more complicated than the simple kinds of rules that you can include in them. PCI compliance gives the illusion of security where it may well not actually exist at all.

Re:PCI compliant is meaningless? (1)

bearsinthesea (1619663) | more than 4 years ago | (#31048884)

And it's quite accurate: nothing can guarantee security.

FTFY. There is no perfect security. I don't know anyone that says PCI compliance guarantees you are secure. But it is an indication of the controls you have in place protecting cardholder data.

For instance, hiring a licensed, bonded plumber doesn't guarantee they won't screw something up. But your chances of a good outcome are a lot better.

Re:PCI compliant is meaningless? (1)

maxume (22995) | more than 4 years ago | (#31048292)

Try a little harder, the attribution is just before it (apparently that is the submitter's opinion).

PCI Still Important (3, Interesting)

savanik (1090193) | more than 4 years ago | (#31048060)

The key message here is that simply testing your web site with a vulnerability scanner doesn't make it secure. Well, duh.

PCI is still important because before the guidelines, most people weren't scanning their web sites at all. Even when they knew how - they couldn't convince management it was worth the trouble, time, dollars, and so on. And without scans, the number of discovered web vulnerabilities approaches 0%.

PCI isn't just about scanning your website, either. There's hundreds of things you have to do to secure everything from the physical layer up to the application layer. And having PCI be required to process credit cards makes everything much more secure. I'm talking about small businesses so cheap they don't want to put LOCKS on the doors between the outside world and the servers holding your plain-text, unencrypted credit card numbers, and who don't have the expertise to set up a web camera on their own building.

You might not like PCI, it might be inconvenient, but it's necessary to protect the general public.

Disclaimer: I am an information security professional.

Re:PCI Still Important (3, Interesting)

trentblase (717954) | more than 4 years ago | (#31048236)

You might not like PCI

The only thing I don't like about PCI is the acronym they chose.

Re:PCI Still Important (1, Interesting)

Anonymous Coward | more than 4 years ago | (#31050400)

Scanning vendors are quite good at discovering non-issues such as the availability of "weak" SSL ciphers and known problems in technology stacks. They are unfortunately useless when it comes to discovery of application level security problems.

It's all just a big scam where people pay lots of money to companies who misrepresent actual PCI compliance requirements, hit their servers with Nessus, and print out a security audit pass certificate the company can hang on their wall. It's about as useful and seedy as the SSL market has become in recent years.

Look at the nonsense in the PCI-DSS and you'll see that it was written by individuals without strong security backgrounds.

For example, they explicitly suggest the use of "secure" one-way hash algorithms for storage of card data. It doesn't matter how good a fricking hash algorithm is when the entropy of the entire possible card space is less than 10 trillion!!
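
To make that concrete: with a known 6-digit BIN and the Luhn check digit, an unsalted hash of a 16-digit card number leaves only about 10^9 candidates, which is a small, finite search on commodity hardware. A sketch, assuming unsalted SHA-256 and a made-up BIN:

    using System.Security.Cryptography;
    using System.Text;

    static class PanBruteForce
    {
        // Luhn check digit for the leading 15 digits of a PAN.
        static int LuhnCheckDigit(string payload)
        {
            int sum = 0;
            for (int i = 0; i < payload.Length; i++)
            {
                int d = payload[payload.Length - 1 - i] - '0';
                if (i % 2 == 0) { d *= 2; if (d > 9) d -= 9; }
                sum += d;
            }
            return (10 - sum % 10) % 10;
        }

        static bool SameBytes(byte[] a, byte[] b)
        {
            if (a.Length != b.Length) return false;
            for (int i = 0; i < a.Length; i++) if (a[i] != b[i]) return false;
            return true;
        }

        // Recover a 16-digit PAN from its unsalted SHA-256 hash when the issuer's
        // 6-digit BIN is known: only the 9 middle digits are unknown.
        public static string FindPan(byte[] targetHash, string bin /* e.g. "400000" */)
        {
            using (var sha = SHA256.Create())
            {
                for (long account = 0; account < 1000000000L; account++)
                {
                    string body = bin + account.ToString("D9");  // 6 + 9 = 15 digits
                    string pan = body + LuhnCheckDigit(body);    // + check digit = 16
                    byte[] hash = sha.ComputeHash(Encoding.ASCII.GetBytes(pan));
                    if (SameBytes(hash, targetHash)) return pan;
                }
            }
            return null;
        }
    }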

They provide password complexity guidelines including changing passwords often. In practice we see all the time that all this does is increase the chances of people writing them on sticky notes and pasting them to their monitors.

Finally we have the omnipresent virus scanning and firewall checklists. These systems do nothing to protect the application and are incapable of providing security guarantees, but MGMT loves them because they get to tick off the firewall and virus checkboxes, and then it's just off to continue behaving stupidly as usual with sensitive information.

It's better than nothing, but it really needs to be reviewed by a disinterested third party with a real security background.

scanners == scammers (1)

sohp (22984) | more than 4 years ago | (#31048064)

A vendor will sell you, or often give you a free trial of, their vulnerability scanning tool. They will then turn right around and sell you a tool that is supposed to fix those problems. Does anyone else see a problem with that? One reason I prefer the FOSS tools going back to Nmap and SATAN is that they do what real intruders try to do, not what some marketing department wants them to do as a way to scare you into buying stuff.

Being "compliant" with a standard is meaningless (1)

Opportunist (166417) | more than 4 years ago | (#31048096)

At least when it comes to security. By the time any standard is published and a test suite is assembled, the whole threat scenario has changed by 180 degrees. We're dealing here with an industry where knowledge has a half-life of about 3 months, not the usual 2-3 years anywhere else in IT.

Don't be compliant. Either get up to speed with current security problems or hire someone who is. Standards are worth jack, at least from a security point of view (they're still quite valuable for getting contracts from companies who have been BSed into believing in the standards themselves).

Re:Being "compliant" with a standard is meaningles (1)

bbn (172659) | more than 4 years ago | (#31048338)

Don't be compliant.

How stupid is this? PCI is a set of minimum requirements. It is all stuff that any competent admin would have done even without the standard. If you are as cool as you apparently think you are, you will be compliant with zero work.

Re:Being "compliant" with a standard is meaningles (1)

Opportunist (166417) | more than 4 years ago | (#31049022)

Ok, maybe I should have written "don't try to be compliant, try to be secure". I thought I'd get a reply like that one right the moment I hit submit...

Yeah, yeah, I'll use preview more now, promised.

What I wanted to express is that managers usually don't care for security. The ONLY reason my last company finally implemented bare minimum security was that they needed a certificate to get a fat contract. There was literally no other motivation for them. And, again, they only did the bare minimum of what was entirely necessary to get the cert passed. When (not if) that security theater gets finally debunked, we'll have another press conference and the CISO gets fired, a promise is made to improve security, a few k USD are blown on studies and as soon as the press has another victim, everything is pigeonholed again.

I decided to go rather than get fired.

missed the point (1)

Lord Ender (156273) | more than 4 years ago | (#31048098)

This guy is trying to hype his own findings a bit too much. Removing half of the vulnerabilities is actually really good! If you happen to remove the vulnerability that some mass-defacement takes advantage of, you really did add a lot of value by using the imperfect scanning tool.

One of the most common and least helpful fallacies about security is that something is either secure or it is not. Nothing is 100% secure. Removing half of the vulnerabilities is a huge improvement over removing none.

App scanners don't make you secure (2, Insightful)

mysidia (191772) | more than 4 years ago | (#31048172)

Scanners exist because people want scanners, and because vendors can sell a product labelled "security scanner". Buyers get a feel-good (false) sense that everything is secure when the scanner reports no issues.

This idea started with general-purpose vulnerability scanners: tools designed to scan hosts for open ports, check software versions, and try exploits against known issues.

The problem with all of them is they can only detect anticipated vulnerabilities.

Unknown vulnerabilities are not properly detected by scanners, because they cannot be anticipated by software.

Much like Antivirus, they need pattern updates and a re-scan when new issues are discovered. Sometimes they don't get updated at all -- sometimes new vulnerabilities are discovered, but a test doesn't get created for the scanner.
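
Underneath, this kind of check is mostly banner-and-signature matching, which is exactly why it can only flag what is already in its database; a toy sketch (the target URL and the "known bad" entries are made up):

    using System;
    using System.Collections.Generic;
    using System.Net.Http;

    class BannerCheck
    {
        // Made-up signature list; a real scanner ships thousands of these and
        // still cannot flag anything that nobody has written a signature for.
        static readonly Dictionary<string, string> KnownBad = new Dictionary<string, string>
        {
            { "Apache/1.3.41", "example entry: upgrade to 1.3.42" },
            { "ExampleHttpd/2.0", "example entry: known remote exploit" },
        };

        static void Main()
        {
            using (var http = new HttpClient())
            {
                var response = http.GetAsync("http://192.0.2.10/").Result;  // placeholder target
                string banner = response.Headers.Server.ToString();

                if (KnownBad.TryGetValue(banner, out string advice))
                    Console.WriteLine("Flagged: " + banner + " -- " + advice);
                else
                    Console.WriteLine("No signature matched '" + banner + "' (which proves nothing).");
            }
        }
    }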

Sometimes hackers become aware of security vulnerabilities that the maker of the scanner doesn't become aware of.

Sometimes the hacker can analyze the app you are running (which is industry-specific, not common), and tailor an attack against you, that the scanner vendor could never anticipate.

So are scanners worth something? Sure. But usually not nearly as much as the software vendor bills for them -- they are even more fallible than virus scanners (at least viruses and malware are finite in number, even if that number is very large; there are more potential security vulnerabilities than one could possibly imagine).

Re:App scanners don't make you secure (1)

perlchild (582235) | more than 4 years ago | (#31048526)

A quicker fix would be to get the vendors to share their corpus of vulnerabilities, probably through the OWASP group. That way, at least, they'd all get a complete picture of the vulnerabilities to scan for, and they might improve. Right now, it seems the resources applied to getting vulnerabilities into that corpus are minute compared to the effort put in by two groups:
a) hackers trying to find vulnerabilities that were previously unknown
b) the expectations of the people paying for the service (I used to work for such a company, and I would have been scandalised if you had told me ANY company would sell a product at the prices I saw that didn't get 90% without training)

Re:App scanners don't make you secure (0)

Anonymous Coward | more than 4 years ago | (#31049946)

Code auditing is quite expensive. If you cut the auditing time by an amount proportional to the costs of the software I am quite sure that is a significant improvement.

Using automatic tools for analysis is never meant to be a replacement for real analysis. But every issue handled by an automatic analysis tool is an issue you don't have to handle yourself.

Simple fix: double the reported value (2, Funny)

noidentity (188756) | more than 4 years ago | (#31048234)

If these scanners report only half the vulnerabilities, they just need to double the reported number. Simple fix, really.

Did everyone miss they tested against NTO site? (0)

Anonymous Coward | more than 4 years ago | (#31048342)

Did everyone miss the statement they made that they were testing against NTO's own website?

omfg, every other scanner performed poorly against a specially-constructed site that was put together by the "winner" in the results!

wow.. that's amazing that their own product performed best. who woulda thunk it!

and later in the news: water is wet!! /s

What is it about again? (1)

dvh.tosomja (1235032) | more than 4 years ago | (#31048478)

I read TFA because the summary does not make sense, only to find out that TFA does not make sense either.

"Hold still while we scan you" (2, Informative)

Hero Zzyzzx (525153) | more than 4 years ago | (#31048722)

My favorite from a past employer - one of these PCI scanning companies asked us to take down our iptables rules for a set time period while they scanned us. That's right, they wanted us to be less secure while they checked how secure we were.

We were eventually able to get an ip range from them, but not until we fought them a bit. They *would not* do the scan unless we took down our firewall. I wanted to just REJECT everything but 80 and 443 and not tell them, but the higher-ups told me to play along.

Anyway - the whole idea felt really ... wrong. And they didn't point out anything useful, either.

Re:"Hold still while we scan you" (0)

Anonymous Coward | more than 4 years ago | (#31049784)

I think ''eval 'cat /etc/passwd'

Re:"Hold still while we scan you" (1)

DavidTC (10147) | more than 4 years ago | (#31049804)

Well, I can sorta see their point in saying 'You have to give us the permissions of the least restrictive IP you have'.

But it's actually still dumb. The only IPs in my firewalls (Besides the mail server which has temp blocks for spammers) are the IPs of my other servers, so I can restrict specific things to them.

The only two that I can think of are access to the mysql port (So other servers can use a database), and access to a special mail submission port without any other security on it. (So my web servers can easily send email. I used to do this via postfix checking IPs, and then I figured, why even let the wrong IP connect?)

Granted, there is a possible security issue there, but these protected services are either still password protected, like the mysql one, or not actually huge risks, like the mail server. (I wouldn't want to be spewing spam, but it's hardly going to result in someone stealing customer CC numbers.)

I firewall those simply because I don't think it's good to expose unneeded services to the internet, but they're hardly insecure.

But if I were in charge of security, and I had, for some reason or another, needed to run a vulnerable service and protect it behind a firewall except for a few IPs, I don't think that's a particularly large problem. You'd need to make sure everyone who can mess with the firewall knows that port must always be firewalled, but that's about it.

I wonder how they deal with Windows machines, which essentially always have insecure ports, and just have a firewall in front of them.

Re:"Hold still while we scan you" (0)

Anonymous Coward | more than 4 years ago | (#31049830)

It's not as dumb as you may think. Security is based on a layered approach. If your firewall was down for some reason then the next layer of "security" would be your web app security. So that has to be secured as well. As long as in their report they point out explicitly that they could not bypass the first layer of security then you should be fine with that. By your logic if I had a big iron gate and a massive fence all around my property then I wouldn't need locks on my windows or doors!

firewall was down (1)

viralMeme (1461143) | more than 4 years ago | (#31051972)

"It's not as dumb as you may think. Security is based on a layered approach. If your firewall was down for some reason then the next layer of "security" would be your web app security .."

The Six Dumbest Ideas in Computer Security [ranum.com]

Let me introduce you to the six dumbest ideas in computer security. What are they? They're the anti-good ideas. They're the braindamage that makes your $100,000 ASIC-based turbo-stateful packet-mulching firewall transparent to hackers. Where do anti-good ideas come from? They come from misguided attempts to do the impossible - which is another way of saying "trying to ignore reality." Frequently those misguided attempts are sincere efforts by well-meaning people or companies who just don't fully understand the situation ..

one of these PCI scanning companies (1)

viralMeme (1461143) | more than 4 years ago | (#31051954)

> one of these PCI scanning companies asked us to take down our iptables rules for a set time period while they scanned us

Can you give examples of companies that scan companies in the manner you describe? My understanding is that to achieve PCI compliance, you fill in a bunch of forms. I mean, Heartland Payment Systems was PCI compliant, and look what happened to them.

Re:"Hold still while we scan you" (0)

Anonymous Coward | more than 4 years ago | (#31054832)

Would that be Trustwave, by any chance?

Isn't security scanning... (0)

Anonymous Coward | more than 4 years ago | (#31048738)

... Turing-reducible to the halting problem? That is, the conclusion that they miss half the vulnerabilities should be obvious.

and, again, this is only against their own tests (2)

anton_kg (1079811) | more than 4 years ago | (#31048970)

Don't forget these results are supposed to be close to 100%, because the scanners' own test applications were being scanned. It means that actual results against a real application will be much lower.

Incorrect title is just FUCKING blanket statement (0)

Anonymous Coward | more than 4 years ago | (#31051422)

NTOSpider found over twice as many vulnerabilities as the average competitor, with a 94% accuracy rating.

Doesn't sound exactly like ALL of them missed 50% of vulnerabilities. If I hadn't continued reading, I'd have thought that all scanners are useless.

being PCI compliant is meaningless (1)

viralMeme (1461143) | more than 4 years ago | (#31051940)

> being PCI compliant is meaningless from a security point of view? You can perform a Web app scan, check the box on your PCI audit, and still have the security posture of Swiss cheese on your Web app!"

Print this out and stick it on the wall, for the next time your PHB starts waffling on about compliance .. :)
