
Over Half of Software Fails First Security Tests

Soulskill posted more than 4 years ago | from the over-half-of-users-are-oblivious dept.

Security 145

An anonymous reader writes "Even with all of the emphasis on writing software with security in mind, most software applications remain riddled with security holes, according to a new report released today about the actual security quality of all types of software. Close to 60 percent of the applications tested by application security company Veracode in the past year-and-a-half failed to achieve a successful rating in their first round of testing. And this data is based on software developers who took the time and effort to have their code tested — who knows about the others." Reader sgtrock pointed out another interesting snippet from the article: "'The conventional wisdom is that open source is risky. But open source was no worse than commercial software upon first submission. That's encouraging,' Oberg says. And it was the quickest to remediate any flaws: 'It took about 30 days to remediate open-source software, and much longer for commercial and internal projects,' he says."


145 comments


That's great. (3, Insightful)

cbiltcliffe (186293) | more than 4 years ago | (#31330758)

Now they need to test the users.....

Re:That's great. (0)

Anonymous Coward | more than 4 years ago | (#31330964)

Close to 60 percent of the applications tested by application security company Veracode in the past year-and-a-half failed to achieve a successful rating in their first round of testing.

What I'm shocked to hear is that about 40% of software does pass the first round of testing. The bar must be rather low...

Re:That's great. (2, Interesting)

TrisexualPuppy (976893) | more than 4 years ago | (#31331166)

Why is this such a shock to you?

Isn't "secure software" just a bit subjective? These tests were submitted by people who NEEDED to have their software tested. Much of the software out there doesn't deal with sensitive data, much of it is too simple to pose a system security risk, and it never gets submitted. So this 60% figure doesn't really mean much. Most software isn't submitted for security checks and never needs to be.

This article is FUD, and the necessary details are not explained. Methinks that Veracode was just trying to get a little publicity. Thanks again, Soulskill!

Re:That's great. (1)

k8to (9046) | more than 4 years ago | (#31331374)

Sure, software security doesn't matter, if the software never takes external inputs that could be controlled by a third party.

That's a good, what, 2% of software?

Re:That's great. (3, Insightful)

Anonymous Coward | more than 4 years ago | (#31331548)

Your viewpoint is a little close-minded. Most software written is never even sold. It is mainly in-house custom apps in companies where it would be pointless to try to exploit it because there are easier ways to get the data. And how about the software that runs completely closed on microcontrollers that are in every single product sold today?? Think before you post. :)

Re:That's great. (0)

Anonymous Coward | more than 4 years ago | (#31331784)

Depends on how you measure software - time*instances in use, or per program existing. 99.9% of software by the latter standard fits that description, though probably 5% consists exclusively of "hello world", etc. For professional applications, sure it matters, but for internal stuff where you are in control of all input, securing it from yourself is a waste of time.

MOD PARENT UP (0)

Anonymous Coward | more than 4 years ago | (#31331690)

you know you want to

Re:That's great. (2, Interesting)

jsebrech (525647) | more than 4 years ago | (#31331770)

These tests were submitted by people who NEEDED to have their software tested.

I think the software submitted for testing is actually more secure than the average software, because it's made by people who actually know about the problem.

Much of the software out there doesn't deal with sensitive data, and much of it is too simple to serve as a system security risk

All web sites need to have good security. Without good security, you can get all sorts of hijacking attacks, where systems that seem harmless are abused to mount attacks on more sensitive systems.

The biggest problem with security is the degree to which it is underestimated. Everyone thinks it's somebody else's problem. Collectively, though, the web is one huge gaping security hole, and it's because of this attitude.

Most of the books on web development I've opened up contain security holes in the code samples. Even something as basic as SQL injection is still very prevalent in the code samples you find online and in print. Things get much worse when you start talking about subtler flaws like XSS or CSRF. And don't even get me started on the programming forums...

This article is most definitely not FUD.

Re:That's great. (1)

Bert64 (520050) | more than 4 years ago | (#31332092)

Also, how did they come to be testing open source software... Was it submitted by the authors for testing? I suspect not, since the testing process will cost money... Or did they simply download it and decide to test it arbitrarily? If the latter, then the open source software in question doesn't fall under the "needed their software tested" category...

Re:That's great. (1)

gehrehmee (16338) | more than 4 years ago | (#31332180)

The flip-side of course is that if the company is submitting their code for security checking, they're paying at least some attention to security. The company that doesn't care may have many many more vulnerabilities.

Re:That's great. (3, Funny)

Volante3192 (953645) | more than 4 years ago | (#31331200)

Of course, 60% of the apps they tested were web applications, leaving 40%...

(Yeah, yeah, it's unlikely that the only apps that failed were web apps, I just thought it a spiffy coincidence that the % of apps that failed testing also equaled the % of web apps tested.)

Re:That's great. (2)

AlecC (512609) | more than 4 years ago | (#31331456)

Why? Surely, if I were going to send my software for external security testing, I would first test it in house, both more cheaply and less humiliatingly. This is not 60% of all software failing, this is 60% of all software sent for test failing. This suggests that in-house testing is remarkably badly thought through. The sorts of tests that Veracode is going to run should be predictable - shouldn't you run them yourself before submitting the software?

Re:That's great. (1)

HungryHobo (1314109) | more than 4 years ago | (#31331898)

Really?
If anything I'm more stunned that 40% of code sent in didn't have any major flaws.
It isn't easy to write secure code.

I do some amateur testing for some of my friends' web apps (looking for all the common ones which I'm fairly familiar with: SQL injection, XSS, various code execution fuckery, etc.) and it's rare to be able to hand back a list shorter than my arm - and that's when they're actually writing code with security in mind.

Re:That's great. (1)

AlecC (512609) | more than 4 years ago | (#31332024)

Then maybe you should be offering your services to all these companies with failing software. If you can do such testing, so can the creators of the software. Yes, it needs a test team who are not the original designers - but so, frankly, does any software. The designer can never see the faults in their own baby. This result suggests that the companies creating the software are just assuming "it will be alright on the night" - which is a recipe for disaster in an environment of active attacks. It is pretty bad when your only enemy is Murphy's law: when you have an intelligent enemy, code-and-hope does not fly.

Re:That's great. (1)

ka9dgx (72702) | more than 4 years ago | (#31331148)

Testing the users might make sense if the Operating System had a reasonable security model. If you can't easily restrict a program to a small subset of your machine, you're forced to trust code you didn't write to get anything done.

Nobody should blame the users, if the OS sucks.

Re:That's great. (0, Flamebait)

Anonymous Coward | more than 4 years ago | (#31331262)

like an operating system that keeps ALL settings, from kernel-level security stuff to the user's font setting in a game, in the SAME FILE.

Honestly, the registry is an ABORTION and a very stupid idea, yet Microsoft won't let it go. Unix gets this right: system-protected settings live in /etc, and each user has a dot-directory in their home where they can save their own changes to the core settings found in /etc.

It's not hard. Windows adopting several of the policies Unix has used forever would have saved them a metric buttload of grief over the past 2 decades.

Reasonable security model... How about any security model at ALL??!?!!?!?

Re:That's great. (2, Interesting)

ka9dgx (72702) | more than 4 years ago | (#31331576)

Yes, the registry sucks, for many reasons.

Yes, better defaults could have been chosen 2 decades ago.

Now things have changed, and any system that doesn't let limits get set per task is insufficient. The current choices are ensuring 2 more decades of pain. I'm trying to educate people on the better options available, so that a better choice gets made.

It's now necessary to think of security with a much finer grain. The user is no longer the natural dividing line. It needs to be per task instance.

Re:That's great. (1)

ClosedSource (238333) | more than 4 years ago | (#31331590)

Of course MS isn't going to let go of the Registry - it would break most applications. No doubt that would be great for Linux, because then Windows wouldn't be able to run Windows applications either.

Re:That's great. (1)

Bert64 (520050) | more than 4 years ago | (#31332170)

Compatibility (with their own lock-in) is their biggest selling point, but because of design flaws (like the registry and countless others) it's also their biggest weakness...

Re:That's great. (2, Informative)

ClosedSource (238333) | more than 4 years ago | (#31332700)

I agree to the extent that compatibility is an important selling point and it also limits their ability to change their OS.

I'm not so willing to concede that the registry is an example of a design flaw. You have to consider the design within its context. For an explanation of why the registry was created and a discussion for and against it see http://blogs.msdn.com/oldnewthing/archive/2007/11/26/6523907.aspx [msdn.com]

Re:That's great. (1)

TheLink (130905) | more than 4 years ago | (#31332154)

> Honestly, the registry is an ABORTION and a very stupid idea, yet Microsoft won't let it go. Unix gets this right: system-protected settings live in /etc, and each user has a dot-directory in their home where they can save their own changes to the core settings found in /etc.

The registry has ACLs. That's why many badly written games/apps need users to run as administrator - they need the permissions to change parts of the registry that they shouldn't be changing.

And there's HKCU vs HKLM.

Windows also has %USERPROFILE%\Local Settings and %USERPROFILE%\Application Data

The registry approach has its disadvantages, but you're not stating them.
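For illustration, a minimal sketch (Python on Windows, using the standard winreg module; the key and value names are made up) of the HKCU/HKLM split described above: a per-user setting goes under HKEY_CURRENT_USER with no elevation, while the same write under HKEY_LOCAL_MACHINE is blocked by the hive's ACLs for a standard user.

```python
# Hypothetical per-user settings key for a game; Windows-only sketch.
import winreg

GAME_KEY = r"Software\ExampleGame"

# A per-user preference goes under HKEY_CURRENT_USER (no elevation needed).
with winreg.CreateKey(winreg.HKEY_CURRENT_USER, GAME_KEY) as key:
    winreg.SetValueEx(key, "FontSize", 0, winreg.REG_DWORD, 14)

with winreg.OpenKey(winreg.HKEY_CURRENT_USER, GAME_KEY) as key:
    size, _ = winreg.QueryValueEx(key, "FontSize")
    print("per-user FontSize:", size)

# The same write under HKEY_LOCAL_MACHINE fails for a standard, non-elevated
# user; this is the case where badly written apps start demanding admin rights.
try:
    with winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, r"SOFTWARE\ExampleGame") as key:
        winreg.SetValueEx(key, "FontSize", 0, winreg.REG_DWORD, 14)
except PermissionError:
    print("HKLM write denied for a standard user, as expected")
```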

Re:That's great. (1)

julesh (229690) | more than 4 years ago | (#31333032)

like an operating system that keeps ALL settings, from kernel level security stuff to the users font setting in a game program in the SAME FILE

Err... kernel-level security stuff is generally in HKEY_LOCAL_MACHINE, whose hives are stored under %windir%\system32\config. User font settings in a game should be in HKEY_CURRENT_USER, which is stored in %USERPROFILE%\ntuser.dat. Totally different files.

Re:That's great. (2, Interesting)

TheLink (130905) | more than 4 years ago | (#31332712)

> If you can't easily restrict a program to a small subset of your machine, you're forced to trust code you didn't write to get anything done.
> Nobody should blame the users, if the OS sucks.

Agreed. And most OSes out there suck in this respect (OSX, Linux, Windows).

FWIW Windows Vista and Windows 7 kinda suck less - since they actually have some sandboxing with IE8.

Ubuntu has apparmor sandboxing of firefox as an option that's turned off by default, and even if you turn it on it's not sandboxed enough IMO (firefox can read and write almost anything in the user's home directory with the exclusion of just a few directories).

As it is, most users are either forced to:

1) Solve a version of the Halting Problem where they don't and can't know all the inputs and are unable to read the source code (or even know if that's really the source code of the executable they are about to run ;) ).

2) Use only software from a Trusted Vendor's repository. Not a good strategy for Microsoft given their Monopoly Status, and this approach/philosophy doesn't actually help the OSS cause that much either.

You can say "download the source and compile it yourself", when even experts have difficulty finding flaws in the software, how would users find them (see also 1) ).

Users will just skip the pointless steps and go to "make install" (which often requires root permissions).

As it is, I have proposed that applications request the sandbox they want to be run in. Then the O/S enforces the sandbox.

It's easier to figure out the danger the application poses, if you require applications to state up front the limits of what they want. If they say "No Limits" you can assume you don't want to run it.

The sandboxes can be from a shortlist of template sandboxes, or custom sandboxes which are signed by trusted parties.

Organizations could have Trusted 3rd Parties audit the application's proposed sandbox and sign it if they believe it's OK.

It is much easier to audit a sandbox than audit thousands of lines of code.

Furthermore the code audit results will be invalidated if the program can update itself online, or can possibly fetch new instructions from the Internet. Whereas the sandbox audit would still be valid.

For example, without sandboxing, a code audited program might fetch new instructions and decide to turn on your webcam without your permission. In contrast if the sandbox doesn't allow the program to access the webcam, the program isn't going to be able to access the webcam even if it fetched new instructions.

Unless of course there's a bug in the sandboxing. But at least this means you can concentrate more resources on getting the sandbox and O/S bugs fixed, rather than trying to get dozens or hundreds of programs security audited and re-audited every time there's a new update.
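A purely hypothetical sketch of the manifest idea proposed above (no OS implements this exact format, and the field names are invented), just to show how small the thing a reviewer has to audit would be compared to the program's code:

```python
# Hypothetical up-front sandbox request, declared by the application.
REQUESTED_SANDBOX = {
    "filesystem": {"read": ["~/Documents/myapp/"], "write": ["~/Documents/myapp/"]},
    "network":    {"connect": ["update.example.com:443"]},   # hypothetical host
    "devices":    [],                                        # no webcam, no mic
}

def audit(manifest):
    """A reviewer (or automated policy) only has to reason about these limits,
    not about every line of code or any instructions fetched later online."""
    if manifest.get("devices"):
        return "review needed: wants direct device access"
    if any(host.endswith(":*") for host in manifest["network"]["connect"]):
        return "review needed: unrestricted network access"
    return "looks narrowly scoped"

print(audit(REQUESTED_SANDBOX))
```

Even if the program later fetches new instructions, the enforced limits above still hold, which is the point being made about webcam access.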

What emphasis on security? (4, Insightful)

Jurily (900488) | more than 4 years ago | (#31330778)

I thought the only measure of a project was whether it makes the deadline.

Re:What emphasis on security? (1)

Paspanique (1704404) | more than 4 years ago | (#31332202)

Nope... the only measure is cost... Anything else is secondary...

Bolting On (3, Insightful)

Chris Lawrence (1733598) | more than 4 years ago | (#31330788)

As Bruce Schneier has said, trying to bolt on security to an existing product or application can be very difficult and time consuming. Sometimes you even have to redesign things. Designing for security and using secure coding practices from the beginning, however, makes it much, much easier.

Re:Bolting On (1)

Jurily (900488) | more than 4 years ago | (#31330836)

And nearly 90 percent of internally developed applications contained vulnerabilities in the SANS Top 25 and OWASP Top 10 lists of most common programming errors and flaws in the first round of tests, Oberg says.

It doesn't matter how much you redesign things, if you fuck up the routine stuff.

Re:Bolting On (1)

Chris Lawrence (1733598) | more than 4 years ago | (#31331034)

If the design is good, you can fix the bugs. If the design is fundamentally flawed, you need to throw it out and start again. There is a difference.

Re:Bolting On (2, Interesting)

hardburn (141468) | more than 4 years ago | (#31331266)

When it comes to security, not necessarily. A good design of classes for the purposes of readability and maintainability does not necessarily mean it's easy to fix a security bug. These are often competing design choices.

The two biggest errors cited by TFA were cross-site scripting and SQL injection. Generally, XSS can be fixed if you had a decent design--just filter your inputs better. SQL injection even more so. In my experience, good languages have database libraries that make it easy to use placeholders in SQL statements (if you're using some idiot RDBMS that can't handle placeholders, the library can transparently handle placeholders for you in a secure way). If your design started off with a proper database abstraction layer, and you let an SQL injection attack slip through, it should be easy enough to fix.
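For illustration, a minimal sketch of the placeholder approach, using Python's built-in sqlite3 module; the table, column, and input values are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

user_input = "alice' OR '1'='1"   # hostile input

# Vulnerable: string concatenation lets the quote break out of the literal.
# rows = conn.execute("SELECT email FROM users WHERE name = '" + user_input + "'")

# Safe: the driver binds the value as data, so the quote is just a character.
rows = conn.execute("SELECT email FROM users WHERE name = ?", (user_input,))
print(rows.fetchall())   # [] -- the hostile string matches no user
```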

However, the third one mentioned is cryptographic implementations. This is much, much harder to solve, and fixes will often result in breaking backwards compatibility. For instance, the RC4 algorithm is considered reasonably secure on its own, but it's also very fragile. If you later decide to use something else, moving your data away from it can have huge backwards-compatibility issues; this was exactly the situation faced by WEP. It can still happen for other algorithms, even ones that are sturdier than RC4.

Making practically unbreakable algorithms was hard, but it's largely a solved problem. Using those algorithms in practice is much, much harder, and it's a problem that has to be re-solved with each new system.

Re:Bolting On (4, Interesting)

Bert64 (520050) | more than 4 years ago | (#31332370)

For another encryption example, look at how windows and linux implement user password hashing...

Linux takes the plaintext password via an input channel (ssh, telnet, gdm, local console, etc.), passes it to PAM, which loads the corresponding entry from the shadow file, hashes the user input with the same algorithm and salt, and compares the output. The backend (PAM, hashing algorithm) can be changed without affecting the frontend, making it easy to use a different algorithm as increases in computing power, or the discovery of cryptographic flaws, render the old ones insecure.

Windows, in a somewhat misguided attempt to prevent plaintext passwords being sent over the network, effectively uses the hash itself (yes, it's more complicated than that, but the general idea is that only the hash ever gets used and the password isn't sent in the clear - Unix solves this at a different layer by encrypting the plaintext password in transit, e.g. with ssh)... Because of this, the hashing algorithm is difficult to change. Older Windows used LanMan, which is laughably weak, while modern Windows uses NTLM by default, which is somewhat stronger but not great... However, modern Windows still has LanMan support for compatibility reasons, and until Vista/2008 it was still enabled by default. If they change the hashing algorithm, they will still have to retain the old ones for quite some time in order to have compatibility, and also change the protocols to handle a third possible algorithm.
The fact that you can use the hash without cracking it first is also a design flaw; this isn't possible on Unix or anything else I'm aware of.
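For illustration, a simplified sketch of the salted-hash flow described above. This is not the actual crypt(3)/PAM code; PBKDF2 stands in here for whatever algorithm the backend happens to be configured with, and the point is that the stored record keeps only salt and hash, so the algorithm can be swapped without changing how the plaintext reaches the verifier.

```python
import hashlib, hmac, os

def make_record(password: str) -> dict:
    # Store only the salt and the derived hash, never the plaintext.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return {"kdf": "pbkdf2-sha256", "salt": salt, "hash": digest}

def verify(password: str, record: dict) -> bool:
    # Re-derive with the stored salt and compare in constant time.
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(),
                                    record["salt"], 200_000)
    return hmac.compare_digest(candidate, record["hash"])

shadow_entry = make_record("correct horse")
print(verify("correct horse", shadow_entry))   # True
print(verify("wrong guess", shadow_entry))     # False
```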

Re:Bolting On (0, Redundant)

sopssa (1498795) | more than 4 years ago | (#31330874)

That's probably easy if it's just one guy, but what about when it's several, if not even hundreds of developers? Random patch code in OSS bug-tracking systems can make some other unrelated code insecure because the guy who submitted the patch didn't know everything about the code or didn't check it through and it slipped past the maintainers too. This is especially true in projects with really large codebase or several code branches and forks.

Re:Bolting On (1)

Chris Lawrence (1733598) | more than 4 years ago | (#31331000)

Sure, bugs can always be introduced, and some of these will open security holes. But as long as the fundamental design conforms to a sensible security model, this isn't a big deal. That type of bug can be found through additional code review. (Note that testing is *not* a method to find security bugs.)

Re:Bolting On (1)

Jurily (900488) | more than 4 years ago | (#31331090)

That type of bug can be found through additional code review.

Every bug can be found through enough additional code review. The problem is, it's slow and expensive.

Re:Bolting On (1)

Chris Lawrence (1733598) | more than 4 years ago | (#31331136)

So, what's your point? Bugs are hard to find. Bugs can be fixed, a broken security model cannot.

Re:Bolting On (1)

ClosedSource (238333) | more than 4 years ago | (#31331652)

It's software, there's nothing that can't be fixed.

Re:Bolting On (1)

Bert64 (520050) | more than 4 years ago | (#31332430)

Bugs can be fixed without impacting operation/compatibility...
A design flaw cannot.

Re:Bolting On (1)

ClosedSource (238333) | more than 4 years ago | (#31332870)

No. Both design flaws and bugs can sometimes be fixed without impacting operation/compatibility and sometimes they can't.

But we weren't talking about the consequences of a fix, just whether it can be done.

Re:Bolting On (1)

Lumpy (12016) | more than 4 years ago | (#31331344)

99% of it does not. Internally designed stuff in a company is usually the biggest mess there is. It was started by some manager who was handy in VB 6 years ago and then perpetuated. Nobody is ever given time to take a year and rewrite it, so everything is not even bolted on; it's slapped on with duct tape.

To fix this we need to force managers to understand that software is not easy nor fast. Some say it takes education; I say it needs kneecaps broken.

But then I'm a humanitarian.

Re:Bolting On (3, Insightful)

Anonymous Coward | more than 4 years ago | (#31331228)

Designing for security and using secure coding practices from the beginning, however, makes it much, much easier.

Sure it does... but that sort of design takes money and expertise. More often software is dreamed up and planned in ad hoc meetings. For example, a person in marketing decides it would be a great idea if their customers can get updates on their phones and Nitwitter accounts. In a 4PM meeting the marketer proposes it to his boss as a necessary value-add function without which the competition would eat us alive (1).

The next day, a "planning" meeting is called. The marketing manager tells (note, I say "tells" not "asks for input") the programming manager that the company needs mobile updates. The company needs (note, it's changed from the "Marketer wants" to "company needs") it before the next peak retail opportunity. This opportunity is either Valentine's Day or Easter or Summer Break or Thanksgiving or some other arbitrary retail holiday.

The programming manager tells his programmer, "We need it by end of week."

The programmer begins to think about the problem. He raises objections to the timeline and lack of design. The marketing manager cries to the CEO. The CEO screams at the CTO. The CTO screams at the programming manager. The manager tells the programmer that he's wasted a day and we still need it by end of week.

The programmer thinks about coding and how to grab the data he needs. He browses a database and finds a table that he needs. To make it accessible to the web frontend, he opens up some permissions. Maybe he creates a new view that combines multiple tables to make his code easier or faster. This new view now violates PCI and SOX regulations, but he doesn't care.. this is just for testing until he can figure out how to do it properly. He stays up all night and gets a proof of concept working. The next day he shows it to his manager.

His manager says, "OK, tell them it's done."

The test software becomes production.

Open source doesn't necessarily mean dangerous (2, Insightful)

Pojut (1027544) | more than 4 years ago | (#31330802)

I know of at least one rather large and well-known company that doesn't use OSS because of "security", yet voluntarily continues to use IE6.

That sort of thing really pisses me off.

Re:Open source doesn't necessarily mean dangerous (3, Informative)

Opportunist (166417) | more than 4 years ago | (#31330982)

Quite the opposite. OSS is often far more secure than its "commercial" counterpart, for the obvious reasons.

1) No deadline. OSS is usually "done when it's done". Most OSS software I know is in perpetual beta, never reaching what its maker would call a "release state", but it offers at least the same level of security and stability (if not better) as its commercial counterpart. Simply because there is no date by which we have to push something out the door, secure or not, ready or not, to make it in time for Christmas (or for the new Windows version).

2) No need to "sell" the software. You needn't dumb down and strip security so potential customers accept the level of burden security adds to the fold. Security is never free. It always comes at the price of overhead. When you have two software tools available, customers will pick the one that is more "accessible". Which usually also is the less secure one. Because security often adds layers of additional overhead (either to you, the user, slowing you down and requiring you to enter passwords or access things in a certain way, maybe even with additional tools instead of from "inside" the tool you're mainly using, or to the system, meaning your software will run slower).

3) Peer review. Your code can easily be reviewed by thousands of "hackers" trying to find an easy way into your system, instead of having to poke at decompiled code. If people can read the source, far more of them are able to poke and prod at it, resulting in more secure software instead of less, because security holes get found faster and, in turn, fixed faster. By the time you start using the product, a few months after its release, you may rest assured that all the easy-to-find security holes have been found by hobbyists. With closed-source software you often need experienced ASM cracks to dig those holes up, resulting in fewer people able to look at them and thus a slower patching cycle.

Re:Open source doesn't necessarily mean dangerous (1)

El Lobo (994537) | more than 4 years ago | (#31331246)

This is only one side of the picture.

a) While all you say is more or less true, it applies to big, well-known OSS projects only. Obscure little one/two-man projects don't get that much peer review. If those projects have few users, they can live with critical security holes for years without them even being known.

b) Extra-large OSS projects have the chaos factor against them. When a security hole is patched in some Linux distro, where does the fix apply? On Ubuntu? Is the hole present on Kubuntu as well? What about Red Hat? What about my own forked distro that I distribute?

Re:Open source doesn't necessarily mean dangerous (1)

Ltap (1572175) | more than 4 years ago | (#31331568)

Most distros leave the kernel alone; it's Red Hat that does a lot of stuff that gets ported upstream.

Re:Open source doesn't necessarily mean dangerous (2, Insightful)

PeterKraus (1244558) | more than 4 years ago | (#31331720)

> What about my own forked distro that I distribute?

If you don't know the answer to this, maybe you shouldn't be distributing the 'distro' at all.

Re:Open source doesn't necessarily mean dangerous (0)

Anonymous Coward | more than 4 years ago | (#31331282)

That sort of thing really pisses me off.

Well then grow a pair and out them.... let us all get on the rage train!

Re:Open source doesn't necessarily mean dangerous (1)

Lumpy (12016) | more than 4 years ago | (#31331390)

It's typically because whoever is in charge is incredibly under-educated. Probably their CTO or CIO really knows nothing at all, and has filled the ranks below him with yes-men who know as little as he does.

At the bottom you have the guys wanting to get things done and secure; they pound their heads against the wall.

Re:Open source doesn't necessarily mean dangerous (1)

clone53421 (1310749) | more than 4 years ago | (#31332020)

And let me guess... their IT department would claim that open-source software is too difficult to test and administer patches remotely and keep updated?

Undefined requirements (1)

ClosedSource (238333) | more than 4 years ago | (#31330804)

There is no requirements document for security that you can follow and guarantee that your application is secure. You're really trying to anticipate all the ideas other people may have about compromising your code. In general, this is impossible to achieve, so you do the best you can.

Re:Undefined requirements (1)

Opportunist (166417) | more than 4 years ago | (#31331082)

There is no document because such a document would be outdated the moment you wrote it.

I write security guides and tips for a local computer magazine, based on developments in the malware industry. It happens, rarely but it does, that I have to retract my articles at the last minute and rewrite them because what I wrote simply is not true anymore. What I wrote a year ago is certainly no longer true. Tips I gave 6 months ago no longer offer security. And what I wrote 3 months ago might still hold a hint of water, but it's anything but certain.

A few years ago I could sensibly give people the advice to check their system periodically with autoruns and look for unknown startup jobs. Then came code injected into the "alignment holes" of system files, rendering that advice pointless. I recommended putting routers in front of their home networks; now routers are targets for malware themselves, not only rendering this recommendation pointless but potentially turning it into advice to deliberately offer an attack vector.

Such "standard papers" take months, sometimes over a year, to reach maturity. By the time they are ready, they are at best useless. At worst dangerous.

Re:Undefined requirements (1)

Lumpy (12016) | more than 4 years ago | (#31331414)

There is no document because such a document would be outdated the moment you wrote it.

That's why you put it on a Wiki!

Re:Undefined requirements (1)

Opportunist (166417) | more than 4 years ago | (#31332620)

You want to base your security guidelines on a wiki? And be held responsible for its implementation? Are you nuts?

Re:Undefined requirements (1)

ClosedSource (238333) | more than 4 years ago | (#31331490)

"There is no document because such a document would be outdated the moment you wrote it."

I agree. My point was that we shouldn't be surprised that many applications are not secure because it's an open-ended problem.

Re:Undefined requirements (1)

starfishsystems (834319) | more than 4 years ago | (#31332240)

It happens, rarely but it does, that I have to retract my articles at last minute and rewrite them because what I wrote simply is not true anymore.

Then you're approaching security as if it were a technology. It's not; it's a science. If you write instead about the application of security principles, you won't find yourself having to retract anything.

Sure, a particular use case might become less relevant over time, but it can't become wrong unless you misunderstood the underlying principle to begin with. The principle remains, and talking about it constitutes the real teaching opportunity.

The other half (2, Funny)

maxume (22995) | more than 4 years ago | (#31330820)

And the other half isn't even tested.

Re:The other half (1)

M8e (1008767) | more than 4 years ago | (#31330954)

The other half fails the 0th test.

Re:The other half (3, Funny)

Opportunist (166417) | more than 4 years ago | (#31331106)

Nah, the other half crashed when pitted against the security test suite.

Well now (4, Informative)

Monkeedude1212 (1560403) | more than 4 years ago | (#31330842)

That's extrapolating a bit much, isn't it? And scanning through the article, they don't even name the sample size, just percentages.

And yes, they mention that it's only the stuff that they test, "so imagine what the rest is like". Well, that's it though: if someone is professionally developing with security in mind, they probably know how to test it in-house or know somebody who can. Thus there's no need to pay this corporation to test something you can do yourself.
If you are developing with security in mind but aren't sure exactly what you're looking to protect against, THAT'S when you go to companies like these.

This is a pretty skewed data source (probably a slashvertisement for them, too), and it's the only study of its type. Take it with a week's worth of salt.

Re:Well now (1)

jsebrech (525647) | more than 4 years ago | (#31331420)

Well, that's it though: if someone is professionally developing with security in mind, they probably know how to test it in-house or know somebody who can.

Independent security validation is the only way to verify that your approach to security works in practice.

Re:Well now (1)

eLore (79935) | more than 4 years ago | (#31331558)

For the most part I agree with you. The caveat is that in certain circumstances, having an external party review your widgets is necessary from a regulatory compliance perspective. Also, Marcus Ranum is famous for ranting on "bad management" which requires you to pay an outside consultant to tell you the same thing that your internal resources were telling you, but for more money. Unfortunately, I've seen more than one organization suffer from this.

Re:Well now (1)

julesh (229690) | more than 4 years ago | (#31332932)

That's extrapolating a bit much, isn't it? And scanning through the article, they don't even name the sample size, just percentages.

I was wondering about selection bias, and yes, investigating the company that did the research, they appear to specialise in analysing native code (e.g. C or C++ applications) running under Windows. My guess is that a lot of the more security-conscious developers have moved to other environments (interpreted or JIT-compiled code and/or Linux), so they're left analysing the dregs...

Security is no selling point (5, Interesting)

Opportunist (166417) | more than 4 years ago | (#31330864)

It just is not. Actually, quite the opposite: The better your security, the more your potential customer will be put off by it.

Users do not care about security until it is too late (i.e. until after they got infected), and only then will they bitch and rant and complain about how insecure your piece of junk is. If you, OTOH, take security seriously and implement it sensibly, they will bitch and rant already at install time because they hate the hoops to jump through and the obstacles to dodge to make your software "just work".

Security is the antagonist to comfort. By its very definition. No matter where you look, security always means "additional work". Either to the user, which means overhead to his work, or to the program, which means it will invariably be slower than its competing products.

Thus security is not only an "unnecessary evil" when selling your product. It is actually hurting you when you try to convince someone to buy your stuff. Your software will be slower due to its security "burden", and it will be less comfortable to the user. The user does not see the glaring security holes when he buys the product. Only after, when the product bites him in the ass because it opened him up to an attack. But by then, he will already have paid for your product. And he will have bought your product instead of the more secure product your competitor offered, because yours was faster and easier to use.

Re:Security is no selling point (3, Insightful)

characterZer0 (138196) | more than 4 years ago | (#31331168)

Protecting against SQL injection attacks, XSS, buffer overflows, and validating user input does not put off users.
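For illustration, a tiny sketch of the kind of server-side checks that cost the user nothing, assuming a hypothetical comment form: validate the input's shape on the way in, and escape on the way out.

```python
import html
import re

USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{1,32}$")

def handle_comment(username: str, comment: str) -> str:
    # Reject malformed usernames outright rather than trying to "fix" them.
    if not USERNAME_RE.fullmatch(username):
        raise ValueError("invalid username")
    # Escaping at output time neutralises script injection in the stored text.
    return "<p><b>{}</b>: {}</p>".format(html.escape(username), html.escape(comment))

print(handle_comment("bob_42", "<script>alert('xss')</script>"))
# <p><b>bob_42</b>: &lt;script&gt;alert(&#x27;xss&#x27;)&lt;/script&gt;</p>
```

None of this is visible to a legitimate user, which is the point being made above.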

Re:Security is no selling point (1)

Cro Magnon (467622) | more than 4 years ago | (#31331234)

Yes it does. It makes your product later than the fast & sloppy competition.

Re:Security is no selling point (3, Informative)

ka9dgx (72702) | more than 4 years ago | (#31331326)

Actually, good security would be a GREAT selling point, if someone actually implemented it.

Security is the ability to run code without unwanted side effects. Windows, Mac, Linux do not offer a simple way to do this. The closest you can get is either Sandboxie on Windows, AppArmor on Linux, or setting up a VM per program.

If you offered a way to specify the limits of side effects on an application before and while it runs, you could make a ton of people very happy. I suspect there is some money to be made there as well.

Re:Security is no selling point (0)

Anonymous Coward | more than 4 years ago | (#31331362)

Why is there no discussion of the fundamental trade-offs inherent in all forms of engineering, not only software? Security is only another facet of software performance, alongside features, reliability, cost, flexibility, ease of use, etc. All software, in fact all human defenses, are insecure in the sense that a determined attacker can overcome them. There is a wide spectrum of users who can trade off perceived security risks against benefits. No one choice is better than another. I leave my '89 Plymouth unlocked; so what?

Re:Security is no selling point (1)

ClosedSource (238333) | more than 4 years ago | (#31331736)

I think there is a political issue which disproportionally elevates the importance of security - it's a talking point against Windows.

Re:Security is no selling point (1)

digitalhermit (113459) | more than 4 years ago | (#31331378)

It's not an either/or thing. Secure software is often the *easiest* to configure. It's when configuration is difficult and prone to error that people make mistakes or start using default configurations.

For example, when a service is installed on a system many installers do not have procedures for configuring the firewall. It may be a range of ports that's needed, or some access to a particular IP address. So people install the software and it doesn't work. They read something on the Internet that it's a firewall issue. So what do most people do? They turn off the firewall. I know at least three people who did this because they couldn't get NTP updates to work on their systems.

Re:Security is no selling point (1)

jsebrech (525647) | more than 4 years ago | (#31331494)

It depends on the product, but there are indeed corporate customers who have policies disallowing them from purchasing / deploying software that does not pass independent security audit.

It's a mixed bag, and it depends on the market you're in. For some types of software, security is a non-issue. Security is like usability. You can always improve things, but at some point you have to say "up to here, and no further".

Re:Security is no selling point (1)

clone53421 (1310749) | more than 4 years ago | (#31332082)

The better your security, the more your potential customer will be put off by it.

If, by “better”, you mean more intrusive, controlling, cumbersome, slow, and restrictive... then yes. Of course they will be.

But if, by “better”, you mean less intrusive, controlling, cumbersome, slow and restrictive...

Slashvertisement (1)

wintercolby (1117427) | more than 4 years ago | (#31330868)

Veracode offers the service of finding security flaws in your source. By definition organizations and developers that submit their source to them are going to have more secure software (according to Veracode) when it's released, after it's been certified.

All this shows is that there are developers using a company that specializes in finding security bugs to . . . find security bugs. It's just like using any other debugging tool: you rarely get a clean compile with no bugs on the first try.

Security firm says security is an issue (4, Insightful)

SlappyBastard (961143) | more than 4 years ago | (#31330880)

Hmmm . . . there's a word for that . . . XKCD, can you help me?

http://www.xkcd.com/703/ [xkcd.com]

Re:Security firm says security is an issue (1)

lysdexia (897) | more than 4 years ago | (#31331212)

And here I thought that meant having sex with Tauntauns.

Re:Security firm says security is an issue (0)

Anonymous Coward | more than 4 years ago | (#31331626)

Tauntauphilia.

Re:Security firm says security is an issue (1)

SlappyBastard (961143) | more than 4 years ago | (#31332982)

The word for that is "fanboy".

What about commercial open source software (4, Informative)

weeble (50918) | more than 4 years ago | (#31330914)

So, lots of comparisons between open source and commercial software; however, there is a lot of open source software that is sold, i.e. commercial. In addition, it has been shown that most of the code for the Linux kernel was developed by people who were paid to do it by Red Hat, IBM, Intel and others. Does that mean the Linux kernel is commercial software?

Maybe the article should refer to closed-source proprietary and open-source software.

The article reads as if the author does not fully understand how Open Source software is developed, and it is just a large advert (a.k.a. press release) for the auditing software.

Re:What about commercial open source software (1)

ClosedSource (238333) | more than 4 years ago | (#31331806)

The comparison should really be application to application regardless of the open/closed commercial/non-commercial categories. There's no inherent relationship between these categories and security.

80-20 (1)

gmuslera (3436) | more than 4 years ago | (#31330956)

That it's nearly 50-50 is good news, if the sample was broad enough. It would be interesting to match those numbers with the number of users: a lot of those programs could have a userbase that coincides with (or is even smaller than) their number of developers. Then see how insecure the programs with more than 100, 1,000 or even more users are (i.e. whether the top 20% of safe applications have 80% or more of the users, or whether the distribution is better than that).

They get paid to find security holes (2, Interesting)

dcraid (1021423) | more than 4 years ago | (#31331084)

Will a security firm ever certify that a solution is perfect on the first pass? Not if they want to be invited back for a second.

Code has bugs... so don't trust it. (1)

ka9dgx (72702) | more than 4 years ago | (#31331230)

Code has bugs, it always will. You need to reduce the attack surface, why not reduce it all the way down to the kernel of the OS? If you don't need to trust any of the users programs with the security of the whole system, you've solved a lot of problems.

Don't trust the users? Not a good idea. The users are the administrators these days.

Don't trust the internet? Well... it's a communications medium, just a set of tubes.

Don't trust the programs? Great idea!

Re:Code has bugs... so don't trust it. (1)

hellraizer (1689320) | more than 4 years ago | (#31331436)

"Don't trust the users? Not a good idea. The users are the administrators these days." - Bad Idea , users WILL mess up the system no matter how, imho we should NOT trust them "Don't trust the internet? Well... it's a communications medium, just a set of tubes." you have a point here, but isnt Clowd Computing all about "trusting" the web ?

Re:Code has bugs... so don't trust it. (2, Insightful)

ka9dgx (72702) | more than 4 years ago | (#31331642)

The reason users mess things up is that they have bad tools. There is no simple way to run something in a sandbox.

sometimes security doesn't matter (1)

i.r.id10t (595143) | more than 4 years ago | (#31331252)

Sometimes security doesn't matter, esp. with regard to the "internal project" stuff mentioned.

Of course, this is the area where basic utility scripting is used: you and perhaps one or two others are the only ones using it, you already have access to any other system you could reach via a cross-site scripting technique, access to any DBs you'd get with a SQL injection, etc.

In other news... (3, Insightful)

GuruBuckaroo (833982) | more than 4 years ago | (#31331340)

More than 90% of all software tested fails to compile the first time. Seriously, that's what security testing is for - finding holes so they can be filled.

Not a shocker (2, Interesting)

ErichTheRed (39327) | more than 4 years ago | (#31331424)

Coming from the systems integration side of things, I don't view this as a surprise. Developers are great at writing software, but in my experience they have no idea about how the platform they're deploying it on actually works beyond the API function calls they make. This leads to internal applications that I have to throw back because part of the requirements are, "User must be a member of the Administrators or Power Users group." Most dev guys just don't get that it's very dangerous to give the end user full rights to an Internet-connected Windows box. There's just too many holes in Windows to safely allow it.

To be fair, there are a lot of reasons for stuff like this...not the least of which is deadlines for deploying "something that works." I've been there on the systems side too...scrambling at the last second to get hardware and operating systems deployed because of a deployment date. There are also a lot of apps coded in C++ and other unmanaged languages that open the system up for all sorts of buffer overrun attacks. Not much you can do there except vigilant code checking.

I think a little education on both sides of the fence would be useful. Developers should get some kind of training in "systems administration and internals for developers" and systems guys should definitely be educated in what holes are safe to open up on their systems. (That's a big cause of this too -- there's a lot of low-skilled systems admins out there who take the developer's instructions at face value without checking to see if full access is really needed.)

Re:Not a shocker (1)

ka9dgx (72702) | more than 4 years ago | (#31331716)

Why force the developers to worry so much about security? Why not instead provide a way to have a contract with the OS, which limits side effects to a known set of limitations? That would save a lot of grief, and let the developers get on with it.

Re:Not a shocker (0)

Anonymous Coward | more than 4 years ago | (#31332052)

Well, someone has to develop such a contractual system, and one would hope they care about security.

Stop being lazy and learn to program well, not just VB that works most of the time.

"remediate"? (2, Insightful)

Voline (207517) | more than 4 years ago | (#31331450)

Try "remedy", or does that not sound pseudo-technical enough?

Obsolete? (2, Informative)

vlm (69642) | more than 4 years ago | (#31331472)

The conventional wisdom is that open source is risky.

Does anyone believe that anymore, other than journalists quoting other journalists and PR people?

I did some google searching, trying to find when that old FUD campaign started. It seems to not show up much until 1998.

The 12 year old advertising/FUD campaign is getting kind of tired.

As misleading as 'Show all warnings' (1)

yalap (1443551) | more than 4 years ago | (#31331504)

A customer just ran their $10k scanner on our software and only found 3 problems. But it turned out the vendor had grabbed every 'security vulnerability' ever reported on any discussion board/mailing list and listed it as a problem. e.g. 'I tried this URL and my computer slowed down a bit. I think it is a denial of service attack.' Took a few days to research and disprove their claims. Meanwhile, how many other such claims is it making?

To me, it is analogous to switching on 'Show all warnings' - I've worked with managers and developers that want to eliminate all the warnings in the source. Apart from just rock polishing, I think it distracts them from focusing on the real issues like security and performance.

Like any job, you need to have the right tools and know how to use them. We do use Java Findbugs and network scanners. But poor use of any tools only leads to cuts, bruises and visits to the emergency room.

Re:As misleading as 'Show all warnings' (3, Insightful)

clone53421 (1310749) | more than 4 years ago | (#31332360)

I've worked with managers and developers that want to eliminate all the warnings in the source.

There’s a good reason for that, and it’s not as petty as you think.

Warnings exist to let the programmer know that the actual behaviour might not be what the programmer thought was most intuitive. If it’s implicitly casting a float into an int, you damn well better know what that means and what effect it’s going to have on your code... it means that 1/2 == 0, for starters. Similarly, there’s absolutely no reason why you can’t use (count = 5) as an expression, except that its value is always 5, not true or false as you may have incorrectly thought.

Lazy, sloppy, or inexperienced programmers are going to fall for these sort of pitfalls. Experienced, careful ones won’t nearly as often. But if you force a lazy, sloppy, and inexperienced programmer to learn why the compiler is giving a warning and eliminate it, he’s going to end up slightly less lazy, more careful, and with a little more experience than he started with, because he’ll hopefully understand the warnings by then and know what the code is actually doing.

60% !!! (1)

NicknamesAreStupid (1040118) | more than 4 years ago | (#31331802)

Obviously, Veracode's tests aren't thorough enough. But it raises the question, "who tests the testing software?"

Encouraging? (1)

clone53421 (1310749) | more than 4 years ago | (#31331986)

The conventional wisdom is that open source is risky. But open source was no worse than commercial software upon first submission. That's encouraging

...um, I’d have started with the opposite premise, that open-source is safer. In light of that premise, I think their findings are somewhat discouraging... except,

It took about 30 days to remediate open-source software, and much longer for commercial and internal projects

Now that’s encouraging.

security is important (0)

Anonymous Coward | more than 4 years ago | (#31332172)

mostly to security consultants. nobody else really cares because it just doesn't matter that much.

Firefox has too many developers (0)

Anonymous Coward | more than 4 years ago | (#31332274)

This obviously causes security holes.

In its last several releases, everyone's favorite Open Source browser has become an unstable mess of add-ons, plugins, and other hacks that chew up memory like a fat kid with a chocolate-dipped corn dog. In fact, just last week, SecurityFocus released news of a devastating exploit in Firefox 3.5.5 that they blame squarely on its unstable architecture.

From its infancy Firefox has been the product of collaborative effort, unifying code from hackers worldwide. But thanks to the Hayes Law, we see that there is a "sweet spot" to such a development style, and that Firefox has long since left it behind. In the chart below, we can see that the number of Firefox developers has increased exponentially since 2002, and that number will more than double in 2010.

But it's time to be honest: either Firefox, as a modern web browser, will have killer performance on 64-bit, multicore Intel chips or it's not worth downloading and installing. And since, as we have seen in the recent past, that Firefox is actually getting slower with each release, Firefox is certainly a waste of time for anyone who takes their web browsing seriously.

The Hayes Law states that, given a specific type of software project, there is a certain complexity associated with it, and with that complexity an optimal number of developers. It's actually a little more complicated than that, taking into account development model, coding platform, programming language, and code repository platform, but in the end it's easy to plug in the numbers and see where a project's headed.

Against the Hayes Law, Firefox appears to have jumped the shark sometime after the Firefox 2.0 in 2006. The next major release, Firefox 3.0 in 2008, introduced many issues users today complain about: bloat, sloth, instability, and insatiable hunger for memory. Firefox user complaints increased in tandem, all syncing up with the jump in developers. Ergo Firefox's problem: too many cocks in the kitchen.

To further underline this growing problem, Firefox completely falls down in Acid3: Firefox 3.5 scores 93/100, and Firefox 3.6 scores only 87/100. Needless to say, Firefox 4.0 mockups score 0/100. Sadly, this is a continuation of a trend: Firefox took the longest of all browsers to beat Acid2. And don't even think about Acid4. Firefox is collapsing under its own weight.

The core of this problem looms: the number of developers, as seen in the chart above, will only continue to skyrocket for Firefox 3.6 and beyond. By the time Firefox 4.0 is released, sometime in December 2010, the number of developers will be nearly 4,000, almost a full magnitude greater than the optimal 445 or so in 2006. Clearly, Firefox is about to capsize.

So what is to be done? Users can petition the Mozilla Corporation and the Mozilla Foundation to rethink their development model, focus on optimization instead of new features, and perhaps backpedaling on some of the less sensible projects like Mozilla Mobile and the non-standard XUL interface. Concerned individuals should log into Mozilla's Bugzilla and let loose with their bug and crash reports like never before.

Unless Brendan Eich and Mitchell Baker take their heads out of their asses, however, the best course of action is to escape Firefox like rats from a sinking ship. There are other options out there: Apple's small, fast, and efficient Safari, coded by several dozen professional programmers, is currently the best browser for Mac and Windows. The time-honored Internet Explorer continues to embrace and extend Web standards. Other browsers like Chrome, Opera, and Lynx are out there too but aren't for everyone.

In the end, Mozilla Firefox as it stands is a sick browser that is in need of emergency surgery not ready to take on the challenges of Web 2.0 and things like CSS 3, HTML5, and JavaScript 1.9. Unless something happens soon, Firefox will take the entire World Wide Web—and everyone who depends on it—back to the Stone Age of the Internet.

Re:Firefox has too many developers (1)

clone53421 (1310749) | more than 4 years ago | (#31332416)

Three point five point FIVE?

We’re up to version 3.5.8. Please do try to keep up.

Re:Firefox has too many developers (1)

clone53421 (1310749) | more than 4 years ago | (#31332490)

Scratch that... 3.6 has been out for a month now. I rarely restart this computer, so it hadn’t told me to update yet.

Re:Firefox has too many developers (0)

Anonymous Coward | more than 4 years ago | (#31332580)

too many cocks in the kitchen

Are you suggesting Firefox needs more women developers?

I am a professional software developer myself..... (1)

Tanuki64 (989726) | more than 4 years ago | (#31332642)

...and I don't give a **** for security. I am working as a freelancer. As such there are two possibilities: either I calculate correctly, including all costs for proper design and tests, or I get the contract. Customers pay ****, customers want ****, customers get ****.

Security (1)

QuoteMstr (55051) | more than 4 years ago | (#31332706)

Back when I was in charge of hiring new programmers for a web development shop, the very first thing I'd do when I got a resume would be to load up the applicant's personal website, if he had one.

No, I didn't look at the aesthetics of the site. I didn't care about the cleanliness of the HTML. The implementation language and web framework didn't matter. I had more important things on my mind: I would find a form, and type hello world' -- ; SHOW TABLES. If the site misbehaved, I'd toss the resume in the trash and adamantly refuse to reconsider it.
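For anyone wondering why that particular probe is revealing: with naive string concatenation the stray quote ends the SQL literal early and the "--" comments out the rest of the statement, which typically surfaces as a visible database error. A small sketch (table and column names are made up):

```python
probe = "hello world' -- ; SHOW TABLES"

# What a string-concatenating form handler would actually send to the database:
naive_sql = "INSERT INTO comments (body, approved) VALUES ('" + probe + "', 0)"
print(naive_sql)
# INSERT INTO comments (body, approved) VALUES ('hello world' -- ; SHOW TABLES', 0)
# The quote closes the string literal early and "--" comments out the remainder,
# so the statement loses its closing parenthesis and the database reports a
# syntax error, which is exactly the misbehaviour being probed for.
```

A parameterized query, e.g. cursor.execute("INSERT INTO comments (body, approved) VALUES (%s, 0)", (probe,)), would pass the whole probe as a single value and nothing odd would happen.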

Management thought I was nuts - these were guys with degrees! They came with great recommendations! And they're cheap! What does one bug matter? But with SQL injection being the now #2 security vulnerability [sans.org], who's whining now?

Attention to correctness is the bedrock trait of a good developer. Everything else comes second; security is just one property of correct code.

uhhh (1)

buddyglass (925859) | more than 4 years ago | (#31332946)

"Conventional wisdom" depends on who you ask. The convention wisdom I've heard is that OSS is actually more secure. More eyes, etc. The flip side of his analysis is that while OSS was no more vulnerable than closed source it was also no less vulnerable, which would suggest the closed source model is equally capable of producing secure code.