Michal Zalewski On Security's Broken Promises

Soulskill posted more than 3 years ago | from the always-get-it-in-writing dept.


Lipton-Arena writes "In a thought-provoking guest editorial on ZDNet, Google security guru Michal Zalewski laments the IT security industry's broken promises and argues that little has been done over the years to improve the situation. From the article: 'We have in essence completely failed to come up with even the most rudimentary, usable frameworks for understanding and assessing the security of modern software; and save for several brilliant treatises and limited-scale experiments, we do not even have any real-world success stories to share. The focus is almost exclusively on reactive, secondary security measures: vulnerability management, malware and attack detection, sandboxing, and so forth; and perhaps on selectively pointing out flaws in somebody else's code. The frustrating, jealously guarded secret is that when it comes to actually enabling others to develop secure systems, we deliver far less value than could be expected.'"


It's true. (2, Informative)

Securityemo (1407943) | more than 3 years ago | (#32297354)

Computer security will kill itself.

No, du-uh. (0)

Anonymous Coward | more than 3 years ago | (#32297946)

When you have an entire industry focused on patents, copyrights, regulation and litigation, where is the money to develop tighter security going to come from?

Re:No, du-uh. (0)

Anonymous Coward | more than 3 years ago | (#32299570)

The problem is that Microsoft has consistently failed us all. This catastrophe must be laid squarely at their feet. Windows powers over 90 percent of the world's desktops, so the solution must start there: on the desktop. MS has had decades to rectify this untenable state of affairs, and they have so far proven themselves grossly incompetent. The only solution, and the one I discovered, is to quite simply switch [apple.com] to an alternative. [ubuntu.com] Any [freebsd.org] alternative. [chromium.org] Otherwise, the Windows monoculture will be the downfall of us all.

Re:No, du-uh. (1)

FormOfActionBanana (966779) | more than 3 years ago | (#32300296)

That's crazy talk. Microsoft has led the way in principles of application security and secure web frameworks... they didn't exactly blaze the trail with managed language runtimes and secure-by-default, but to say they have caught up is an understatement.

Re:No, du-uh. (0)

Anonymous Coward | more than 3 years ago | (#32300380)

I'll be sure to tell the customers who bring their virus-ridden PCs into my shop for the trimesterly cleansing and wallet-emptying ritual that you said that. I'm sure they'll feel much better. In the meantime, I'll also be sure to mention it to the friends and family members I successfully switched to one of the aforementioned alternatives, so that they no longer have to come to me for my services. I'm sure they'll have a good laugh.

A lot of astroturfing^H^H^H^H lip service has been paid to MS' supposed security turn-around of late. Yet Windows PCs are as virus-, spyware-, and malware-laden as ever. When Windows has as few security breaches as the average Mac or Ubuntu box, I might start to believe the bullshit coming out of Redmond and its sycophants (read: people like you). Until then, don't waste your breath.

Re:No, du-uh. (1)

FormOfActionBanana (966779) | more than 3 years ago | (#32300500)

Oh yeah, I sort of did forget about the zombies... I work in application security and probably have kind of a narrow view.

Re:No, du-uh. (0)

Anonymous Coward | more than 3 years ago | (#32300648)

Microsoft has led the way in principles of application security, secure web frameworks...

You are flat-out wrong and you obviously know nothing. There is not a single thing Microsoft has done in these areas that wasn't done elsewhere first, and better. You almost sound like one of those iPhone fanbois who think Apple invented the smartphone. Let's see. ASLR: PaX, then OpenBSD. DEP: a pathetic implementation of the NX bit, an idea that dates from the '80s. MAC: thought of long before MS even existed. Sandboxing: talk to IBM, in the '60s. Managed secure runtimes: Sun did it better the first time with Java.

Basically, MS is a leader in nothing but bullshit, their security history is a joke, and you look like even more of a joke for defending them.

Sarcasm becomes you. (1)

reiisi (1211052) | more than 4 years ago | (#32301650)

nt;

Re:It's true. (0)

Anonymous Coward | more than 3 years ago | (#32299078)

"...laments the IT security industry's broken promises and argues that little has been done over the years to improve the situation. From the article: 'We have in essence completely failed to come up with even the most rudimentary, usable frameworks for understanding and assessing the security of modern software; and spare for several brilliant treatises and limited-scale experiments, we do not even have any real-world success stories to share...."

The bottom line is that IT security isn't anywhere near as important as the security-oriented folks want it to be. Security is a bit like bandwidth... I'll pay for bandwidth up to a point, but eventually the tiny, imperceptible improvement from the next megabit per second isn't worth the extra cost. Similarly, I'll pay for some security... passwords on all my sites... maybe password-rule conformance... limited single sign-on capabilities in some cases... secured transport layers... but I won't pay (in most cases) for things like complex keyring-based security, enterprise (or industry) wide single sign-on projects, encrypted storage that isn't transparent... etc.

Why won't I pay for those things? It's not because they're not valuable. It's that they also carry a cost that makes them less attractive (complex keyring-based security generally has high administrative costs, enterprise single sign-on initiatives always have one more thing they need to hook into and can't, encrypted storage tends to increase my development costs, etc.). So, for most businesses, the cost/benefit analysis tends to point to "wait... maybe it'll get better in 5 years and we won't have to worry about this issue..." After all, even something as basic as transport-layer security wasn't that "basic" 10 or 15 years ago. So there's potentially some wisdom in waiting on security-related efforts.

It'll Never Happen (1)

WrongSizeGlass (838941) | more than 3 years ago | (#32297372)

IT & PC security companies will never "fix" things or come up with a solid and secure foundation for computer security - because it would put them out of business.

Re:It'll Never Happen (4, Insightful)

fuzzyfuzzyfungus (1223518) | more than 3 years ago | (#32297590)

Do you actually think that all IT and PC security companies have a giant cartel going, where they all secretly agree to suck? Somehow including all the "independent security researchers", which includes anybody with a computer, a clue, and some free software?

Seriously? If there were some magic bullet, the temptation for one cartel member to make a giant pile of cash on it would be overwhelming.

Much more troublesome, for security, is the fact that there are no known methods of secure computing that are economically competitive with insecure ones, not to mention the issue of legacy systems.

You can buy a lot of low-end sysadmins re-imaging infected machines for what it would cost to write a fully proven OS and application collection that matches people's expectations.

Re:It'll Never Happen (-1, Redundant)

WrongSizeGlass (838941) | more than 3 years ago | (#32297706)

Do you actually think that all IT and PC security companies have a giant cartel going, where they all secretly agree to suck?

Of course not. And I'm not suggesting any type of "conspiracy" from individual companies or groups of them. It's just not in their best interest to "fix" the problem and it would be a poor business decision to do so. 'Adequate is good enough' is what I see from this industry.

The companies with true motivation to solve this problem are the OS vendors. MS has come a long way with Windows 7, and Apple & Linux do a pretty good job of issuing patches and updates, but there's still a lot of work to be done all the way around. The third-party software included with OS X & Linux distributions is usually where the security holes are, though that is getting better too.

Re:It'll Never Happen (1)

metacell (523607) | more than 3 years ago | (#32299582)

You miss the grandparent's point... if it were already possible to fix the problem, the first company to do so would kill off the competition and earn ****loads of money. All the profit from selling the solution would go to that company, so the gain would be very high, while the loss from shrinking the market would be distributed evenly between it and its competitors, so it would be comparatively low. Since competitors have no reason to be loyal to each other, the only way to prevent this would be for all the security companies to participate in a cartel.

This is one of the many reasons that competitive markets are good, while monopolies and cartels are bad.

Re:It'll Never Happen (1)

Jurily (900488) | more than 3 years ago | (#32297840)

If there were some magic bullet

Eliminate users?

Re:It'll Never Happen (3, Funny)

maxwell demon (590494) | more than 3 years ago | (#32298054)

I think normal bullets are sufficient for that. Unless some of the users are wizards, of course.

Re:It'll Never Happen (0)

Anonymous Coward | more than 3 years ago | (#32298030)

Sigh. WrongSizeGlass said nothing of the sort, people. Why do strawman arguments keep getting modded up to 5, Insightful?

Re:It'll Never Happen (1, Funny)

syousef (465911) | more than 3 years ago | (#32298554)

Do you actually think that all IT and PC security companies have a giant cartel going, where they all secretly agree to suck?

They are called security conferences and 'best practice' documents.

Seriously? If there were some magic bullet, the temptation for one cartel member to make a giant pile of cash on it would be overwhelming.

They appear to have found the magic bullet. It is called "the principle of least privilege". Basically, they take away your ability to do anything but log on. Then, when you shout loudly enough that you can no longer do your job, they make you fill out so much paperwork that you'll never want to ask for access again. Finally, when you have just enough access to do enough of your job not to get fired (ineffectively and poorly), they continue to make you justify the access you gained in endless meetings, emails, reports, etc.
 

Re:It'll Never Happen (1)

Will.Woodhull (1038600) | more than 3 years ago | (#32299022)

With the resounding success of Win3.0, Microsoft demonstrated that you don't need to provide a secure computing platform if you market your product to customers who know nothing about the technology. Things have gone downhill from there.

Re:It'll Never Happen (1)

DdJ (10790) | more than 3 years ago | (#32299352)

Much more troublesome, for security, is the fact that there are no known methods of secure computing that are economically competitive with insecure ones, not to mention the issue of legacy systems.

You hit the nail on the head right there, with the "economically competitive" part. That's the problem.

Sure, if you've got a bunch of custom hardware running custom software that's thoroughly engineered and audited, and that never exchanges data with the rest of the world, you can have considerably higher security than someone using off-the-shelf parts and software on a commodity hardware platform connected to public networks. But it would be economic suicide. The value you give up in productivity is considerably higher than the value you gain in increased security.

This is why I'm glad to see experimentation in this area. What the game console vendors are doing with their platforms (e.g., "XNA"), what Apple is doing with the iPhone, what AT&T is doing with the Android-based-and-yet-locked-down Backflip... these experiments in "curated computing" might point in the direction of economically viable secure computing.

Maybe. Whether it works out or not, we'll learn something by having attempted it.

Dedicated machines, tying the hardware down. (1)

reiisi (1211052) | more than 4 years ago | (#32301704)

I'm wondering how the license issues will fall out on locked-down Android based devices, and that is part of the problem.

(Locked-down and tied-down are slightly different things.)

Re:It'll Never Happen (1)

Opportunist (166417) | more than 3 years ago | (#32300180)

Oh yeah, we also write our own malware, or else we'd go out of business. Didn't you get the memo?

I will disagree. (1)

khasim (1285) | more than 3 years ago | (#32300204)

Do you actually think that all IT and PC security companies have a giant cartel going, where they all secretly agree to suck? Somehow including all the "independent security researchers", which includes anybody with a computer, a clue, and some free software?

No. And no one is saying that.

Seriously? If there were some magic bullet, the temptation for one cartel member to make a giant pile of cash on it would be overwhelming.

You might want to look at this article.
http://www.ranum.com/security/computer_security/editorials/antivirus/index.html [ranum.com]

There is no SINGLE solution that is 100% EFFECTIVE for EVERY scenario.

But the current focus on blacklists is ineffective. At least whitelists would give SOME degree of protection.

Much more troublesome, for security, is the fact that there are no known methods of secure computing that are economically competitive with insecure ones, not to mention the issue of legacy systems.

Fuck legacy. Seriously. I'm tired of everyone trotting out "legacy" as if it were some natural law.

A 100% brand new system today will STILL be vulnerable to the same attacks that were directed at the previous version of that system. That is simply bad design.

You can buy a lot of low end sysadmins re-imaging infected machines for what it would cost to write a fully proven OS and application collection that matches people's expectations.

And why do you need that?

Why not just a series of steps getting from the current disaster to a state closer to "best practices"?

The fact that there will always be "malware" does NOT mean that the situation cannot be improved. Instead of millions of machines infected, how about we aim for an environment where only 100,000 machines are infected?

Re:It'll Never Happen (1)

Opportunist (166417) | more than 3 years ago | (#32300148)

I would gladly go out of business and do something useful. Maybe design a slick database. Or write a cool game. Instead I'm sitting here, improving my VM's ability to detect "pointless" loops in malware.

Allow me to tell you something: AV people tend to be among the best in the business. We know more about the Intel architecture than most people at Intel. We know quirks of Windows that even people at MS don't know about. (How do I know? If they knew, they wouldn't have put that crap in there!)

Do you REALLY think we couldn't find anything better to do with our time?

So let me get this straight (5, Insightful)

Monkeedude1212 (1560403) | more than 3 years ago | (#32297392)

When virtual security mirrors physical security, should people expect more from virtual security? How is a night watchman not a form of "vulnerability management" and "attack detection"?

All security, in general, is reactive. You can't proactively solve every problem; this philosophy goes beyond security. The proactive solution is to plan how to handle the situation when a vulnerability gets exploited, something I think virtual security has managed to handle a lot better than physical security.

Re:So let me get this straight (3, Informative)

fuzzyfuzzyfungus (1223518) | more than 3 years ago | (#32297668)

Probably because, at least in theory, the rules of Virtual security are more favorable?

In the real world, security is hard because matter is malleable. When an armored vehicle gets blown up, we don't say that it "failed to validate its inputs". It just didn't have enough armor. Even in cases where it survives, all it would have taken is a larger projectile, or one moving a bit faster... When somebody pulls off an SQL injection, though, it is because the targeted program did something wrong, not because of the inescapable limitations of matter.

The only real class of security issues that mirrors real-world attacks is DoS attacks and the like, because computational capacity, memory, and bandwidth are finite.

Re:So let me get this straight (1)

maxwell demon (590494) | more than 3 years ago | (#32297934)

In the real world, security is hard because matter is malleable. When an armored vehicle gets blown up, we don't say that it "failed to validate its inputs". It just didn't have enough armor. Even in cases where it survives, all it would have taken is a larger projectile, or one moving a bit faster... When somebody pulls off an SQL injection, though, it is because the targeted program did something wrong, not because of the inescapable limitations of matter.

Not all real-life attacks involve blowing something up. Ever heard of a locksmith? Ever heard of using social engineering to get into a building? Ever heard of brute-forcing a physical combination lock? Or of breaking into a safe with the help of a stethoscope?

The only real class of security issues that mirror real-world attacks are DOS attacks and the like

You are behind the times. Today's attacks are Windows attacks. :-)

Re:So let me get this straight (1)

Monkeedude1212 (1560403) | more than 3 years ago | (#32297958)

But that's where virtual security is LESS favourable than physical security.

There was a time when SQL injection wasn't even a conceived idea - so how do you protect against that kind of threat?

With physical security, the number of things involved is very small. It basically boils down to keeping bullets out or keeping people out, and both of those get a bonus from the more armour you add or the more people you hire.

With virtual security, you can take a million computer scientists and tell them to get cracking, but you can't guarantee they'll be prepared for the next big vulnerability.

Re:So let me get this straight (2, Insightful)

fuzzyfuzzyfungus (1223518) | more than 3 years ago | (#32298208)

The difference is that programs are mathematical constructs, and thus (if you are willing to take the time, and possibly constrain the set of programs it is possible for you to write) you can prove their behavior.

Re:So let me get this straight (1)

omuls are tasty (1321759) | more than 3 years ago | (#32299508)

In reply to this, I'd like to quote a rather famous computer scientist (Donald Knuth):

Beware of bugs in the above code; I have only proved it correct, not tried it.

correctness (1)

reiisi (1211052) | more than 4 years ago | (#32301718)

The most correct program in existence consists of exactly one instruction:

NO-OP

and it is unfortunately not correct in all contexts.

Re:So let me get this straight (1)

greed (112493) | more than 3 years ago | (#32298684)

Many attackable flaws--like SQL injection--are also bugs. That is, unsanitized data is put into something that's parsed for meaning.

(This is a long-known problem, at least in UNIX circles, as it is the SQL equivalent of command quoting problems.)
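To make the parallel concrete, here's a minimal sketch in Python's sqlite3 (the table and the hostile input are invented for illustration); the first query splices user input into text the database parses for meaning, the second passes it as data:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 's3kr1t')")

    name = "x' OR '1'='1"  # hostile "user name"

    # Unsanitized: the input is parsed as SQL, so the OR clause matches every row.
    print(conn.execute(
        "SELECT secret FROM users WHERE name = '%s'" % name).fetchall())
    # -> [('s3kr1t',)]

    # Parameterized: the bound value is treated as data, never as SQL syntax.
    print(conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)).fetchall())
    # -> []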

These bugs show up as crashes and odd behaviour with incorrect or unanticipated user input. (Ask Apple how much an incorrectly quoted "rm" command in the iTunes update script cost.)

You test for this stuff by feeding your program the whole range of input characters and string lengths. Don't just test expected inputs: above all, test unexpected inputs. It's often difficult for the people who wrote a program to do these kinds of tests, because they all know "that input is wrong, you wouldn't do that" - like someone who never thinks to try "1/0" because you can't do that.

So you run a test with every character in the input code set. You test with spaces, variable expansion characters, quote marks, the lot.

On the input side, you whitelist. If you know your program is safe with upper- and lower-case letters, verify the input only contains those characters. Never look for characters you know are bad; that means you have to know everything, even the future.

This way, your program is hardened against typos as well as attacks. You can give the user meaningful guidance as to what went wrong - "User names can only be 8 characters consisting of upper- and lower-case letters and underscore", for example - rather than "Segmentation fault (core dumped)" or "/tmp/a: file not found; user: file not found" or any of the other weird things that can happen when input gets over-parsed.
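A sketch of that whitelist approach in Python (the rule and the error message are just illustrations of the idea):

    import re

    # Whitelist: accept only what we know is safe; never try to enumerate
    # everything that might be dangerous.
    USERNAME_RE = re.compile(r"[A-Za-z_]{1,8}")

    def validate_username(name):
        if not USERNAME_RE.fullmatch(name):
            raise ValueError("User names can only be 1-8 characters "
                             "consisting of letters and underscore")
        return name

    # Unexpected input is rejected up front, with meaningful guidance,
    # instead of propagating into a shell, an SQL parser, or a core dump.
    for probe in ["alice", "a b", "'; rm -rf /", "x" * 100]:
        try:
            validate_username(probe)
            print("accepted:", repr(probe))
        except ValueError as err:
            print("rejected:", repr(probe), "-", err)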

Also, make sure you can deal with data damaged by I/O errors. Sure, you've got a lovely XML file. But the 7th block on disk got trashed when the file was being updated and the power went out, and now you get 512 ASCII NULs where that data used to be. What does your program do? Or the file pointers got trashed and you're actually getting the data that makes up /bin/ls instead of your config file... what happens?
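In the same defensive spirit, a hedged sketch of reading a possibly-damaged file (the file name and fallback policy are made up): validate before parsing, and treat parse failure as an expected case.

    import xml.etree.ElementTree as ET

    def load_config(path):
        """Read a config file without trusting that it survived the last write."""
        try:
            with open(path, "rb") as f:
                raw = f.read()
        except OSError:
            return None
        # A run of NULs (or other binary garbage) usually means the file was
        # torn by a crash or an I/O error; refuse to "parse" noise.
        if not raw or b"\x00" in raw:
            return None
        try:
            return ET.fromstring(raw)
        except ET.ParseError:
            # Half-written XML, or /bin/ls where the config should be.
            return None

    config = load_config("app-config.xml")
    if config is None:
        config = ET.fromstring("<config/>")  # fall back to built-in defaults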

Use the right APIs. On UNIX, we can pass arbitrary and dangerous arguments to subprocesses by using the 'exec' family of system calls, which do not invoke the shell (unless you invoke a shell script). So you can safely call, say, "rm" with "a funny name", because the shell won't be invoked and won't want to split the argument on spaces. Basically, never use "system" or any moral equivalent. (On Windows, you can't avoid this, because it doesn't work that way; anything you call with an argument vector will produce a space-separated string for the process to parse on its own. So be aware of that, and take great care in program invocation.)

Don't call an outside program for something your language and libraries can do in your process. Don't use system("rm file") when you can unlink("file"). Especially don't system("echo message"); I wish I were kidding about that one.
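In Python terms, the same advice looks roughly like this (the filename is deliberately nasty, and subprocess stands in for the exec family):

    import os
    import subprocess

    filename = "a funny name; rm -rf ."  # hostile, or merely unfortunate

    # Bad: a single string handed to the shell gets split on spaces, and the
    # "; rm -rf ." part would run as a second command:
    #   subprocess.run("rm " + filename, shell=True)

    # Better: an argument vector bypasses the shell, so the name reaches rm
    # as one argument; the POSIX "--" marker also stops a leading "-" from
    # being read as an option.
    subprocess.run(["rm", "--", filename], check=False)

    # Best: no subprocess at all for what a library call can do in-process.
    try:
        os.unlink(filename)
    except FileNotFoundError:
        pass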

Don't use wildcards in program commands. What will happen if someone creates really funny filenames, like "\n", or ";", or "; rm -rf .;^H^H^H^H^H^H^H^H^Hinnocent file"? If the host operating system allows every octet except NUL and / in a filename, can your program deal with that?

If you really must do wildcards, use something like 'find' which can be reliably controlled: a wildcard argument to find will never be treated as a shell command. (You're already bypassing the shell, right? Or quoting properly from a shell script?)

Learn the difference between "$*" and "$@". UNIX shell scripts can easily handle arbitrary weird names... but you have to be religious about quoting to do it. And learn about the POSIX "--" end-of-option marker, and the "./${filename}" trick, so you can call commands with arguments that begin with -.

All of this protects you from a variety of injection attacks. But it also protects against input errors, too; and as long as humans are on the console, there will be input errors.

Yes, but SQL injection was predictable (1)

Beryllium Sphere(tm) (193358) | more than 3 years ago | (#32298786)

That particular example is a bad one for the point you're making.

Things happen when you have control logic and peripherals.

By "peripherals" I mean anything the code can control. It could be a database, or a Space Shuttle main engine.

Dan Bernstein's theory, which he sharply distinguishes from least privilege, is to ruthlessly eliminate the code's control over anything not actually required. No matter how complex the code, it can't do anything that the computer can't. No compromise of my laptop could damage a shuttle main engine. Sandboxing is an attempt to implement this philosophy.

By "control logic", I mean anything that is an input that has results. A mouse is control logic. A radio button on a form is control logic that is simple enough to analyze. A web browser is control logic that is beyond definite analysis.

So a web form that builds a SQL statement from user input should have set off alarm bells on general principles, because it's allowing a malicious user to edit code in a complex programming language that has control over a database.

Re:So let me get this straight (1)

hairyfeet (841228) | more than 3 years ago | (#32298784)

Actually, I was thinking it would be the opposite. IRL you can hire a couple of thugs with Uzis and most criminals aren't even gonna THINK about trying to force their way in, but with PCs, as long as they are connected to the Internet, there isn't really an "it's your ass" kind of threat you can cook up, because attackers can use zombies and bounce all over the place. The actual risk to a hacker is quite low.

Also, IRL you can become allies with local law enforcement, and of course you have the law on your side. On the Internet you are basically a foreigner, because your attacker can sit safely in idon'tgiveafuckistan, complete with his paid-off police cronies, and you can't do shit about him.

So I would give the advantage to physical security over virtual as long as any Internet access is required. Of course without the Internet it is a whole lot easier to lock down a machine, which is why the military has concepts like air gaps [wikipedia.org] for classified networks.

Re:So let me get this straight (0)

Anonymous Coward | more than 3 years ago | (#32297848)

When Virtual Security mirrors Physical Security - people should expect more from virtual security? How is a Night watchmen not a form of "vulnerability management" and "attack detection"?

All security in general is reactive. You can't proactively solve every problem - this philosophy goes beyond security. The proactive solution is to plan on how to handle the situation when a vulnerability gets exploited, something I think virtual security has managed to handle a lot better than physical security.

The proactive solution is NOT just how to handle a breach or an incident. In fact, you're oversimplifying what it means to plan proactively. Correcting something is just one piece of the puzzle, and security has a LOT of moving parts. What we're getting at here, fundamentally (and yes, I believe getting back to basics once in a blue moon is helpful), is prevention, detection, and correction. There is usually too much emphasis on one of those when you look through the FUD and some of the more noteworthy breaches. In fact, it's sometimes surprising to see how much an organization will spend on prevention, with absolutely ZERO detection capabilities. If we're going to use the physical vs. logical parallel, it's like installing bars on your windows, getting a stronger door, and maybe even a guard dog - but never installing an alarm or any sort of alerting capability for when someone breaks through your door and gives your dog some Alpo to keep him quiet.

Planning how to manage, how to assess, how to prevent, how to detect, how to respond, how to react, etc., goes a long way, but it's still a daunting multi-year task in most organizations. And to pile on even more cliches, it does indeed start at the top. I hate to say it, but a federal mandate for all businesses (not just public, but private as well) to implement some sort of IT governance could potentially be a solution. The volumes of data are growing, but the budgets and FTEs to protect them are not.

Re:So let me get this straight (2, Insightful)

melikamp (631205) | more than 3 years ago | (#32298792)

When Virtual Security mirrors Physical Security - people should expect more from virtual security? How is a Night watchmen not a form of "vulnerability management" and "attack detection"?

I agree about the physical security: with software, we are confronted with a very similar set of problems.

All security in general is reactive.

I am not sure what that means. If I have a safe, for example, as a solution to my policy of restricting access, then I have something that is both proactive and reactive. The safe is proactive because it makes unauthorized access via a blowtorch much more expensive than authorized access via a combination. It is reactive because it makes undetectable unauthorized access prohibitively expensive. I don't see why software security is different.

I am not a professional security specialist, but, with all due respect, I think that I have a clearer understanding of security philosophy than the author of TFA. At times, he seems to be completely lost.

He spends a lot of time attacking strawmen. He analyzes some definitions, for example: "A system is secure if it behaves precisely in the manner intended - and does nothing more." I would not dignify this with a comment, because this is the definition of bug-free software, nothing else. "A system is secure if and only if it starts in a secure state and cannot enter an insecure state." Does this even mean anything, unless we define "secure state"? He is right about one thing: these are bad definitions. In fact, they are so bad, I can hardly see what they have to do with software security.

The focus is almost exclusively on reactive, secondary security measures: vulnerability management, malware and attack detection, sandboxing

He disses the reactive approach, even though it is one of the cornerstones of physical security. A system that cannot be compromised surreptitiously is often a less attractive target than one that can be, making it more secure in practice. And why is sandboxing in this list? Correct me, but it is the poster child of the proactive approach. If your hypervisor or interpreter or whatever sandbox you are using is bug-free and is effective at enforcing your security policy, then the entire process is completely secure.

Which brings me to my next point. I'll go ahead and try to give a reasonable definition of software security. The software is secure if it is effective at enforcing the given security policy. I don't have to say that it is bug-free: it's an underlying assumption, because if the software has a bug which allows for violation of your policy, then the software is not effective at enforcing it.

I am perplexed by the omission of the notion of policy from TFA. How can we start talking about security if we haven't defined what we are trying to secure ourselves from? Let's take one very popular policy, say, restriction of access to data. Despite all the complaints in TFA, the problem is largely solved. To be more specific, let us imagine that we have a policy as follows:

(1) Data has to reside on a networked host (otherwise the problem would be trivial).

(2) Data has to be available upon an authorized request over the network.

(3) Data has to be available upon an authorized local request.

(4) Data should not be even detectable by an unauthorized agent.

(5) The same networked host has to be able to service unrelated public requests (e.g., HTTP).

I am not a professional, but even I can probably slam together a system over a weekend to implement this policy: OpenBSD, one restricted account running Apache to serve public requests, another restricted account running Apache with SSL to serve the data, reasonable file permissions. Good luck compromising me without social engineering.

I guess what I am trying to say is, there is nothing wrong with our understanding of software security. The reason the field looks so bad is that people design overly complicated, contradictory, or outright brain-dead security policies.

But what if... (0)

Anonymous Coward | more than 3 years ago | (#32300226)

...you could proactively solve every security problem by exploiting some weakness that all of them have in common? And what if I told you that the same weakness is also responsible for the unreliability of software in general?

We should attack security issues the way pathologists attack contagious viral diseases. We must find something vital that is common to all the viruses and devise a vaccine that targets the common weak point.

FWIW, I believe that the weak point of malevolent code has to do with timing. So, we will not solve the security problem with our current software model. We will need a new model that incorporates timing at the fundamental level.

Time is the main missing ingredient in a Turing Machine. We need a new computing model.

timing? (1)

reiisi (1211052) | more than 4 years ago | (#32301734)

You mean the race condition between the marketing department's release schedule and the engineering department's bugzilla?

Reactive Security Not Necessarily Bad (1)

Maarx (1794262) | more than 3 years ago | (#32297406)

Reactive security is not necessarily a bad thing. Only by challenging today's security can we seek to inspire people to improve security for tomorrow.

I do, however, feel that security in the digital age is laughable at best. It turns out telling a computer not to do what it's told is significantly harder than telling it to do what it's told.

Re:Reactive Security Not Necessarily Bad (1)

ircmaxell (1117387) | more than 3 years ago | (#32297738)

Reactive security is not necessarily a bad thing.

It's indeed a very good thing - when coupled with preemptive security. To give an example in a "traditional" realm, preemptive security would be locking your doors when you leave the house (and setting an alarm, installing bars on the windows, etc.). Reactive security would be having the police come to your house with guns drawn because someone is inside (and later figuring out how they got in and closing that hole). Neither on its own would be sufficient to feel secure (or to protect anything worth protecting), but when you combine them, you have a very effective security system - one the entire world uses.

If you ask a normal (non-geek) person if they would leave their house unlocked during the day in a bad neighborhood, what do you think they'd say? But if you ask them to not click links from reputable sources (or those that look suspicious), they look at you like you're crazy. THAT's the problem with preemptive security today. Not that it's hard or costly, but that the normal users are not convinced that they should care at all...

Re:Reactive Security Not Necessarily Bad (1)

ircmaxell (1117387) | more than 3 years ago | (#32297982)

That should have been "not click links from non-reputable sources"... Damn lack of an edit button...

Re:Reactive Security Not Necessarily Bad (1)

maxwell demon (590494) | more than 3 years ago | (#32297992)

If you ask a normal (non-geek) person if they would leave their house unlocked during the day in a bad neighborhood, what do you think they'd say? But if you ask them to not click links from reputable sources (or those that look suspicious), they look at you like you're crazy. THAT's the problem with preemptive security today. Not that it's hard or costly, but that the normal users are not convinced that they should care at all...

Not clicking suspicious links is not equivalent to locking your door. It's equivalent to not trusting random strangers who show up at your door and ask you to let them in. Quite a lot of people fail at that security measure in real life, too.

Re:Reactive Security Not Necessarily Bad (1)

Maarx (1794262) | more than 3 years ago | (#32298052)

I think the lack of preemptive security is self-balancing. Those who do not believe preemptive security is necessary are those who fall victim and promptly change their tune.

It is unfortunate that such victims traditionally pay an exorbitantly disproportionate price for their misconception (identity theft victims, for example), but again, I believe the system to be self-balancing. If the thousands and thousands of bone-chilling horror stories about identity theft aren't enough to get you to take it seriously, what more do you expect us to do?

One must also consider another angle: much of what we would call "preemptive security" in the physical realm still falls under the blanket term "reactive security" when translated to the digital realm. Allow me to provide an example:

In the physical realm, one looks at their house and says: "Now, if I was a thief, how would I get inside? I would go through the windows. I should install bars." This gets labeled as "preemptive security".

But how does this translate to the digital realm? Your average user, perhaps even, unfortunately, your average programmer, lacks the creativity and foresight to consider how programs might be exploited. Instead, you task other people with figuring out how to get inside, so that you can turn around and fix it. This is commonly referred to as "penetration testing". Now, is this "preemptive security" or is this "reactive security"? (Hint: That's not an easy answer)

Consider, finally, the notion of the "white hat" hacker that performs penetration testing on his own, and then publishes this, either privately or publicly, in order to force the offender to increase their security. Is this "preemptive security" or is this "reactive security"? If this is "reactive security", what exactly would you call "preemptive security"?

Wrong approach? (1)

RyuuzakiTetsuya (195424) | more than 3 years ago | (#32297450)

I just get this feeling like the approach is all wrong to security.

At the heart of the problem is the fact that CPUs generally aren't designed with security in mind. I blame Intel, ARM, Motorola, IBM, and anyone else I can. CPUs just execute the code they're told to execute. NX, ASLR, and other "security" features don't work, particularly when the underlying architecture itself is flawed.

Well, no, IBM gets a pass. Given that the PS3 has yet to see a major exploit, I believe that the Cell may have security done right.

Re:Wrong approach? (2, Informative)

mrnobo1024 (464702) | more than 3 years ago | (#32297686)

The underlying architecture is fine. Ever since the 286 it's been possible to run code while limiting it to accessing only a specified set of memory addresses. What more is it supposed to do? It's not the CPUs' fault that OSes are failing so hard at the principle of least privilege.

They're just "executing code they're told to execute"? Well, of course - do you want them to refuse to execute "bad" code? If so, please show me an implementation of an IsCodeBad() function.

Re:Wrong approach? (0)

Anonymous Coward | more than 3 years ago | (#32298428)

http://www.cs.cornell.edu/talc/overview.html

Here you go. The idea comes from Java, but is taken down to the assembly level.

There Is a Way to Eliminate All Bad Code (0)

Anonymous Coward | more than 3 years ago | (#32300526)

There is a way to discover bad code. It has to do with timing. If timing had been part of our computing model from the beginning, we would have no trouble identifying bad code. Why? Because malicious code cannot help changing, even if slightly, the temporal signature of a software system. It's time to retire the Turing computing model, because time is its main missing ingredient. It's all in the timing.

Read How to Construct 100% Bug-Free Software [blogspot.com] . The only problem is that we need a new computing model. The current one obviously sucks.

Re:There Is a Way to Eliminate All Bad Code (1)

Altrag (195300) | more than 4 years ago | (#32301582)

I'm assuming I'm misreading something here? The article is a little light on details, but it sounds like all they're doing is constructing automated unit tests by brute-force running each possible input through an existing correct routine, and then applying those tests to later modifications of said routine. Which strikes me as remarkably bad in several ways:

- You need to know that the routine is correct in order for the "diagnostic subprogram" to create a correct set of tests. But if you already know it's correct, why would you ever need to change it? If your answer is "requirements might change", then you've basically declared that the routine is no longer correct, which immediately implies your diagnostic tests are no longer correct, so any flags they throw up can't be trusted and are therefore useless.

- Even for a computer, it's impossible to test every input (see the arithmetic sketched after this list). For a single standard 32-bit integer you have 4 billion possibilities; a modern fast computer could probably perform a simple operation 4 billion times in a few hours, and chances are anything complex enough to fit the article's purposes wouldn't qualify as a "simple operation". Never mind multiple inputs: two 32-bit integers would make the test infeasible even on the fastest modern computer, and a string with no length restriction can't possibly be fully tested by brute force on any finite system, since the number of possible inputs is infinite. So you start having to come up with input classes (integers greater than 100 shouldn't be valid for a "percentage", for example). But to do that you need to know what the integer means, and short of following some VERY strict variable-naming conventions, there's no way to know. I could call it percent, or iPercent, or iPct, or p, or iDontLikeNamingMyVariables. Or heck, I decide I want tenths of a percent but that floats aren't worthwhile, so now my "percentage" ranges from 0..1000 and the routine implicitly divides by 10 (possibly by just plugging a decimal point into the right spot while drawing the number on a display - no "real" division done). No automated routine can ever fill this in. Even a human would have trouble figuring out what I mean if I name my routine something like "PrintP10(int p)"; they would have to analyze the routine itself to see what it does with the integer, and then possibly go back and analyze its context if the routine alone wasn't obvious enough.

- Even should this somehow prove tractable, many "bugs" are in the design, not the code. Any automated test routine like this can only find code bugs, which pretty much amount to crash bugs and formatting issues (a really smart routine might parse out printf/DB-access calls and do some heuristic checking that %s's are handled right, or that SQL queries are properly escaped to avoid injections, etc., but heuristics aren't 100%). On the other hand, if I design a system to send a plain-text password over an unsecured link, or to write it to a directory that happens to be world-visible via the FTP site, that's not something that can be automatically detected (especially the latter - your program might have been developed on a system that didn't even have Internet access, never mind a poorly configured FTP server).

- UI elements tend to be horribly difficult to test automatically. All you can do is inject some mouse clicks and keyboard events and hope that the OS treats them the same as real mouse/keyboard events. Consider injecting events to click a button: they would probably consist of (a) moving the mouse over the button and (b) firing a click event. Chances are step (a) will be unrealistic - it will almost certainly be a direct positioning (i.e., a cursor jump), whereas a real user would have to move the mouse across the screen, and moving the mouse across the screen may well hit some other control with an OnEnter or OnLeave trigger that interferes with the "real" operation of your button but doesn't show up in testing. Of course, you could toss in a series of mouse-movement events jumping only a little at a time, but that still doesn't catch the case where I'm distracted and spinning my mouse in little figure 8s for a minute or two before clicking your button, or where I've stepped away from the computer and the screen saver ends up interfering in some manner. Again, the possibilities are infinite, and no finite test can ever hope to cover them all.
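Rough arithmetic behind the "can't test every input" point above (a sketch in Python; the rates are illustrative assumptions, not measurements):

    # Exhaustive input testing does not scale past one small input.
    GENEROUS = 1e9    # trivial checks per second, absolute best case
    REALISTIC = 1e6   # checks per second if each test really runs the routine

    one_int = 2 ** 32     # all values of a single 32-bit input
    two_ints = 2 ** 64    # all pairs of 32-bit inputs

    print(one_int / REALISTIC / 3600, "hours")               # ~1.2 hours
    print(two_ints / GENEROUS / (3600 * 24 * 365), "years")  # ~585 years
    # An unrestricted string has unboundedly many values, so no finite test
    # budget covers it -- which is why input classes are unavoidable.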

What we need aren't more testing methodologies and ideals - we've got loads of those, of varying degrees of usefulness. What we need are languages and tools that reduce the amount of testing necessary. Garbage collection is HUGE in this area. Sure, you still get null pointers here and there, but GC eliminates most of the concept of memory leaks, freeing up the programmer to worry about other things. Even if GC doesn't work as well as an expert's own memory handling, it still works well enough for 99% of the applications out there, and very few people are expert enough to actually do better (no matter what they claim).

Bounds checking is another great one, and it's not even recent. It pretty much eliminates things like stack-smashing attacks and those really annoying bugs where some routine somewhere misbehaves and cooks a piece of data that isn't (legitimately) used until two hours later - often very hard to track down the original culprit! Bounds checking tends to be considered a waste of processor time by a lot of programmers, but unless you REALLY need the speed, the compromise is almost always worth it.

I think we need more things like that: features designed into the languages and APIs that make it easy to do the right thing. Compare something like the old Win32 CreateWindow() routine against the modern WinForms Form class. Under Win32 I would have to create contexts and handles and set dozens of flags and class IDs and whatnot. With WinForms it's simply Form form1 = new Form();. Here's the first link [falloutsoftware.com] from Google searching for "opengl create window": a dozen pages of pixel-format descriptors and other such nonsense. Why can't I just call GLCreateWindow(); and be done with it? I'm sure all of those options are useful to, say, commercial game developers, and they shouldn't be removed. But it would make a programmer's life a billion times easier if they didn't have to start their very first GL program with 300 lines of cut-and-paste code that they don't really understand, taken from the first website they found on Google. There should be a nice simple API call that sets up a basic window using generic defaults. If my program ever gets to the point where I'm running into trouble with the default "cAuxBuffers", I can go back and figure things out then. This stuff should be something you deal with only if you have to - not something you have to deal with before you even start the "real" work. API designers forcing you to do it the latter way just opens up the world to bugs from people who cut and paste code without understanding it. And why on earth do I still have to write my own model-loading routine? Pick a model format and standardize on it already! Pick a texture format and standardize on it. Pick an audio format and standardize on it. As noted above, I'm not suggesting removing the ability to do it myself if I need to, but all APIs should come with an "easy to use" mode where as many options as possible are hidden away as defaults, so that people who don't need the extra power can do things quickly, easily, and still correctly.
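A sketch of that "easy mode with an escape hatch" idea in Python (the function, options, and defaults are all invented for illustration):

    def create_window(title="Untitled", width=640, height=480, **advanced):
        """One call with sane defaults; experts override via keywords."""
        settings = {
            "title": title, "width": width, "height": height,
            # The dozens of pixel-format/buffer options live here, hidden,
            # preset to values that are right for the common case.
            "depth_bits": 24, "aux_buffers": 0, "double_buffer": True,
        }
        settings.update(advanced)  # the escape hatch for the 1% who need it
        return settings            # stand-in for a real window handle

    create_window()                           # beginner: it just works
    create_window(width=1920, aux_buffers=4)  # expert: full control retained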

Re:Wrong approach? (1)

reiisi (1211052) | more than 4 years ago | (#32301756)

So, your suggestion is to get rid of CPUs?

Should I translate that to, let's convert all of our software to run on dedicated finite-state machines? One machine per program?

Classic Case? (0)

Anonymous Coward | more than 3 years ago | (#32297462)

To me this article just reads as another standard "we're doing it wrong" piece, with a whole lot of why but not an iota - not even a smidgen - of how to get to doing it right. What is with people who constantly harp on how we could be doing something differently or better, but can't come up with so much as a rough concept of how to do it differently or better, and use a lot of unrelated "whys" to get the point across?

Re:Classic Case? (1)

Beryllium Sphere(tm) (193358) | more than 3 years ago | (#32298910)

Getting rid of illusions is a valuable service even without proposing a fix, and it's not the job of the fire alarm to put out the fire.

Sometimes saying there is no solution frees up resources to adapt and cope. If you call the fire department and say "my potassium stockpile has caught fire", the best thing you can tell them is that they need to fall back, protect the other buildings, and let it burn itself out.

I still try to protect my clients, but part of that is to warn them that certain problems are unsolvable with today's mainstream tools.

It's about time, really. (1)

reiisi (1211052) | more than 4 years ago | (#32301826)

Bosses keep saying, why re-invent the wheel?

If our wheels are triangular and Microsoft keeps selling us on the idea that wheels are supposed to be triangular, then we need more people to tell it like it is.

Security is hard (2, Insightful)

moderatorrater (1095745) | more than 3 years ago | (#32297518)

Computer security is roughly equivalent to real-world security, only the malicious agents are extremely fast, can copy themselves at will, and can hit as many targets as they want simultaneously. When considered from the point of view of real-life security, our software security problems seem almost inevitable.

The central insecurity of software stems from the fact that security requires time and effort, which makes it hard to get management to fully commit to it, and from the fact that nothing in the world can make a bad or ignorant programmer churn out secure code. Solid steps have been taken that have helped a lot, and programmers are getting better educated, but at the end of the day security requires a lot of effort.

It's Time to Abandon the Turing Computing Model (0)

Anonymous Coward | more than 3 years ago | (#32298644)

Almost every major problem in computer science is the result of our infatuation with the Turing machine. The problem with the Turing computing model is that time is not an inherent part of the model. Timing is the key to solving the cyber security and reliability crises. Turing is the problem, not the solution.

Check out this short discussion [nitrd.gov] at the new Federal Cybersecurity R&D Forum.

Re:It's Time to Abandon the Turing Computing Model (1)

jolson74 (861893) | more than 3 years ago | (#32300994)

No, no, no! You've got it all wrong. It isn't the infatuation with Turing Machines that is the problem. It is the infatuation with *Networks*. Once we eliminate all means by which one computer can communicate with another we'll have the perfectly secure computer that we've all always dreamed of.

Actually, that might not go far enough. Some evil hacker terrorist-type might still be able to infect systems through software loaded from a disc or something! Better to just do away with the whole I/O system. I'm not sure why some folks in this industry are so infatuated with it anyways.

Re:It's Time to Abandon the Turing Computing Model (1)

reiisi (1211052) | more than 4 years ago | (#32301890)

You keep harping on the Turing model.

What do you suggest to replace it? Magic?

A decision machine is going to behave like a Turing machine.

Period.

Analyzing where and how decisions are made is useful. Getting rid of the decision step itself is not.

Turing is not the problem.

The problem is the real world, and the fact that models are never the real thing, or there would be no reason to build models.

Inventing a new programming model that somehow avoids the Turing bottleneck (instead of postponing it or spreading it out) could solve the problem, if you could solve the problem of solving problems without solving them.

The only way to escape entropy is to leave the mortal world.

QED, and if you don't understand why, well, anyway, the Turing model itself is not the problem.

Motivation (2, Interesting)

99BottlesOfBeerInMyF (813746) | more than 3 years ago | (#32297554)

Security can be widely deployed by enterprise IT, OS vendors, and possibly some hardware OEMs; the larger the footprint, the easier it is for real security to be rolled out. The thing is, while some IT departments have very good security, just as many have terrible security. Hardware vendors are unlikely to have the expertise, and are unlikely to be able to profit from using an integrated security platform as a differentiator. That pretty much leaves the OS vendors. MS has a monopoly, so they don't have much financial motivation to dump money into it. Apple doesn't really have a malware problem, with most users never seeing any malware, let alone making a purchasing decision based on fear of OS insecurity. Linux is fragmented, has little in the way of malware problems, and has niche versions for those worried about it.

I'm convinced malware is largely solvable. It will never be completely eliminated, but the vast majority could be filtered out if we implemented some of the cool new security schemes used in high-security environments. But who's going to do it? Maybe Apple or a Linux vendor, if somehow they grow large enough or their platform becomes targeted enough. Maybe if MS were broken up into multiple companies, each with the IP rights to Windows, they'd start competing to make a more secure product than their new rivals. Other than that, we just have to sit in the mess we've made.

Re:Motivation (1)

Dynedain (141758) | more than 3 years ago | (#32298120)

I'm convinced malware is largely solvable.

Not as long as social engineering is possible.

Re:Motivation (1)

99BottlesOfBeerInMyF (813746) | more than 3 years ago | (#32298336)

I'm convinced malware is largely solvable.

Not as long as social engineering is possible.

Social engineering relies upon deceiving a user into authorizing someone they don't know to do something they don't want. By making sure the user is informed of who is contacting them and what exactly that person is doing, and by making sure nothing very similar is ever legitimately required, yes, we can eliminate pretty much all cases of social engineering as it is generally understood.

Re:Motivation (1)

JesseMcDonald (536341) | more than 3 years ago | (#32298480)

A big part of social engineering is that users don't have the patience for the sorts of full explanations required to implement that. Consider Microsoft's UAC system, for example - that's close to what you described, but users tend either to hit "yes" as quickly as possible to get on with their work, or to disable it entirely.

Re:Motivation (3, Insightful)

99BottlesOfBeerInMyF (813746) | more than 3 years ago | (#32299134)

A big part of social engineering is that users don't have the patience for the sorts of full explanations required to implement that.

Why would they need patience, if you provide them with immediate verification of who they're talking to, whether they're affiliated with who they claim, and whether what's being asked of them is a normal procedure or something strange?

Consider Microsoft's new UAC system, for example—that's close to what you described,

No, not really.

but users tend to either just hit "yes" as quickly as possible to get on with their work

UAC is a study in how operant conditioning can be used to undermine the purpose of a user interface. It's a classic example of the OK/Cancel pitfall documented in numerous UI design books. If you force users to click a button, the same button, in the same place, over and over and over again when there is no real need to do so, all you do is condition them to click the button and ignore the UI. Dialogue boxes should be for the very rare occasions when default security settings are being overridden; otherwise the false-positive rate undermines their usefulness. Dialogue boxes should be fairly unique, and the buttons should change based on the action being taken: if your dialogue box says "yes" or anything else that isn't an action verb, you've already failed. Further, UAC is a failure of control. Users don't want to authorize a program either to have complete control of their computer or not to run at all. Those are shitastic options. They want to be told how much trust to put in an application, and they want the option to run a program without letting it screw up their computer. Where's the "this program is from an untrusted source and has not been screened: (run it in a sandbox and don't let it see my personal data) (don't run it) (view advanced options)" dialogue box?

Transference of responsibility (1)

khasim (1285) | more than 3 years ago | (#32300446)

I'm convinced that the software companies intentionally fuck up the interfaces like that. That way they are not responsible if the user installs something bad.

And, exactly like you posted, the user will NOT read the pop-ups after the first few. All they will see is "click 'yes' to continue", the same as they see on EULAs and every other pop-up. The same as "do you want to run this".

A basic white list would be better for the users than the current situation. And pop up a DIFFERENT box when the user tries to install anything not on that white list.

Who makes the white lists? Why not the anti-virus companies? Yeah, I know about McAfee. At least this way they'd be more effective. If you want to install Civ9 and the anti-virus app checks the hashes and sees that it is legit, then no scary warnings.

It should be easier to keep a list of software from major vendors than to try to track every possible variation of every piece of "malware" out there.
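A sketch of that hash-based white listing idea in Python; the list contents and its distribution mechanism (e.g. signed updates from an AV vendor) are invented for illustration.

```python
# Toy sketch of hash-based white listing; the list format and its
# distribution mechanism are invented for illustration.
import hashlib

# Maps known-good installer digests to product names. A real list would
# be vendor-maintained and cryptographically signed.
WHITELIST = {
    "0123456789abcdef" * 4: "ExampleGame 9.0",  # placeholder digest
}

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def check_installer(path: str) -> str:
    if sha256_of(path) in WHITELIST:
        return "known good: install with no scary warnings"
    return "not on the white list: show the DIFFERENT warning box"
```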

mess who made? (1)

reiisi (1211052) | more than 4 years ago | (#32301898)

Bill Gates and Steve Ballmer made the mess, and I'm doing my best not to sit in it.

x86 (1)

blankinthefill (665181) | more than 3 years ago | (#32297570)

Considering that the x86 platform is inherently insecure, I don't understand why this is surprising to people. Until we move away from the architecture, I don't think anyone who says they take PC security seriously is being as serious as they could be. And yes, I do realize that a new architecture is a huge change, and one that's going to be a long time coming... But it's something that WILL happen. We will eventually need to overcome the shortcomings of x86, and it's at that point that we can really start to take proactive PC security more seriously.

Re:x86 (1)

sexconker (1179573) | more than 3 years ago | (#32297704)

The ISA has nothing to do with it.
We're not talking about low level attacks, we're talking about the overall landscape at the top level.

We couldn't even get that shit right if we were given ideal hardware.

Re:x86 (1)

mrnobo1024 (464702) | more than 3 years ago | (#32297722)

Considering that the x86 platform is inherently insecure

How is it any more insecure than any other CPU architecture?

Re:x86 (1)

mikazo (1028930) | more than 3 years ago | (#32297758)

While placing a function's return address right next to its local variables and arguments on the stack is kind of a dumb idea, there are many higher-level security issues to work out that aren't specific to x86. For example, phishing, cross-site scripting, input validation, side-channel attacks, in-band attacks, and the fact that it's safe to assume the user is an idiot that will click on anything he's told to. The list goes on.
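Of the issues listed there, input validation is the easiest to make concrete. A minimal sketch of the standard fix for one instance of it, SQL injection, using parameterized queries:

```python
# Minimal input-validation example: parameterized queries instead of
# string concatenation. Sketch only, using an in-memory SQLite database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

user_input = "x' OR '1'='1"  # classic injection attempt

# BAD: the attacker's input becomes part of the SQL text itself.
#   conn.execute(f"SELECT secret FROM users WHERE name = '{user_input}'")
# That query returns every row, because the WHERE clause is always true.

# GOOD: the driver treats the input strictly as data, never as SQL.
rows = conn.execute("SELECT secret FROM users WHERE name = ?", (user_input,))
print(rows.fetchall())  # [] -- the injection string matches no user
```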

Re:x86 (1)

reiisi (1211052) | more than 4 years ago | (#32301918)

The other things are at least a little bit easier to deal with when the underlying execution model is stable.

Re:x86 will not fix passwords on post it notes / o (0)

Anonymous Coward | more than 3 years ago | (#32298396)

Re:x86 will not fix passwords on post-it notes, or other ways people fail to get a good password. And no, a password that changes each week and locks out anything that even looks like your past 10 is even more of a joke.

Too Expensive (2, Interesting)

bill_mcgonigle (4333) | more than 3 years ago | (#32297626)

It may be that a secure and convenient system is possible, but it's too expensive for anybody to sit down and write.

Rather, we're slowly and incrementally making improvements. There's quite a bit of momentum to overcome (witness the uproar when somebody suggests replacing Unix DAC with SELinux MAC) in any installed base, but that's where the change needs to come from, since it's too expensive to do otherwise.

If time and money were no object, everything would be different. More efficient allocation of the available time and money is happening as a result of Internet collaboration.

So, 'we're getting there' seems to be as good an answer as any.

Re:Too Expensive (2, Insightful)

0xABADC0DA (867955) | more than 3 years ago | (#32298828)

witness the uproar when somebody suggests replacing Unix DAC with SELinux MAC

The uproar is because SELinux is a complete pain and tons of work to set up correctly and completely. The SELinux policy for Fedora is ~10 MB compiled. Although it does work pretty well at preventing escalation.

But once you finally get the system locked down with SELinux, it still does nothing to prevent BadAddOn.js from reading mydiary.txt if it's owned by the current user.

What's really needed is:

- A hardware device to authenticate the user. Put it on your keychain, shoe, watch, whatever.

- OS that grant permissions for specific objects based on user input, not to processes. If the user selected mydiary.txt from the trusted input dialog then the browser can read it. Otherwise it can't, or it has to ask permission to do so (OS puts up a dialog).

These two things could reliably cover the vast, vast majority of all actual security needs, without hassles to the user and without remote automated attacks. It wouldn't be perfect, but it would be magnitudes better than what we have now. Unfortunately there's no mass market to provide a general-purpose hardware device like that, and software would have to be modified slightly.
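The second point is essentially the "powerbox" pattern from capability-based systems: a trusted, OS-level file picker opens exactly the object the user chose and hands the app only that open handle, never blanket filesystem access. A toy sketch, with all names invented:

```python
# Toy "powerbox" sketch: a trusted, OS-level file picker opens the file
# the user chose and hands the app only that open handle (a capability),
# never path access to the rest of the disk. All names here are invented.
import io

class TrustedShell:
    """Stands in for the OS-owned file dialog the user interacts with."""

    def ask_user_to_pick_file(self) -> str:
        # A real powerbox would draw a dialog the app can neither fake
        # nor script; here the user's choice is simply hard-coded.
        return "mydiary.txt"

    def grant(self) -> io.BufferedReader:
        # The handle itself is the permission: no dialog, no prompt.
        return open(self.ask_user_to_pick_file(), "rb")

class Browser:
    """The untrusted app: it can read granted handles, nothing else."""

    def upload(self, handle: io.BufferedReader) -> bytes:
        return handle.read()

# A BadAddOn.js running inside Browser has no way to name or open
# mydiary.txt on its own; it only ever sees what the user handed over.
```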

Re:Too Expensive (0)

Anonymous Coward | more than 3 years ago | (#32299174)

But finally you get the system locked down with SELinux and still it does nothing to prevent BadAddOn.js from reading mydiary.txt if it's owned by the current user.

Umm, no; stopping a bad program from accessing "secret" data is precisely the kind of thing that SELinux is designed to do (when properly configured, which is indeed a major pain).

If the user selected mydiary.txt from the trusted input dialog then the browser can read it. Otherwise it can't, or it has to ask permission to do so (OS puts up a dialog).

Oh yes, that's a brilliant idea; we all know that when a user is bombarded with Yes/No pop-ups he takes the time to read and understand what is happening instead of just hitting yes to make it go away.

Re:Too Expensive (1, Insightful)

Anonymous Coward | more than 3 years ago | (#32300174)

Oh yes, that's a brilliant idea; we all know that when a user is bombarded with Yes/No pop-ups he takes the time to read and understand what is happening instead of just hitting yes to make it go away.

That's exactly the point though. If you have a secure file selection dialog and a secure Finder that grants permission to the files selected by the user, when would a user ever get a permission dialog? They'd never get one, so if one came up they would actually read it. It prevents malicious code from accessing a user's files, but doesn't get in the user's way.

Can you name a single real instance where a program needs to access a user's files that the user did not select?

Re:Too Expensive (1)

bill_mcgonigle (4333) | more than 3 years ago | (#32299378)

The uproar is because SELinux is a complete pain and tons of work to set up correctly and completely.

Right, but I think this is largely the case because Unix DAC and SELinux MAC are mixed in an unholy matrimony. This causes things to get complicated, and frankly not many people care, so there's not enough work done to do SELinux right. An experimental distro that ripped out Linux DAC would be an interesting project.

The SELinux policy for Fedora is ~10mb compiled

For what, 14,000 packages?

OS that grant permissions for specific objects based on user input, not to processes. If the user selected mydiary.txt from the trusted input dialog then the browser can read it. Otherwise it can't, or it has to ask permission to do so (OS puts up a dialog).

Yes, a trusted object-based architecture is almost certainly part of the solution. Re-usable software components also allow particular code paths to be very well debugged and defended. I think we'll get back to an OpenDoc-type design eventually.

But, then Unix has no built-in functionality for this kind of thing. You'd need a system with the notion of objects, policies, and access vectors to do this right.

random thoughts from way out in left field (1)

reiisi (1211052) | more than 4 years ago | (#32301950)

If you put a lock on a box and leave the box in the middle of the highway, is the box secure?

I'm inclined less to access control lists (vectors, whatever) and more to ephemeral users (kind of like sandboxes for everything, but not really).

Re:Too Expensive (1)

Late Adopter (1492849) | more than 4 years ago | (#32301604)

The problem is two-fold. Under traditional Unix access systems, a process has the same privileges as the user who runs it. Otherwise every binary (or, to be more general, call it an application context) would need to have its own privileges, its own list of resources it can or can't access. Which is what SELinux does, and which is why your policy files are so large.

But even then, even if you do THAT, you need a way to elevate. Occasionally you'll want your browser to have read access to your files, say you're uploading photos to a website. So you need an API for that app context to request more permissions, and for something with those permissions (the user, if necessary) to grant them temporarily.

The last bit exists in traditional Linux; it's called PolicyKit. But you still need the application-specific, capability-based restrictions a la SELinux. And you need support from every app you want to be able to run. There hasn't been a whole lot of forward progress here.

Re:Too Expensive (1)

b4dc0d3r (1268512) | more than 3 years ago | (#32299420)

We're not getting there. As an example, McAfee has an on-access scan: any file read or written gets scanned.

A virus can disable that, so the workaround is to have a monitor program ensure the on-access scan is enabled.

That can be stopped, so you make the service un-stoppable.

That can be worked around, so the current solution is to have another monitor connect to the policy server (for lack of a better term), download what the settings should be, re-set all of the settings, and re-start any stopped services.

See where this is going? A man-in-the-middle attack, or setting up a route over a virtual interface, or whatever you need to do in order to send the "disable" policy back. Now McAfee is stopped and you can't do anything about it.

Antivirus is so bloated that Adobe ships Reader_SL.exe, which opens and closes every Adobe Reader file and then quits. In theory the antivirus program keeps a list of scanned programs so they don't need to be re-scanned unless the file changes.

Not sure if McAfee works that way, which seems like a nice next target for hacking, but more likely the filesystem cache makes the scan seem quicker.

The overhead is disastrous with all of the redundant checks, and now you need a dual-core processor to read your e-mail (using Outlook on Vista, I should specify).

The design is all wrong; the implementation is an unwinnable situation. The only thing we can do from the software side is to spend time writing secure code, or a framework that ensures code is safe (like Java or .NET or other managed-code approaches). Even then, there are parts that need to be written at a low level. Mathematically proven security is only as good as the proof, so we're back to square one. You're vulnerable, and should assume so.

Re:Too Expensive (1)

bill_mcgonigle (4333) | more than 3 years ago | (#32299504)

I don't seem to have these problems on my Fedora systems. My parents don't seem to have these problems on their Macintosh.

Windows would be a poor example of making any progress.

Promise = 3rd party markets (0, Flamebait)

drumcat (1659893) | more than 3 years ago | (#32297770)

There is no market in safe, secure computing unless you're a closed system. Macs are generally safer, but are a closed system. Windows isn't, but Microsoft isn't in the business of selling hardware. They're in the business of helping hardware become obsolete in order to sell more software. Until it's in Microsoft's interest to be secure, why would they worry? They NEED computers to "break" every few years. See: 1970's American autos.

fueling global warming, hey? (1)

reiisi (1211052) | more than 4 years ago | (#32301968)

Come to think of it, we had less automotive pollution (overall, not in certain specific areas) in the '70s, too.

Parasites and Hosts (1)

handy_vandal (606174) | more than 3 years ago | (#32297828)

Biology might provide useful metaphors -- in particular, I wonder if the parasite/host relationships might provide insights into attacker/defender models.

Parasites evolve in response to defense mechanisms of their hosts. Examples of host defenses include the toxins produced by plants to deter parasitic fungi and bacteria, the complex vertebrate immune system, which can target parasites through contact with bodily fluids, and behavioral defenses. An example of the latter is the avoidance by sheep of open pastures during spring, when roundworm eggs accumulated over the previous year hatch en masse. As a result of these and other host defenses, some parasites evolve adaptations that are specific to a particular host taxon and specialize to the point where they infect only a single species. Such narrow host specificity can be costly over evolutionary time, however, if the host species becomes extinct. Thus, many parasites are capable of infecting a variety of host species that are more or less closely related, with varying success.

Host defenses also evolve in response to attacks by parasites. Theoretically, parasites may have an advantage in this evolutionary arms race because of their more rapid generation time. Hosts reproduce less quickly than parasites, and therefore have fewer chances to adapt than their parasites do over a given span of time.

- Source [wikipedia.org]

Re:Parasites and Hosts (1)

ColdWetDog (752185) | more than 3 years ago | (#32298990)

Biology might provide useful metaphors -- in particular, I wonder if the parasite/host relationships might provide insights into attacker/defender models.

Of note is that we are seeing some more sophisticated 'ecologies' of malware coming into view: botnets that don't 'kill' the victim, malware that kicks off other malware. However, evolution will 'accept' a less-than-perfect response to an infection (i.e., not getting entirely rid of the thing) as long as the organism gets to successfully reproduce. I just don't think we're quite ready to accept (as a desired end result) a computer or process that has bits of nasty hung all over it but just manages to piddle along.

We're really targeting a much more stringent set of requirements.

Not so much in ix and ux environments (1)

Dex1331 (1810146) | more than 3 years ago | (#32297850)

Not to bash MS, but isn't it primarily Windows that is the worst culprit when it comes to IT security? No level of proactive security policy can possibly keep up with the vulnerabilities inherent in the MS OSs. With each iteration, new vulnerabilities are discovered and exploited before MS even knows they exist, literally days or even hours after new releases. This is not the case in ix or ux environments. Until MS puts way more effort into securing their OSs, the world will continue to be a digitally dangerous place.

Re:Not so much in ix and ux environments (3, Interesting)

lgw (121541) | more than 3 years ago | (#32298768)

Modern Microsoft OSs aren't really any more "inherently vulnerable" than anything else that might be viable in the consumer space. At this point it's more about getting the apps on board with the security model. In the server space, Win2008 R2 gets most things right: just about everything is off by default, the kernel itself is quite secure, and there's a good model for running as a non-admin and escalating when needed.

The biggest problems with Windows right now are apps that pointlessly need to run as admin, and apps whose sandbox is no narrower than "all the current user's data". All OSs are equally vulnerable to social-engineering trojans - if you can trick the user into giving you the root password, you win - but outside of that, Windows itself is only particularly weak in that a lot of the code is still new.

The real trick for security - for Windows and everyone else - is to adopt a model more like SE Linux, where you just aggressively limit what each app has access to. SE Linux is too hard to configure for the broad market, but a simpler approach, where each app is sandboxed in a VM with just the resources it needs, would shut down the "drive-by" attacks involving Flash, PDF, and similar apps. You can't do much about social-engineering trojans, but you can fix the rest with sandboxing/jailing that doesn't require the end user to configure anything.

The Web browser shouldn't be special in this regard - every app should be jailed automatically, requiring effort from app developers to broaden an app's scope, instead of the current model where app developers are asked to do extra work to narrow an app's scope.
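A crude sketch of that jailed-by-default direction on a Unix-like system, using only privilege dropping and resource limits; real per-app sandboxing (SE Linux, seccomp, per-app VMs) is far more thorough, and the helper names here are invented:

```python
# Crude sketch of "jailed by default" on a Unix-like system: run the app
# as a throwaway user, under resource limits, in an empty directory.
# Real sandboxing (SE Linux, seccomp, per-app VMs) is far more thorough;
# this only illustrates the default-deny direction. Requires root.
import os
import resource
import subprocess

def run_jailed(argv, workdir, uid, gid):
    def drop_privileges():
        resource.setrlimit(resource.RLIMIT_NOFILE, (64, 64))  # few open files
        resource.setrlimit(resource.RLIMIT_CPU, (30, 30))     # 30s of CPU
        os.setgid(gid)  # drop group first, then user
        os.setuid(uid)  # throwaway account that owns nothing
    return subprocess.run(
        argv,
        cwd=workdir,                     # empty dir, not the user's $HOME
        preexec_fn=drop_privileges,      # runs in the child before exec
        env={"PATH": "/usr/bin:/bin"},   # minimal environment
        check=False,
    )
```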

Nonsense (0)

Anonymous Coward | more than 3 years ago | (#32297854)

I don't know what he's talking about. Computer security is perfectly RUSSIAN H4X0R YURI HAS 0WNED THIS POST @%$2$^^PO(@!#$^_@($Y^[NO CARRIER]

Par for the course (0)

Anonymous Coward | more than 3 years ago | (#32297908)

"The frustrating, jealously guarded secret is that when it comes to actually enabling others to develop secure systems, we deliver far less value than could be expected."

The biggest names in security make their living by telling you what you're doing wrong, not what to do to fix it. You're going to spend money if you're scared. Everyone from the largest security companies down to pundits like Schneier spends most of their time Monday-morning quarterbacking: telling you what you did wrong and why you should listen to them, while offering no real way to predict how to secure yourself in the future.

Re:Par for the course (1)

DiegoBravo (324012) | more than 4 years ago | (#32301710)

> "The frustrating, jealously guarded secret is that when it comes to actually enabling others to develop secure systems, we deliver far less value than could be expected.'"

The frustrating, jealously guarded secret is that when it comes to actually enabling others to develop secure systems, NOBODY WANTS TO PAY AN EXTRA BUCK, UNTIL THE DAMAGE IS DONE.

Attacks are cheap (2, Insightful)

ballwall (629887) | more than 3 years ago | (#32298280)

I think the issue is not that we're bad at security; it's just that attacks are cheap, so you need the virtual equivalent of Fort Knox security on every webserver. That sort of thing isn't feasible.

The lock on my house isn't 100% secure, but a random script kiddie isn't pounding on it 24/7, so it's good enough.

Re:Attacks are cheap (1)

jd (1658) | more than 4 years ago | (#32301192)

Not every webserver, but you do need that level at every point of access (physical and virtual). So this would mean the gateway firewall and/or router, plus the web proxy server(s), plus whatever proxies the users use to connect to the outside world. The web proxies should be on a network that can only talk to the webservers and maybe a NIDS. This isn't overly painful because proxies are relatively simple: they don't involve dynamic database connections, they don't need oodles of complexity, the rights needed are very minimal, and the kernel can be one that is very minimal. It's far, far, far simpler than securing the web servers themselves.

The web servers need securing, yes, but if your only access is via a proxy and the proxy is locked-down, and potentially insecure services are only run on ports that are internal (ie: not visible to the proxies), then the risks are restricted to vulnerabilities on the web server and any applications it runs. The rest of the system is effectively masked off. If each thread of the web server is running with the minimum privilege required for that thread, you should be ok. If that's not possible to arrange, then use Mandatory Access Controls to effectively raise the privilege required to do anything hazardous to something above what the web server has.

In other words, take the individual aspects of security and break them down into manageable chunks. One box, one chunk. One big nasty in security is the number of permutations you need to consider, but if you've chunked up the security, you reduce the problem in each chunk to just that chunk, and you reduce the problem of the network to the permutations between each chunk. Everything else is a black box.

No matter how complex the situation, you will always be able to draw circles around "islands" of complexity with few interactions between islands. By treating each island individually, and the open seas between them as a separate problem, security becomes much easier. In some ways, this is how most physical security works - you protect the points of entry/exit in one way and protect the internal connections separately and distinctly. For a better example of a layered system, consider a castle with a moat/bailey system for the outer wall and a keep and other fortified buildings on the inside for different tasks. It's not impossible to break in, but it is hard. And it is much easier to build such a system than to fortify each of the interior buildings to the level of protection that its own fortification plus the external fortification provides. By extension, continuing to layer should offer better security still - and, indeed, some of the toughest castles ever built had two or more "outer walls". You could probably build one outer wall that offered the same level of protection, but at vastly greater effort and with all the risks the extra complexity creates.

developers' fault (1)

Lord Ender (156273) | more than 3 years ago | (#32298696)

In my experience, developers don't want Security anywhere near their products. We insist that they fix these "theoretical" and "academic" security problems, ruining their schedules and complicating their architectures.

Fine! Whatever. We will continue cleaning up your messes and pointing out the errors in your coding (which we warned you about). You can continue stonewalling us and doing everything you can to avoid us. We still get paid and you still get embarrassed.

Re:developers' fault (1)

Jaime2 (824950) | more than 4 years ago | (#32301128)

You're generally right, but I make it a point to write code with a watchful eye on things like limiting attack surface and granting least privilege. I'm usually foiled by the people who implement my projects. For example, it drives me nuts that our implementers are willing to put passwords in plain text in config files when my platform offers a command-line utility to encrypt them that is transparent to the application. Every time I'm involved in an implementation, the passwords get encrypted. By the time I get around to seeing the software again, someone has had to troubleshoot the system or change the password, and they've left it in the clear.

The guys on the other side of the wall are always the idiots, no matter what side of the wall you work on.
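A sketch of that encrypted-config idea, with the Python "cryptography" library standing in for the platform's own utility; the key handling shown is deliberately simplified:

```python
# Sketch of the encrypted-config idea, with the Python "cryptography"
# library standing in for the platform's own utility. Key handling is
# deliberately simplified; a real deployment keeps the key in a key
# store, never next to the config file.
from cryptography.fernet import Fernet

def encrypt_setting(plaintext: str, key: bytes) -> str:
    """What the command-line utility does once, at deploy time."""
    return Fernet(key).encrypt(plaintext.encode()).decode()

def load_setting(stored: str, key: bytes) -> str:
    """What the platform does transparently on every config read."""
    return Fernet(key).decrypt(stored.encode()).decode()

key = Fernet.generate_key()
token = encrypt_setting("s3cret-db-password", key)
assert load_setting(token, key) == "s3cret-db-password"
```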

Security is NP hard? (2)

istartedi (132515) | more than 3 years ago | (#32298798)

If you define security as being able to determine whether or not a program will reach an undesired state given arbitrary input, isn't that equivalent to the halting problem? Isn't that NP hard? I know that I generally force programs to halt when they're behaving badly, if they don't halt on their own.

Re:Security is NP hard? (1, Informative)

Anonymous Coward | more than 4 years ago | (#32301196)

[...] isn't that equivalent to the halting problem? Isn't that NP hard?

The halting problem is undecidable, not NP-hard: no algorithm can decide it at all, no matter how much time it is given.
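The classic reduction behind that correction, sketched as code: if a perfect "does this program reach a bad state" analyzer existed, it would decide halting. The analyzer here is hypothetical; the point is precisely that it cannot exist.

```python
# Sketch of the reduction: a perfect "reaches a bad state" analyzer
# would decide the halting problem. The analyzer is hypothetical; the
# point is precisely that it cannot exist.
def reaches_bad_state(program_source: str) -> bool:
    """Assumed perfect analyzer: True iff running the program ever
    reaches the undesired state. Cannot actually be implemented."""
    raise NotImplementedError

def halts(program_source: str) -> bool:
    # Wrap the target so the *only* path to the bad state is falling
    # off the end of the target program. Then the wrapper reaches the
    # bad state if and only if the target halts.
    wrapper = program_source + "\nenter_bad_state()\n"
    return reaches_bad_state(wrapper)
```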

jail+fine the execs (2, Insightful)

dltaylor (7510) | more than 3 years ago | (#32298954)

Until there are negative consequences for the execs, there will never be IT security because it costs money. If the execs of companies that have IT breaches were jailed for a couple of years (hard time, not some R&R farm) and personally fined millions of dollars, they would insist on proper security, rather than blowing it off. 'Course, these are the same guys who schmooze with, and pay bribes to, "our" elected representatives, so that's never gonna happen.

"Security is not a product, it's a process", and, since there's no easily calculated ROI on the time spent on securing IT, even when there's a paper process, it is so frequently bypassed that it might as well not exist.

Re:jail+fine the execs (1)

trims (10010) | more than 3 years ago | (#32300626)

We don't even need to go this far.

The solution to both the Quality and Security issue in software is Strict Liability.

We need to make software accountable to the same standard we require for ordinary goods: no more disclaimers of harm, no more dodging the Warranty of Fitness, and none of the other tricks we currently accept as par for the course in software.

Sure, it will slow down software development. But, we're no longer the Wild West here - we have a reasonable infrastructure in place, and it would help society as a whole to stop treating Software as some "special" case in product liability.

We've done this cycle innumerable times in (American) history: a new industry comes along, and everyone lets it run wild for a while, until it looks like the industry is maturing (that is, has evolved some baseline standards, processes, and generally accepted methods for conducting business). We then start to regulate the industry to get rid of the socially harmful aspects that inevitably have cropped up.

Software is now out of its infancy and well into adolescence. Time for us to start treating it like an adolescent and quit indulging its whims - time for some good firm regulation, like we do with every other industry.

For all those nay-sayers: sure, it would cost more time/money to produce software. But companies would spend more time on bugs and security than on (mostly) useless features; because, let's face it, the vast majority of software nowadays is 99% feature-complete. We're just gilding the lily, feature-wise, at this point. And, as pointed out elsewhere, security and quality aren't something the market has decided to reward. Companies have pushed the costs of lax security and poor quality onto society (gee, sounds like the financial industry), and we're all paying for it.

Right now, the Software and Finance industries both live by the mantra "Privatize profits, socialize costs". It's time that we decided this isn't appropriate anymore.

-Erik

Two problems fixing security (1)

BestNicksRTaken (582194) | more than 3 years ago | (#32299604)

#1. Getting management to say "OK we'll let the deadline slide, max out the budget and reduce some functionality/ease-of-use so we can fix the security flaws".

#2. Getting minimum wage Java programmers to understand/care about securing their code.

Things are not helped by the sad state of so-called security products from the likes of Symantec, which seem very popular with PHBs; they must have a lot of sales reps hanging around golf courses.

It's also a bit much, Google bitching about other people's security - Street View wardriving, anyone?

What a fucking retard (0)

Anonymous Coward | more than 3 years ago | (#32300354)

Sandboxing is not reactive at all and should be the default setup for many applications. Is this guy truly Google's security expert? What a joke.

Anyone with half a brain can avoid security threats; that people actively choose not to, for imaginary convenience, says a hell of a lot more about them than about anyone else.

Have a fucking clue (the bar is much lower than people pretend) so you can avoid doing stupid shit like running Microsoft software and clicking anything and everything and allowing whatever to run.
Update god dammit, and ditch bs unreliable software.
And if it takes writing down your passwords (but only the passwords) in an obfuscated manner, then by all means do so, as long as you choose long, hard passwords and keep them on your body (and if you can't stop pickpockets, you have an issue that can be resolved by becoming a tiny little bit smarter).

Computer security isn't much harder than not letting in people who claim they need to come in to switch the entrance carpets: if they don't have a key, don't give them access; it's not your problem. But if you don't have a locked door in the first place (most computers and most computer "security"), then smarten up, or none of it makes a difference.

Some of us (like me) NEVER have uncontrolled security issues (and yes, I browse porn, thank you very much, and download movies and TV shows, but not without taking sensible precautions, including choice of operating system), and it takes a bare minimum of PROACTIVE effort. Anyone fit to use a computer should be able to do the same if they actually found it important enough (but they don't).

Re:What a fucking retard (1)

FormOfActionBanana (966779) | more than 4 years ago | (#32301218)

Interesting points, but you are just a horrible person. Do you put on a balaclava and talk this way at dinner parties?

Completely depressing article (1)

FormOfActionBanana (966779) | more than 3 years ago | (#32300550)

I would hate to work in an environment where "it's hopeless, nothing we do today works" was the prevailing theme.

The Risk/Reward Ratio is Wrong (0)

Anonymous Coward | more than 4 years ago | (#32301958)

All the primary incentives are still tilted towards low cost, high functionality, and ease of use. The truth is that computers and software are now mass-market devices, little different from sneakers or pencils. In spite of all the security problems, people won't pay sufficiently, and they won't tolerate security gating and slowdown mechanisms. Not for the most part.

And there's huge denial or ignorance of what the goals of the bad guys are. I've heard many times, "I don't have anything of value on my computer. There's nothing to steal and therefore security isn't a problem." What these people do not appreciate is that ANY computer, especially on the Internet, is a valuable resource. It can do work, participate in botnets, and attack others. It can simply be a reservoir of malware code. There are thousands of reasons why every computer has value to the bad guys.

And it's not just security: people won't pay sufficiently for reliability either. In fact we had, and to some extent continue to have, secure systems. They are called mainframes and minis, and their cost and slowness to evolve meant that their market share is much smaller than it was 30 years ago. We had reliable systems too. They were called Tandems, MVS Sysplexes, VMS clusters, and so forth. Those too were too expensive and inflexible for the market and suffered huge market-share losses.

The answers are not unknown and commercial implementations have existed for years. The markets abandoned them in droves when simpler, cheaper, easier to use (and riskier) alternatives appeared.
