NSA Says Its Secure Dev Methods Are Publicly Known

samzenpus posted more than 3 years ago | from the nothing-special dept.

Trailrunner7 writes "Despite its reputation for secrecy and technical expertise, the National Security Agency doesn't have a set of secret coding practices or testing methods that magically make its applications and systems bulletproof. In fact, one of the agency's top technical experts said that virtually all of the methods the NSA uses for development and information assurance are publicly known. 'Most of what we do in terms of app development and assurance is in the open literature now. Those things are known publicly now,' Neil Ziring, technical director of the NSA's Information Assurance Directorate, said in his keynote at the OWASP AppSec conference in Washington on Wednesday. 'It used to be that we had some methods and practices that weren't well-known, but over time that's changed as industry has focused more on application security.'"

114 comments

Here's proof that... (4, Insightful)

Anonymous Coward | more than 3 years ago | (#34190728)

...it is definitely possible to write secure software if you simply follow sound, smart development methods and practices... and don't write half-assed, slipshod, thrown-together-in-a-hurry code.

I see it more like a proof that (1)

e065c8515d206cb0e190 (1785896) | more than 3 years ago | (#34190736)

security doesn't come from obscurity

Re:I see it more like a proof that (1)

Jeremiah Cornelius (137) | more than 3 years ago | (#34190756)

"Trust us. Honesty is our business."

-- Sincerely, the 'Spooks'.

Re:I see it more like a proof that (1)

LWATCDR (28044) | more than 3 years ago | (#34191192)

Programming is math. There really are no secrets in math. It is the same everywhere.

Re:I see it more like a proof that (1)

Jeremiah Cornelius (137) | more than 3 years ago | (#34191238)

Theory and Practice.

Different.

Re:I see it more like a proof that (2, Insightful)

K. S. Kyosuke (729550) | more than 3 years ago | (#34191290)

Actually, programming is one of the few disciplines where practice can be exactly the same as the theory - the bits and bytes are all the same; they don't break from material fatigue. And if you write software for which you have a proof of correctness, it will simply work correctly. Few other branches of human endeavor are free from the evils of the material world to such a degree.

Re:I see it more like a proof that (1, Insightful)

Anonymous Coward | more than 3 years ago | (#34191374)

Unless you've implemented every bit of the software stack, from the firmware to the OS to the compiler/runtime to the applications, you could potentially have issues.

So I think the "theory and practice are different" might still apply, as nobody has the luxury of the time to formally prove all elements of all these things. It would be a Herculean undertaking.

Re:I see it more like a proof that (1)

Zero__Kelvin (151819) | more than 3 years ago | (#34191930)

"So I think the "theory and practice are different" might still apply, as nobody has the luxury of the time to formally prove all elements of all these things. It would be a Herculean undertaking."

Even if you did have an infinite amount of time, there may also be errors in the proof. Even then, if all proofs are indeed 100% correct, one still runs into Godel Incompleteness [wikipedia.org] and what might be thought of as a variation of Heisenberg uncertainty [wikipedia.org] (i.e. the act of measuring changes the results), especially in hard realtime systems, where all possible permutations of the input stimulus can never truly be known.

Re:I see it more like a proof that (1)

LWATCDR (28044) | more than 3 years ago | (#34192130)

What?
You have taken two theories that you do not really understand and have mixed them up as badly as that stupid book Zen of Quantum Physics.
"Heisenberg uncertainty principle states by precise inequalities that certain pairs of physical properties, such as position and momentum, cannot be simultaneously known to arbitrarily high precision."
That has nothing to do with Godel's theorem, except that both are taken as proof that the Universe is not deterministic. They are not in any other way related.

And Godel does not imply that you cannot prove that a program is correct. In fact, if you read the Wikipedia links you posted, you would see that.

Re:I see it more like a proof that (1)

Zero__Kelvin (151819) | more than 3 years ago | (#34192324)

First of all, I said that it could be thought of as a kind of Heisenberg uncertainty... and then qualified what I meant with a parenthetical, which alluded to the Observer effect [wikipedia.org]. This is known as desirable conflation [wikipedia.org]. Secondly, if you cannot figure out how Godel incompleteness comes into play in a discussion about using mathematical theory to prove an almost infinitely complex set of axioms, then trying to explain it to you is certainly a fool's errand, even if I cannot prove it ;-)

Hint: from the very page you claim says nothing of the sort:

"The second incompleteness theorem shows that if such a system is also capable of proving certain basic facts about the natural numbers, then one particular arithmetic truth the system cannot prove is the consistency of the system itself."

Re:I see it more like a proof that (1, Insightful)

Anonymous Coward | more than 3 years ago | (#34192530)

Except that specific programs have a "finite" number of axioms.
Here is a trivial example.
You can write and prove a program that, when fed a list of three integers of 32 bits or less, will always return the largest. The range is limited, so it is provable.
Almost all programs deal with finite datasets, and computers have a finite set of states. Godel deals with infinite systems.
Godel says that not every possible program can be proven; it does not say that no program can be proven.
The key is to limit the input data. Once you do that, the system can become deterministic and thus provable.
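
To make that concrete, here is a minimal C sketch of such a trivially provable function (hypothetical illustration; max3 is just an illustrative name). The input domain is finite, every path terminates, and the postcondition can be stated and checked directly:

#include <stdio.h>
#include <stdint.h>

/* Returns the largest of three 32-bit integers. The domain is finite
 * (2^96 possible triples), so the postcondition -- the result is >= each
 * input and equal to one of them -- can be proven, or in principle even
 * checked exhaustively. */
int32_t max3(int32_t a, int32_t b, int32_t c)
{
    int32_t m = a;
    if (b > m)
        m = b;
    if (c > m)
        m = c;
    return m;
}

int main(void)
{
    printf("%d\n", (int)max3(3, 9, 7));  /* prints 9 */
    return 0;
}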

Why Math isn't the same as Software (1)

Zero__Kelvin (151819) | more than 3 years ago | (#34192722)

"You can write and prove a program that when fed a list of three integers of 32 bits or less will always return the largest. The range is limited so it is provable."

No, you cannot. You can prove that it should do so, but you cannot prove that it will always do so. For example, your assumption is that the interpreter or compiler behaves the way you think it does, when you have in fact operated on numerous unproved assumptions [bell-labs.com].

Re:Why Math isn't the same as Software (0)

Anonymous Coward | more than 3 years ago | (#34193680)

I think you don't know the meaning of the word 'proof'. From your perspective, nobody can ever prove anything at all.

In reality, all deductive logical proof relies on assumptions. Whether you believe the proof partially relies on whether you believe the assumptions. You may as well worry about random bit errors in the hardware, as whether the compiler is reliable. Or whether the hardware was fabricated correctly. Or whether reality is but a dream of mine, and you're not real.

So, you can argue about the reliability of the assumptions, but you are wrong to say it is not a proof.

Re:Why Math isn't the same as Software (1)

Zero__Kelvin (151819) | more than 3 years ago | (#34194792)

"I think you don't know the meaning of the word 'proof'. From your perspective, nobody can ever prove anything at all."

I know the meaning of the word proof. You just don't know the meaning of the word Software, which is my whole point. Math and Software are not the same, and mathematicians should stick to Math, and Software Engineers should worry about Software rather than Mathematical theory, as the two are quite different indeed.

Re:Why Math isn't the same as Software (0)

Anonymous Coward | more than 3 years ago | (#34197548)

heh, yes, I know the meaning of software. While I agree software isn't *all* about math, a large part of it is. You cannot have a proof without starting with "unproved assumptions", as you put it. Once again displaying that you don't know the meaning of the word.

Re:Why Math isn't the same as Software (1)

HeckRuler (1369601) | more than 3 years ago | (#34196520)

Or hardware fails. When an instruction to add 1 to the PC register simply doesn't, it can cause everything to fall apart. That's bad, but it's obvious. Worse still is when data gets a random blip. Now the program continues to function, but the output is wrong. And not necessarily obviously wrong. Just wrong enough to propagate.

If you are operating in the real world you must work on a system of probabilities. What's the acceptable rate for this thing to fail? If it can get kicked over weekly without fuss, then git'er'done. If it has to have five-nines uptime, you'd best have redundancy and fail-overs. If you're calculating the Xth digit of pi, then you need something to check your memory for errors in real-time.

This is a hard lesson for comp sci majors, but the real world isn't theoretical. I think a simple robotics course would really hammer this into their heads.

Re:I see it more like a proof that (0)

Anonymous Coward | more than 3 years ago | (#34192658)

You can talk about "Zero Kelvin", "Godel Incompleteness", "Heisenberg uncertainty" and other "geeky concepts" all you want. As long as you don't understand them, your assertions will continue to be nonsense.

Re:I see it more like a proof that (1)

K. S. Kyosuke (729550) | more than 3 years ago | (#34197670)

Unless you've implemented every bit of the software stack, from the firmware to the OS to the compiler/runtime to the applications, you could potentially have issues.

Actually, it's even better to use your own CPU - possibly something based on MachineForth. Mind you, modern CPUs have bugs as well.

Re:I see it more like a proof that (1)

Stray7Xi (698337) | more than 3 years ago | (#34192164)

Actually, programming is one of the few disciplines where practice can be exactly the same as the theory - the bits and bytes are all the same; they don't break from material fatigue. And if you write software for which you have a proof of correctness, it will simply work correctly. Few other branches of human endeavor are free from the evils of the material world to such a degree.

I disagree. If you're programming the OS it might be true (with narrow hardware compatibilities). However, as soon as you write an application for a user, theory is useless. Users do the strangest things to their OS. One user might throw away all RST packets at the firewall because they read about Sandvine when Comcast was throttling. Another user tried to fix his own Windows box, deleting important Windows registry keys, so Explorer freezes randomly. Another user overmounted a directory over /etc, so now there are users logged in that don't exist in /etc/passwd. All of these will break even the most basic assumptions an application programmer would have.

If your theory is broad enough to cover real-life scenarios with real, screwed-up people, then "in theory" is the same as "in practice". But if your theory is that good, then there is no point in testing software.

Re:I see it more like a proof that (1)

LWATCDR (28044) | more than 3 years ago | (#34192444)

You are confusing a mathematically correct program with one that will do the "right thing" no matter what the input is.
What a correct program will do is behave only in a deterministic way.
An example: if Explorer were a "correct" program and you deleted some registry keys, it would exit with an error message. A program that finds any input state outside of its specified range, one that cannot be healed, should terminate with an error condition. That is, if the OS were also "correct".
The real problem becomes one of cost. Very few applications are worth the cost of doing all the work to make them "correct". You will see it in aerospace applications but almost nowhere else.
And even FOSS software has cost. The programmers will often pay a large amount of the cost themselves. But even the users will have to pay some of the cost, in that it will take a lot longer to get the software and any new features.

The other cost is that the system must be "correct" from top to bottom. That is why the Shuttle doesn't use an i7, and why it takes so long and costs so much to get software and hardware upgraded in those types of systems.

Re:I see it more like a proof that (1)

RMH101 (636144) | more than 3 years ago | (#34194436)

Or "In theory, theory and practice are the same. In practice, they are different"!

Re:I see it more like a proof that (0)

Anonymous Coward | more than 3 years ago | (#34194578)

s

(re: /. Filter error: You can type more than that for your comment. Thanks. I don't want to)

Re:I see it more like a proof that (4, Insightful)

Applekid (993327) | more than 3 years ago | (#34190828)

That's a closed source/open source distinction. It has nothing to do with development methodology... except that there are more eyes when it's open.

As for whose eyes look at closed source, I'm pretty sure the NSA has plenty of great eyes looking over its code.

Re:I see it more like a proof that (1)

blair1q (305137) | more than 3 years ago | (#34191736)

Disagree. Having a correct methodology is more efficient than having many extra eyes that aren't following any particular methodology.

The trick is having a correct methodology, and applying it correctly.

Re:I see it more like a proof that (1)

TheRaven64 (641858) | more than 3 years ago | (#34196002)

except that there are more eyes when it's open

There are potentially more eyes. In practice, a lot of open source code is only ever read by its author, or occasionally by the person committing it if that's not the same person. In contrast, NSA code will go through a code review process that ensures that several people look at it.

A significant part of the point of pair programming is that it ensures that at least two people have read the code, which is one more than a lot of code gets (open or closed).

Re:I see it more like a proof that (5, Insightful)

icebike (68054) | more than 3 years ago | (#34190968)

security doesn't come from obscurity

Exactly right.

The best security is the kind where everyone knows how it works, but even given the source code, you can't beat it - or at least not in any useful length of time.

That being said, the automated code inspection packages you can buy these days look only for the obvious noobie programmer mistakes.

SELinux, originally from NSA, solves many of the problems of running untrusted code on your box, but even that is not 100% secure, and the maintenance problems it introduces mean that it is seldom used in real life.

The problem is not how this agency (the NSA) cleans up their code.

The problem is that we don't know what backdoors exist in our hardware and our operating systems. Because so much code is embedded in silicon, and so few people actually look at that code, it's easy to imagine all sorts of pwnage living there.

A compromised Ethernet card (just saying, by way of example) would be both obscure and hard to detect, and would have access to just about everything going in and out of your machine.

Security does not come from obscurity, but insecurity often does.

Re:I see it more like a proof that (1)

Locke2005 (849178) | more than 3 years ago | (#34191024)

Why would a compromised Ethernet card be any more dangerous than a compromised Ethernet cable, which presumably their networks are designed to protect against? In other words, wouldn't all the data the Ethernet card sees already be 1024-bit encrypted?

Re:I see it more like a proof that (2, Informative)

icebike (68054) | more than 3 years ago | (#34191076)

Because the card has smarts, and the cable does not.

Because the card lives on your bus, and the cable does not.

But try not to belabor the point, as I said, it was just an example. Substitute any other device resident in your computer which you feel better demonstrates the point.

Re:I see it more like a proof that (2, Interesting)

Jah-Wren Ryel (80510) | more than 3 years ago | (#34191906)

Because the card lives on your bus, and the cable does not.

More specifically most devices on the bus can do DMA to host memory, that enables them to read and write any byte of memory, completely bypassing OS memory protection.

In fact, firewire ports are a favorite of the digital forensics guys for exactly this reason - they can come along, plug their dohickey into the firewire port of most any PC that has one and do a complete memory dump of the system without the OS or any other program even noticing.

Re:I see it more like a proof that (1)

blueg3 (192743) | more than 3 years ago | (#34193508)

Except that that technique is not widely used, since it's extremely prone to failure (usually resulting in a blue-screen or such). As a fragile technique that requires a specialist on hand when you encounter a live machine, it doesn't see a whole lot of field use.

Re:I see it more like a proof that (0)

Anonymous Coward | more than 3 years ago | (#34196602)

Unless your OS is using the IOMMU correctly, at which point the device can't actually read the entire memory space.

Re:I see it more like a proof that (2, Insightful)

jvkjvk (102057) | more than 3 years ago | (#34195466)

Security does not come from obscurity, but insecurity often does.

Security comes in many forms, and obscurity is actually quite a good form, as long as there are other layers.

The "best" security comes from defense in depth and obscurity can certainly be part of that, and in fact probably should be. I will go through a few different layers where security by obscurity actually works quite well.

Consider a random binary on Usenet. Even if I 'encrypt' the payload with ROT-13 I have achieved a decent amount of security simply through obscuring the target in a sea of ones and zeros.

Now, consider the challenge-response system. It used to be that some systems would tell you whether your username, password (or both) was bad. It turns out that this lack of obscurity allows attackers quicker access to systems, since they can hit upon usernames by letting the system tell them which were valid. Simply obscuring the error response fixes this.

And this brings up a good point about obscurity as a security practice. If you use it - don't tell anyone! You would have thought this was a "duh", but the previous example is a great one in that regards. Usernames are simply obscured data, but if the login system can be used as an oracle ... not so much.

Now, on a systems level, network security is often predicated on obscurity - that's why you don't find many companies publishing their internal network maps! If those maps were published, then attackers would have a much easier time to penetrate the organization. Security by obscurity.

Now, on a home level, if I am using port knocking (as one example) as one means of controlling access to ssh, then every attacker that does not know this will fail out of the box. Of course, it is better if I also have key exchange turned on, and even moreso if that and password enabled. But, even moreso if I simply run it on a non-standard port - which is security thorugh obscurity.

So, while I wouldn't rely strictly on security through obscurity (except in cases it makes sense), it is a valuable tool in a security toolbox, and generally can be a show stopper for an attacker if they aren't able to obtain the knowledge. But again, security comes from defense in depth, and one layer of that depth should be considered obscurity.

Regards.
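
To make the challenge-response point concrete, here is a minimal C sketch of a login check that can't be used as a username oracle (hypothetical example: user_exists, password_matches, and the literal strings are stand-ins, not a real API):

#include <stdio.h>
#include <stdbool.h>
#include <string.h>

/* Hypothetical stand-ins for a real credential store. */
static bool user_exists(const char *user)
{
    return 0 == strcmp(user, "alice");
}

static bool password_matches(const char *user, const char *pass)
{
    (void)user;
    return 0 == strcmp(pass, "hunter2");
}

/* Both failure modes produce one indistinguishable response, so the
 * system never confirms which usernames are valid. (A production
 * version would also equalize timing between the two paths.) */
const char *login(const char *user, const char *pass)
{
    bool ok = user_exists(user);
    ok = password_matches(user, pass) && ok;  /* always run both checks */
    return ok ? "welcome" : "invalid username or password";
}

int main(void)
{
    puts(login("alice", "wrong"));     /* -> invalid username or password */
    puts(login("mallory", "hunter2")); /* -> same message: no oracle */
    return 0;
}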

Re:Here's proof that... (1)

mrheckman (939480) | more than 3 years ago | (#34190896)

...it is definitely possible to write secure software if you simply follow sound, smart development methods and practices... and don't write half-assed, slipshod, thrown-together-in-a-hurry code.

Proof? I don't see any proof in the article that the NSA produces secure software, or even a claim that they do. Instead, the NSA Technical Director quoted in the article said "even within the NSA, the problems of application security remain maddeningly difficult to solve." That doesn't sound like they have solved the problem; it sounds like they, too, are grappling with a fundamental issue in software development.

Re:Here's proof that... (1)

Mr. Freeman (933986) | more than 3 years ago | (#34191752)

It doesn't constitute proof of anything. Are the NSA's applications actually bulletproof? They might distribute their coding practices, but I'm pretty sure they don't distribute their source code or their applications. Therefore, no evaluation of their security can be made, and there's no evidence to show anything about the quality of their practices.

I'm not saying they're wrong. In fact, evaluation of other, open software indicates that security does stem from good coding practices. I'm just saying that there's really no reason to point to the NSA as an example of quality.

Re:Here's proof that... (1)

phek (791955) | more than 3 years ago | (#34191968)

Actually you're wrong; they do distribute the source code to their applications whenever they can (their code is often just modifications to proprietary software, at which point they can't redistribute it). SELinux is a good example of this: it was started and originally released/maintained by the NSA.

There is absolutely no reason (other than copyright violations) for the NSA (or any other government agency) not to release more secure methods/code. Doing so provides our nation with a more secure infrastructure, making their job easier. Things such as applications to break security are a different subject, though.

I'm sure they have plenty of those applications which they don't want released to the public, so that people don't know how to protect against them.

frist psot (0)

Anonymous Coward | more than 3 years ago | (#34190732)

the nsa is pants

Re:frist psot (0)

Anonymous Coward | more than 3 years ago | (#34191460)

This post is now diamonds.

Re:frist psot (0)

Anonymous Coward | more than 3 years ago | (#34192650)

Sega cds are now in super nintendos.

Of course they say that (5, Insightful)

dkleinsc (563838) | more than 3 years ago | (#34190742)

If the NSA had something that really was Schneier-proof, they wouldn't tell the public. And understandably so, since part of their job is to ensure signals security for US agencies that deal in classified information.

Re:Of course they say that (4, Insightful)

hedwards (940851) | more than 3 years ago | (#34190946)

But it's almost certainly true. Just look at OpenBSD's record. They went for a full decade without any vulnerabilities in the base system before one was eventually found. And that's from a group of mostly volunteers. Just imagine what you could get from programmers that are both paid and required to use secure coding practices.

What's really embarrassing is that most of it has been known about for quite some time, but for one reason or another the organization funding the programming doesn't feel like paying for it to be done securely. It's a similar problem to programming style.

Re:Of course they say that (3, Funny)

lewiscr (3314) | more than 3 years ago | (#34190994)

Just imagine what you could get from programmers that are both paid and required to use secure coding practices.

Windows XP?

Re:Of course they say that (2, Informative)

LWATCDR (28044) | more than 3 years ago | (#34191156)

Security doesn't sell in the consumer market.
Mainframe and minicomputer OSes and applications tended to be very secure. VMS and IBM's OSes were and are very secure. PCs come from the microcomputer world. Security was never an issue with them. I mean, they were single-user systems and almost never networked. Even when you had networks, they tended to be LANs.
It is the mindset that security is an afterthought. Why should a picture-viewing program ever worry about an exploit?

On the PC side it just was never a "feature" worth putting any effort into until recently.

Re:Of course they say that (0)

Anonymous Coward | more than 3 years ago | (#34192684)

Security doesn't sell in the consumer market.

For 2 good reasons...

  1. Security is diametrically opposed to ease-of-use.
  2. General consumers don't understand what "security" truly means. They think "secure" means "system won't let determined user install spyware". No system can be "secure" like that so they have no faith in the notion and this amplifies Point #1.

General consumers will get more accustomed to proper computer usage as time goes on, but will never truly be hardened as secure users. Just won't happen. For this reason, viruses will always exist and antivirus companies will continue to stay in business.

Re:Of course they say that (1)

LWATCDR (28044) | more than 3 years ago | (#34192750)

I do agree with number 2.
Not so much on number 1.
Yes, if you run a program and then give it admin access to your system, that is not a software security issue. It is a user IQ issue.

Re:Of course they say that (1)

jonwil (467024) | more than 3 years ago | (#34192884)

In the consumer space (Windows specifically), security is often at odds with backwards compatibility (including compatibility with things at the other end of a network link).

I am sure there are hundreds of changes that could be made that would make Windows more secure, except that they would break backwards compatibility, so the changes can't be made.

Re:Of course they say that (1)

Zero__Kelvin (151819) | more than 3 years ago | (#34192938)

"PCs come from the microcomputer world. Security was never an issue with them. I mean they where single users systems and almost never networked. ... On the PC side it just was never a feature" worth putting any effort into until recently."

Unless when you say "recently" you mean since 1993 [slackware.com] then you are quite mistaken.

Re:Of course they say that (1)

LWATCDR (28044) | more than 3 years ago | (#34196136)

Please, there was Unix for the PC before Slackware. And no, security wasn't a feature that sold, or was even much of a worry, in '93.
Security as a feature that sold in the consumer space? Windows 95, 98, and ME all show clearly that it wasn't. Windows 2000 and XP were big leaps forward. Vista is ME 2.0 and will soon be forgotten. Seven is much better.
BTW, Unix in '93 also was not all that secure. There were no ACLs, telnet and FTP were commonly used, and SSH wasn't even released until 1995 - and then it took a while to catch on.
As I said, it wasn't really a feature worth putting any effort into until recently for the consumer space.
Linux is not strong even now in the consumer space outside of embedded applications.
So yes, I stand by every word.

Re:Of course they say that (3, Insightful)

drsmithy (35869) | more than 3 years ago | (#34192636)

But it's almost certainly true. Just look at OpenBSD's record. They went for a full decade without any vulnerabilities in the base system before one was eventually found.

Remote vulnerabilities. In the default install. Which isn't that hard to achieve when your default install doesn't really do much and hardly anyone uses your system.

Just imagine what you could get from programmers that are both paid and required to use secure coding practices.

Who didn't have to work towards any specific deadlines or goals? And had essentially nothing to lose if they didn't get there? I'd expect much the same.

When you have nothing in particular to achieve, all the time in the world to achieve it, and no real consequences if you don't, then you'd expect anything that was done would be done well. However, the real world doesn't work like that.

Re:Of course they say that (0)

Anonymous Coward | more than 3 years ago | (#34194954)

But it's almost certainly true. Just look at OpenBSD's record. They went for a full decade without any vulnerabilities in the base system before one was eventually found.

Ouch! How did THIS get modded Insightful?

The truth is that OpenBSD has had several vulnerabilities in pretty much every release: just check out the errata. OpenBSD 4.7 [openbsd.org], for example, had two security fixes applied to it; 4.6 and 4.5 had three each; 4.4 had four; 4.3 had eight; 4.2 had nine; 4.1 had ten; 4.0 had eleven; and so on. And that's not counting reliability fixes.

That said, these holes are either local, or limited in their impact; the two holes that were eventually found in OpenBSD were *remote root* holes. (On a side note, it did not take "a full decade" for one of these to be found: it was about 5 years.)

Now, none of this is intended to rag on OpenBSD, BTW; the developers are doing a great job, and their diligence in actually responding to vulnerabilities and promptly issuing fixes is laudable. If anything, it shows that even when you're very diligent and extremely focussed on security, vulnerabilities will STILL happen: not just the rare huge ones but also many run-of-the-mill smaller ones.

Re:Of course they say that (1)

TheRaven64 (641858) | more than 3 years ago | (#34196074)

It's worth noting that the OpenBSD folks regard potential DoS attacks as security holes, so things that cause a crash but no possibility of an attacker gaining control or accessing more than they should are counted as security holes. This makes their list of security fixes bigger than it would be with some other projects, which only regard this kind of problem as a reliability issue.

Re:Of course they say that (1)

MozeeToby (1163751) | more than 3 years ago | (#34190958)

It's just software; we're not talking about something that takes billions of dollars' worth of resources to produce. If the couple hundred software guys that the NSA employs can think of something, you can put good money on at least one of the hundreds of thousands of software guys who don't work for the NSA coming up with a similar idea. Now, if we were talking about some novel decryption scheme, sure - there aren't that many people working on that outside of intelligence circles. But we're talking about writing secure, consistent software, something that is of interest to every CS and CE professor in the world.

Re:Of course they say that (1, Interesting)

jd (1658) | more than 3 years ago | (#34191046)

It depends. The best place to hide something is in plain sight. And the best way to hide encrypted somethings is in a sea of equally encrypted somethings. If the NSA had some algorithm that they felt OK with others knowing and also using themselves, then any traffic of theirs using said algorithm would be indistinguishable from any other traffic. An attacker would need to decrypt everything in order to establish whether or not anything was being sent that was of interest. Even if there was a vulnerability in the encryption that reduced the search space to something theoretically manageable, having to break each and every single conversation on the Internet would push the search space back into the unmanageable region.

ObSidetrackingRant: This is why sites that use SSL should use SSL for everything - it adds noise which conceals the encrypted packets which would actually be of interest. Don't forget that the biggest weakness in secure systems is context. If you have enough context, you can bypass a lot of system security. A simple example would be the "secret question" systems that are popular. If you know enough about a person, the odds are high that you can guess what the answers are. Another example would be social engineering - if you have enough personal information, you could pretend to be that person to a system admin. Social engineering is really the sum total of all the new cracking/viral methods that are being used these days. Far from being new, it's merely better-automated and better-documented. Social engineering was standard back in the BBS days.

Re:Of course they say that (1)

phek (791955) | more than 3 years ago | (#34192006)

It may just be me, but as someone who has been a sysadmin and developer for high-traffic sites, I can say that making everything on a site https isn't practical at all. HTTPS uses a LOT more resources than HTTP; you would roughly need 3 times the number of servers to hide something that's already encrypted. A MUCH better solution would be to use only strong, non-anonymous ciphers for your encrypted pages.

Most... (2, Insightful)

mbone (558574) | more than 3 years ago | (#34190768)

It's that word most in "Most of what we do..." that may be important here. Most doesn't mean all. Also note he did not mention their cryptographic techniques, which is where I would expect them to be especially advanced.

Re:Most... (2, Informative)

hedwards (940851) | more than 3 years ago | (#34190982)

But cryptographic techniques aren't where most vulnerabilities are found. Most vulnerabilities are ones which could be avoided using secure programming practices.

In fact, the FBI failed to break into a set of hard disks encrypted with TrueCrypt and another program using 256-bit AES. Which pretty clearly indicates that as long as you choose an appropriate encryption algorithm, the vulnerability is almost always going to be in the implementation, in user error, or in access to the machine.

Re:Most... (1, Interesting)

Anonymous Coward | more than 3 years ago | (#34191892)

Unfortunately, "secure programming practices" often put the keys, including the master keys for replacing other keys, under NSA control. Take a good look at "Trusted Computing" and who would hold the keys. There was never a procedure enabled for keeping the keys entirely in private hands, only for keeping them in central repositories such as those owned by Microsoft. And never a procedure published for requiring a court order to obtain the keys: the entire thing was handwaved.

Looking back further, the "secure" Clipper Chip was discarded when it was discovered that people could generate their own private keys, without any central repository access. (http://en.wikipedia.org/wiki/Clipper_chip).

The NSA's mandate is not to provide security for US citizens. It is, in fact, to *break* security to monitor foreign communications. (Go read their original charter, available at http://www.austinlinks.com/Crypto/charter.html) One of their most effective techniques is to assure that commercial encryption worldwide is entirely accessible to them, and their history of ensuring this by blocking encryption they don't "p0wn" is very clear.

Re:Most... (1)

StikyPad (445176) | more than 3 years ago | (#34191486)

Also note he did not mention their cryptographic techniques, which is where I would expect them to be especially advanced.

From a design standpoint, it's cheaper and more effective to leverage solutions that can be widely vetted and tested than it is to work strictly in a closed environment implementing your own solution and hoping you thought of everything. I'd frankly be *very* surprised if the NSA had anything more than potential (if promising) avenues of exploration with regards to "next-gen" encryption, and certainly wouldn't expect that they'd be using untested solutions in the field.

I may or may not know what I'm talking about, but then, the person quoted in the article may or may not have been spreading misinformation. ;)

practical application (3, Insightful)

Tom (822) | more than 3 years ago | (#34190776)

Security, especially in software development, doesn't suffer from the "we don't know how to do it" problem. It suffers from the "we don't have time/budget/patience/interest in doing what we know we should be doing" issue.

Re:practical application (1)

faichai (166763) | more than 3 years ago | (#34190894)

Of course. Though budget buys time, which buys patience, and 9/11 pretty much secured interest. Oh look: http://www.upi.com/Top_News/US/2010/10/28/US-intelligence-budget-tops-80-billion/UPI-37231288307113/ [upi.com]

Re:practical application (1)

H0p313ss (811249) | more than 3 years ago | (#34190934)

Of course. Though budget buys time, which buys patience, and 9/11 pretty much secured interest.

Perhaps in some parts of government, particularly security-oriented agencies like the military, CIA, FBI and NSA. But I'd bet that in the majority of government and business, 9/11 had little to no impact on security considerations for IT projects. It has certainly not impacted my software projects, and they've been sold to a whole plethora of government agencies and Fortune 500 companies.

Re:practical application (1)

bhcompy (1877290) | more than 3 years ago | (#34191678)

Well, security also generally means a performance hit, so it's also a balance of performance and security. When you're wiretapping half the nation looking for those keywords that send up red flags, you gotta be lightning fast.

Re:practical application (1)

Tom (822) | more than 3 years ago | (#34195118)

Mostly nonsense. Unless you are doing some really insane crypto, working with embedded systems, or dealing with unusually high requirements (some realtime applications), the performance impact of security largely doesn't matter.

Doesn't make sense (1)

clarkkent09 (1104833) | more than 3 years ago | (#34190850)

Despite its reputation for secrecy and technical expertise, ... virtually all of the methods the NSA uses for development and information assurance are publicly known.
 
Secrecy doesn't have to extend to every single thing. I'm sure the NSA uses regular toilets too, not the top-secret kind. As for the reputation for technical expertise, how does using tried and tested development methods go against that?

Re:Doesn't make sense (4, Interesting)

hedwards (940851) | more than 3 years ago | (#34191000)

I suspect it's more along the lines of people expecting the NSA to have some significant secret for writing secure code. I'm willing to bet that the only thing they have that most other organizations don't is a substantial budget for auditing code for vulnerabilities. They probably also wait longer before deploying code, until it's been thoroughly vetted.

Re:Doesn't make sense (1)

blair1q (305137) | more than 3 years ago | (#34191818)

I'm willing to bet they have a code base that's been fully developed using secure methodologies.

Most people don't.

Re:Doesn't make sense (2, Interesting)

failedlogic (627314) | more than 3 years ago | (#34192182)

In a corporation, you not only have accounting, HR, managers, VPs and such looking over your budget, but you also have investors. If it costs you too much to produce something of "equal" quality to a competitor's, they will start asking questions. A problem with insecure code probably won't cost the company the entire business.

The NSA, I think, mostly has a black budget. There are only a few people who know how much, where, and to whom this money goes. So there's not really a budget you have to account for. A problem (leak) because of bad code or anything else could be damaging to national security. It would also likely become a political embarrassment, and one to the DoD and the NSA/CIA establishments. The people who approve the budgets will almost undoubtedly approve expenses to account for increases in security in any area, incl. programming.

Re:Doesn't make sense (2, Insightful)

jd (1658) | more than 3 years ago | (#34191114)

If we start with the fact that the NSA is responsible for the Rainbow Series, partly responsible for the Common Criteria, totally responsible for the NSA guidebook on securing Linux, and also totally responsible for the concepts in SELinux (remember, they talk about methods, not code), it follows that the NSA is implying that the processes used to develop this public information are rigorous and sound, and are the methods the NSA uses internally for projects they don't talk about. It actually doesn't say that what the NSA publishes is what they use - they only say that methods that are public are what they use. The source is implied.

And yet we live in the non-ideal real world (0)

Anonymous Coward | more than 3 years ago | (#34190854)

Ziring said that even within the NSA, the problems of application security remain maddeningly difficult to solve.

"Very few applications start from a clean slate. They're built on the existing code bases and they have to work with other existing apps and they have to be updated frequently."

Re:And yet we live in the non-ideal real world (2, Interesting)

jd (1658) | more than 3 years ago | (#34191202)

Not starting from a clean slate is immaterial. A new component can be 100% self-contained (and therefore verifiably clean within itself), communicating via some intermediary layer that handles legacy APIs, network connections, pipes, shared memory, et al.

The new component can therefore be as provably secure as you want. Security holes will then be contained (they must be in pre-existing code and cannot spread into new code).

This is not often done in the business world because they're stupid and prefer to burn huge amounts covering their backsides when the inevitable break-ins occur, rather than spend the relatively small extra needed to properly secure systems in the first place. (It's stupid because such a method can never be cost-efficient in the long run and only looks very marginally better on the books in the short term.)

Re:And yet we live in the non-ideal real world (1)

pavon (30274) | more than 3 years ago | (#34191474)

Except that many, if not most, security holes come from the interactions between components, and are not contained within any single component. This is exacerbated by the fact that many legacy systems don't have well-defined interfaces, or their behavior does not match the documentation. When you run up against a hole caused by crappy documentation, you can pound the table all you want about how the bug is in the legacy system, but the fact is that the introduction of your code made the system as a whole less secure. In the end that is what matters - the entire system, not just your part - and it is hard to build a secure system from buggy parts.

Social engineering always wins (4, Insightful)

boristdog (133725) | more than 3 years ago | (#34191040)

The Soviets almost never actually cracked codes. They just social-engineered (blackmail, sex, gifts, schmoozing, etc.) to get all the information they wanted.

It's how Mitnick did most of his work as well.

Re:Social engineering always wins (2, Insightful)

blair1q (305137) | more than 3 years ago | (#34191804)

But that's expensive, slow, and labor-intensive.

Trojan bots are cheap, easy to distribute, and hard to double against you.

Security is about preventing unintended outcomes (3, Insightful)

Anonymous Coward | more than 3 years ago | (#34191150)

Writing bulletproof code isn't really all that hard, but it does take discipline. Discipline to use only those constructs which have been verified with both the compiler and linker.

Some simple things that coders can do:
- Avoid the use of pointers.
- Initialize all variables to known values.
- Perform comparisons with the LHS using a static variable, so you don't accidentally get an assignment instead of a comparison.
- When you are done with a value, reset it to a "known" value. Zero is usually good.
- Keep functions less than 1 page long. If you can't see the entire function on a single editor page, it is too long.

Simple.

BTW, I wrote code for real-time space vehicle flight control systems. When I look at OSS and see variables not set to initial values, I cringe. Sure, it is probably ok, but there isn't any cost to initializing the variables. This is a compile-time decision. Without knowing it, many programmers are counting on memory being zero'ed as the program gets loaded. Not all OSes do this, so if you are writing cross platform code, don't trust it will happen. Do it yourself.

Oh, and if you want secure programs, loosely typed languages are scary.
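
A minimal C sketch applying several of these rules (hypothetical illustration; process_secret and BUF_LEN are made-up names):

#include <string.h>

#define BUF_LEN 64

int process_secret(const char *input)
{
    char buf[BUF_LEN] = {0};  /* initialize to a known value */
    int status = -1;          /* assume failure until the work succeeds */

    if (NULL == input)        /* constant on the LHS: mistyping this as
                                 "NULL = input" refuses to compile */
        return status;

    strncpy(buf, input, BUF_LEN - 1);  /* bounded copy; buf stays terminated */
    /* ... use buf ... */
    status = 0;

    memset(buf, 0, sizeof buf);  /* reset to a known value when done; a real
                                    system would use explicit_bzero/memset_s
                                    so the store can't be optimized away */
    return status;
}

int main(void)
{
    return process_secret("hello");  /* returns 0 on success */
}

The whole function also fits on one editor page, per the last rule.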

Re:Security is about preventing unintended outcome (3, Interesting)

HomelessInLaJolla (1026842) | more than 3 years ago | (#34191284)

Initialize all variables to known values

And remember to reset them to known values as soon as they are no longer necessary. Not only is it good practice, whether or not the compiler has a job, but it encourages the programmer to keep his variables in mind.

Re:Security is about preventing unintended outcome (0)

Anonymous Coward | more than 3 years ago | (#34193514)

Introductory programming advice is now informative on Slashdot? This place really has declined...

Re:Security is about preventing unintended outcome (0)

Anonymous Coward | more than 3 years ago | (#34194238)

That only applies when you're using an inferior language. Variables should initialize to zero by themselves and go out of scope when I'm done with them.

Perform comparisons with the LHS using a static variable, so you don't accidentally get an assignment instead of a comparison

Static variable? You probably mean put a constant or expression on the left of the ==. That's awkward to read. How hard can it be to distinguish = from == from ===? I'd say if you have trouble keeping those apart, you should switch to COBOL.

Re:Security is about preventing unintended outcome (3, Informative)

Jahava (946858) | more than 3 years ago | (#34195402)

Writing bulletproof code isn't really all that hard, but it does take discipline. Discipline to use only those constructs which have been verified with both the compiler and linker.

Some simple things that coders can do: - Avoid the use of pointers.

Pointers aren't themselves bad; they just add some layers of complication to the otherwise stack-oriented game. The only reason the stack is nicer than pointers is that stack variables are implicitly managed for you.

Rather than avoid pointers, what you need is good code structure. Design functions that either manage the lifecycle of a pointer or are explicitly clear about how and what the pointer is going to be used as. Use const aggressively, and avoid typecasting as much as possible. Good pointer naming techniques and management functions also dissipate the burden. Pointers are too useful to avoid religiously ... rather, build pointer security and management techniques into your coding style from the ground up. Choose descriptive names and try to constrain each pointer to its specific type (this lets the compiler help you keep your pointers straight).

Initialize all variables to known values.

Meh, I'm divided on this one. It's one thing to explicitly initialize global variables to either zero (which costs nothing, since they just end up in BSS sections) or non-zero (which puts them statically in the data segment). Stack variables, on the other hand, only really need to be initialized before they're used the first time. Pre-initializing them could lead to wasted instructions initializing them multiple times or cause them to be initialized in all code paths when they're only used in a few. My general rule of thumb is to be smart about it and, once again, naming conventions.

Perform comparisons with the LHS using a static variable, so you don't accidentally get an assignment instead of a comparison

Great tip; it's weird at first writing "if( NULL != p )", and you get a few funny stares, but after seeing enough "if( i = 10 )"s lying within seemingly-functional code, it's an easy selling point to make.

- When you are done with a value, reset it to a "known" value. Zero is usually good.

Definitely do this with pointers, descriptors, and other handle types. It also makes cleanup and pointer management easier. Less important to do with things like iterators and intermediate variables.

- Keep functions less than 1 page long. If you can't see the entire function on a single editor page, it is too long.

It's a good rule of thumb. I would like to add: any time you can't do this, make absolutely certain that you're not doing it without a good reason.

Good tips, though. One thing I'd like to add: -Wall -Wextra -Werror (or your language's equivalent). If your code can't compile without a single warning, then you need to re-write your code and either manually disarm situations (e.g., override the compiler's common-sense with an assurance that you know what you're doing) or fix the warnings, which are actually bugs and errors. It's always fun to take someone's "bulletproof" code and turn on these flags and watch the crap spill out. Warnings are amazing, and they are absolutely your friend when it comes to writing bug-free and secure code.
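
For instance, a fragment like this compiles silently with a bare cc, but dies immediately under those flags (illustrative example; try it with gcc -Wall -Wextra -Werror):

#include <stdio.h>

int main(void)
{
    int unused;         /* -Wunused-variable (enabled by -Wall) */
    int i;
    if (i = 10)         /* -Wparentheses: assignment used as a condition */
        printf("%d\n", i);
    return 0;
}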

Re:Security is about preventing unintended outcome (2, Informative)

TheRaven64 (641858) | more than 3 years ago | (#34196234)

Stack variables, on the other hand, only really need to be initialized before they're used the first time. Pre-initializing them could lead to wasted instructions initializing them multiple times or cause them to be initialized in all code paths when they're only used in a few.

Unless your compiler really sucks, it will perform some trivial dataflow analysis and not generate code for the initialisation if the value is never used. Even really simple C compilers do this. If the value is used uninitialised on any code path, then the initialisation will be kept (although it may be moved to those code paths), and you don't want the compiler to remove it.

From the flags you recommend, I'm guessing that you use gcc, which not only does this analysis but will even tell you if the value is used uninitialised.

Re:Security is about preventing unintended outcome (1)

Kjella (173770) | more than 3 years ago | (#34196080)

Some simple things that languages can do:

- Have all variables initialize to known values. I mostly program in C++/Qt, and QString, QByteArray, etc. don't need initialization. All numbers should initialize to 0, all pointers to NULL.
- Don't make the difference between assignment and comparison be a simple typo. If I were to design a language, "=" would not be a valid operator: ":=" for assignment, "==" for comparison. (You could keep all the "+=" etc., but not plain "=".)
- Smarter scoping hints, like letting you call a function and *pass* the variable, which ends its scope.

Keep functions less than 1 page long. If you can't see the entire function on a single editor page, it is too long.

In my experience that is not practical and leads to too many artificial functions. But you should try to reduce the complexity of how many different variables get involved. It's easy to read a three-page function if things are scoped properly, so that only the important variables stay in scope. E.g.:

void longFunction()
{
    int foo = 0;
    {
        // 10 lines of code to calculate foo, some various temp variables etc.
    }
    // Here only foo is left in scope
}

Re:Security is about preventing unintended outcome (2, Interesting)

TheRaven64 (641858) | more than 3 years ago | (#34196174)

Without knowing it, many programmers are counting on memory being zero'ed as the program gets loaded

Any compliant C compiler will initialise all statics to 0 (stack values are different - they are not automatically initialised). From the C99 spec, 5.1.2:

All objects with static storage duration shall be initialized (set to their initial values) before program startup.

From 6.7.8.10:

If an object that has static storage duration is not initialized explicitly, then:

  • if it has pointer type, it is initialized to a null pointer;
  • if it has arithmetic type, it is initialized to (positive or unsigned) zero;
  • if it is an aggregate, every member is initialized (recursively) according to these rules;
  • if it is a union, the first named member is initialized (recursively) according to these rules.

You can guarantee that any standards-compliant C implementation will do this. You can't guarantee anything about an implementation that doesn't comply with the standard - it may deviate from it in other ways.
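
A tiny sketch of that guarantee in action:

#include <stdio.h>

static int counter;   /* arithmetic type: initialized to 0 */
static char *name;    /* pointer type: initialized to NULL */
static int table[4];  /* aggregate: every member initialized to 0 */

int main(void)
{
    /* Prints "0 1 0" on any standards-compliant implementation,
     * with no explicit initializers anywhere above. */
    printf("%d %d %d\n", counter, name == NULL, table[3]);
    return 0;
}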

Formal development methods (1, Informative)

Anonymous Coward | more than 3 years ago | (#34191280)

One cornerstone of secure software development is the application of formal methods. The NSA Tokeneer project has been made completely open-source, demonstrating the feasibility of applying formal methods to secure development problems.

I knew it! (1, Interesting)

Anonymous Coward | more than 3 years ago | (#34191380)

They're using Agile practices! They just developed them before anyone else, about twenty years ago!

Incidentally, this also explains why they haven't done any groundbreaking work in twenty years... ~~~~

It doesn't matter (1)

metrix007 (200091) | more than 3 years ago | (#34191918)

This won't do anything to convince all the people who believe that the NSA can zoom in and enhance bad-quality photos 10,000 times. Despite it not being possible, the government probably has secret technology. Sigh.

No surprise (1)

Teunis (678244) | more than 3 years ago | (#34193844)

For anyone who's read security posts on this site - all too often NSA folks pop up and respond :)

(and are frequently very helpful)

d('w')b (0)

Anonymous Coward | more than 3 years ago | (#34194590)

if you believe this tripe, you're clearly a fool.

the article is just a fluff piece for the so-called war on cyber-terrorism (which is in itself just an excuse to exert greater control over the interwebs) -- "hey guys all our secrets are in books, please stop looking to see if we have any more! 'cause we really don't! honest!"
