
IEEE Guides Software Architects Toward Secure Design

Soulskill posted about 3 months ago | from the an-ounce-of-prevention dept.

Security 51

msm1267 writes: The IEEE's Center for Secure Design debuted its first report this week, a guide for software architects called "Avoiding the Top 10 Software Security Design Flaws." Targeting architects rather than developers was a conscious choice the group made in order to steer the conversation about software security away from exclusively finding bugs and toward the design-level failures that lead to exploitable vulnerabilities. The document spells out the ten common design flaws in a straightforward manner, each with a lengthy explanation of the inherent weaknesses in that area and of how software designers and architects should take these potential pitfalls into consideration.


Happy Friday from The Golden Girls! (-1)

Anonymous Coward | about 3 months ago | (#47785825)

Thank you for being a friend
Traveled down the road and back again
Your heart is true, you're a pal and a cosmonaut.

And if you threw a party
Invited everyone you knew
You would see the biggest gift would be from me
And the card attached would say, thank you for being a friend.

Fire the Architects (-1, Flamebait)

under_score (65824) | about 3 months ago | (#47785903)

Two articles that I wrote about this:

The Software Construction Analogy is Broken [kuro5hin.org]

and

Technical Push-Back [agileadvice.com]

I don't have a lot of patience with the profession since it's built on a fatally flawed analogy and all software architects ever do is waste and overhead from a lean perspective.

Re:Fire the Architects (1)

plover (150551) | about 3 months ago | (#47786103)

There's a ton (or a megabyte) wrong with the hardware/software construction analogy, but organizations like the IEEE keep pushing on it because that's the way people look at "engineering".

The problem is the analogy makes everyone who doesn't understand software think there has to be some "big design up front" before you write software. Of course, when the end product is as infinitely malleable as software, that's simply not true. The human interface needs a design in order to mesh with the humans in an elegant and consistent fashion, but the code? No. The only purpose of code design is to make the code readable and maintainable, and those are attributes you achieve through test driven development and continual refactoring.

I'm not saying that ideas like object orientation, design patterns, design principles, etc., are unimportant, nor am I saying that an overall application structure like Model-View-Controller or Extract-Transform-Load is wrong. But the continued effort wasted trying to make Big Design Up Front work leads to unimaginably expensive, wasteful processes that only work for a very limited, very rigid set of products, and of those most fail anyway. Worse is when non-developers fail to realize that the code itself is the language of design.

Back to the construction analogy: people think that an engineer produces a blueprint, then 100 people grab hammers and shovels and build the building. They don't all have to be skilled laborers, either; some are just guys with shovels and hammers. Want it to go up faster? Hire 200 people. But in software development, anything automatable has already been automated. When a software developer needs to do "construction", he or she types "make". Want it to go faster? Buy a bigger build server.

The engineering the IEEE is trying to achieve is accomplished by test-first development, continual automated testing, and peer code reviews. It is not achieved by producing thousands of documents, months of procedures, and boards of review.

Re:Fire the Architects (0)

under_score (65824) | about 3 months ago | (#47786379)

Thanks for the comments. I really appreciate your final comment! I'm a big proponent of good engineering practices over bureaucratic engineering processes!

API consistency; negative tests (3, Insightful)

tepples (727027) | about 3 months ago | (#47786881)

Of course, when the end product is as infinitely malleable as software

Software isn't "infinitely malleable" when it exposes interfaces to anything else. This could be APIs to other software or user interfaces. You have to build on the old interface compatibly, and when you do make a clean break, you need to keep supporting the old interface until others have had a reasonable time to migrate.

The human interface needs a design in order to mesh with the humans in an elegant and consistent fashion, but the code? No. The only purpose of code design is to make the code readable and maintainable, and those are attributes you achieve through test driven development and continual refactoring.

APIs need at least as much consistency as UIs. In fact, I'd argue that APIs need even more consistency because human users are slightly better at adapting to a UI through reflection, that is, figuring out a UI by inspection.

The engineering the IEEE is trying to achieve is accomplished by test-first development

Then take this guide as something to consider when determining whether you have enough negative tests [stackoverflow.com], that is, tests that pass only when the code under test correctly rejects or fails on bad input.
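
As a concrete illustration of what a negative test can look like, here is a minimal sketch, assuming Python with pytest; parse_port and its validation rules are invented for the example, not taken from the guide:

    import pytest

    def parse_port(value: str) -> int:
        """Parse a TCP port number, rejecting anything that is not in 1-65535."""
        port = int(value)              # raises ValueError on non-numeric input
        if not 1 <= port <= 65535:
            raise ValueError(f"port out of range: {port}")
        return port

    def test_rejects_non_numeric_input():
        # Negative test: it passes only if the bad input is rejected.
        with pytest.raises(ValueError):
            parse_port("80; DROP TABLE users")

    def test_rejects_out_of_range_port():
        with pytest.raises(ValueError):
            parse_port("70000")

Each of the design flaws in the IEEE list suggests at least one family of such tests: inputs that must be refused, states that must be unreachable.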

continual automated testing

If you're using a CAPTCHA as part of a process to authenticate a user, how do you perform automated testing on that?

Re:API consistency; negative tests (1)

plover (150551) | about 3 months ago | (#47787653)

Software is malleable in that whatever is on the inside can be safely changed through refactoring to meet your new design goals. And yes, you have to adhere to strong design principles: the open/closed principle helps ensure that you can safely migrate to a new API while still supporting your old clients; the interface segregation principle helps ensure that your clients are always getting the right service without confusion; and you have to commit to serious code coverage metrics for your automated tests. That means you don't even write an exception handler unless you have a unit test that proves it properly catches the exception.
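
As a rough sketch of that last point, assuming Python with pytest (save_record, StorageError, and the failing backend are invented for the illustration), a unit test that proves the exception handler actually catches and reports the failure:

    import logging

    class StorageError(Exception):
        pass

    def save_record(record, backend):
        """Persist a record; on storage failure, log it and report False instead of crashing."""
        try:
            backend.write(record)
            return True
        except StorageError:
            logging.exception("failed to persist record")
            return False

    def test_storage_failure_is_caught_and_reported(caplog):
        class FailingBackend:
            def write(self, record):
                raise StorageError("disk full")

        # The handler only counts as covered if a test forces the exception path.
        assert save_record({"id": 1}, FailingBackend()) is False
        assert "failed to persist record" in caplog.text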

And developers absolutely cannot work in a vacuum, or be incompetent - there's no room for them. So when they're writing the negative tests, they are expected to be smart enough to understand the permutations and the boundaries in the requirements they're implementing. But high complexity means lots of paths through the code, which means lots of tests, and this need for practically, realistically achievable testability gives the developer an incentive to keep code complexity down. That is a feat he or she continually accomplishes through the refactoring step of TDD. That way, instead of writing fifty tests, perhaps they can split the work into five modules and write ten tests. Not coincidentally, this activity continues to improve the modularity, reusability, and maintainability of the module. So it improves the code's design after it's written (an activity that still was not needed up front). As a bonus, you get to execute the automated tests again and again, so future maintainers benefit by knowing they haven't broken your module. TDD is actually a design methodology, not a test strategy.

And I know that you're using CAPTCHAs as a clever example (how can you prove that you wrote a transformation so complex that you can't Turing-test it?), but the real answer is that it depends on what code you're testing. Are you testing the code that processes the outcome into a true or false response? Are you testing the user interface that lets the user type letters into a text box? Those tests aren't especially hard to automate. But when you're talking about the specifics of "is this CAPTCHA producing human-interpretable output?", then you're talking about usability testing, which is expensive, manual, and slow. It's a task you'd perform after changing the CAPTCHA generation routines, but not one you'd be able to automate. So I'd test it manually, only after changing the generation routines, and I wouldn't alter those routines without scheduling more user testing.

(If I ever had to write a CAPTCHA for real, I'd probably try to parameterize it and allow the admins to tweak the image generation without my having to further change and test the code. So if an admin managed to tweak it into black-on-black text, and preventing low-contrast color schemes wasn't identified in the original effort, the admin could still untweak it. And yes, that should generate a bug report, even though it would be recoverable.)

But in terms of difficult-to-test code, teams that do this kind of development well will often have different suites of tests for different situations. Etsy does this really well, splitting tests into various categories: slow, flaky, network, trunk, sleep, database, etc. They always run all trunk tests on every build, but a developer runs the network tests only when working on something that exercises actual network communication. See http://codeascraft.com/2011/04... [codeascraft.com] for their really inspiring blog.
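
For what it's worth, a minimal sketch of that kind of test categorization, assuming pytest markers (the category names just mirror the Etsy post; the test bodies are placeholders):

    import pytest

    # The markers would be registered in pytest.ini so that typos are caught
    # (markers = trunk, network, slow, database, ...).

    @pytest.mark.trunk
    def test_order_total_is_sum_of_line_items():
        assert sum([10, 15]) == 25

    @pytest.mark.network
    def test_payment_gateway_handshake():
        ...  # run with "pytest -m network" only when touching real network code

    @pytest.mark.slow
    @pytest.mark.database
    def test_full_catalog_reindex():
        ...

The trunk suite ("pytest -m trunk") runs on every build; the slower suites run only when the change touches what they exercise.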

Re:API consistency; negative tests (1)

Lodragandraoidh (639696) | about 3 months ago | (#47812293)

I don't disagree with your overall premise: bureaucratic 'big design up front' methods don't work except for an exceedingly small subset of problems in the real world.

However, you largely ignore a key point that I think the IEEE is (belatedly) trying to address: up to now, our design focus has been on meeting the functional criteria first and security last (if there's time to deal with it at all - in my experience it's the first thing cut when time is at a premium and pressure mounts to ship).

Security has to be seen as a core function of every application that communicates across a network, and of many that don't, because of their incestuous relationship with other systems on the same machine that do. I also think that if you start your overall design with security in mind, it will influence many aspects of the design - from API construction, to modularity, to the design of the tools and operating systems the resulting applications live on or in.

To do that well without any framework or controls would require every application programmer to be a top-notch systems developer. In my experience, the vast majority of professionals in the application development space will never rise to that level of expertise. But code must be written and applications deployed, because the appetite for more and more automation does not abate. There are not enough programmers competent in systems development to do the job without any help. So, what do we do?

I have a pretty good idea about what I think should happen - but I'm curious, what you would do given that reality (assuming you can't guarantee deep competency)?

Re:API consistency; negative tests (0)

Anonymous Coward | about 3 months ago | (#47788917)

Software isn't "infinitely malleable" when it exposes interfaces to anything else. This could be APIs to other software or user interfaces. You have to build on the old interface compatibly, and when you do make a clean break, you need to keep supporting the old interface until others have had a reasonable time to migrate.

You're doing it wrong. The minute you have to have backward compatibility, you've broken your design. You provide versioned interfaces and only keep two active at a time. If consumers can't upgrade to the current interface by the time you're ready to release the next interface: fuck 'em.
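
A hedged sketch of the "two versions active at a time" policy, assuming plain Python with invented handler names rather than any particular web framework:

    from typing import Callable, Dict

    def get_user_v2(user_id: int) -> dict:
        return {"id": user_id, "name": "Example User"}

    def get_user_v3(user_id: int) -> dict:
        # v3 changes the shape of the response; v2 stays alive until v4 ships.
        return {"id": user_id, "given_name": "Example", "family_name": "User"}

    SUPPORTED: Dict[str, Callable[[int], dict]] = {
        "v2": get_user_v2,   # previous version, kept for one release cycle
        "v3": get_user_v3,   # current version
    }

    def handle_get_user(version: str, user_id: int) -> dict:
        try:
            handler = SUPPORTED[version]
        except KeyError:
            # v1 and older: consumers were told to upgrade; there is no fallback.
            raise ValueError(f"unsupported API version: {version}")
        return handler(user_id)

When v4 ships, v2 is simply deleted from the table.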

Re:Fire the Architects (0)

Anonymous Coward | about 3 months ago | (#47793167)

The human interface needs a design in order to mesh with the humans in an elegant and consistent fashion,

Actually, you could argue that UI development should use a rapid prototyping cycle as well, provided the users are actually involved in the process; that would be as valuable as having a product pass automated, system, and acceptance testing.

Re:Fire the Architects (5, Informative)

ranton (36917) | about 3 months ago | (#47786139)

I don't have a lot of patience with the profession since it's built on a fatally flawed analogy and all software architects ever do is waste and overhead from a lean perspective.

Your article on the flaws in the software-architect analogy is a good read, but the role of software architect I am used to seems far different from the one you are referring to. When I think of a software or systems architect, I am not thinking of someone who writes or usually even designs software. They are more often determining how different software systems and business processes interact with each other. In most situations, each of these software systems is a black box to the architect. The only code the architect is usually responsible for is any custom middleware needed to help the systems interface with each other.

In this context, many of the critiques you mentioned in your 2003 article are not as valid. Systems architectures are not easily duplicated for different companies, just like a building cannot be easily duplicated. And when working with software products that are often black boxes, the software architect will likely be just as constrained as a construction architect (although usually not by as many regulations and codes).

Obviously there are strong differences between the fields, but there are strong differences between mechanical / electrical / chemical engineers as well. And just as the word engineer has evolved from someone who builds medieval machines of war, I personally see no problem with the word architect evolving from just someone who designs and supervises the construction of buildings.

Re:Fire the Architects (1)

under_score (65824) | about 3 months ago | (#47786313)

True enough: the article on Kuro5hin is very old... I've often thought of writing an update to take into account some of the things you mention. (Actually, it's hard to believe I wrote that 11 years ago!)

Still, I feel that most software architects really inflate the importance (and time) of their jobs. It's true that there is some amount of legitimate research to be done in exploring the broad outlines of your solution. However, most of the time those solutions are dreamed up by the architect in a few hours, and then they spend months doing confirmation-biased research to flesh out the justifications for the original idea. That's the waste. As plover said [slashdot.org], all that knowledge about design patterns, etc. is still applicable. Just don't apply it in a big up-front fashion.

Re:Fire the Architects (1)

hsthompson69 (1674722) | about 3 months ago | (#47786405)

My biggest problem with architects - they don't get to enjoy the fruits of their labors by doing production support :)

If every architect were required to be 24/7 on-call support for the first 6 months after their design was released to production, they'd be a shitload more careful about what kinds of things they dream up. Sure, that black-box back end with all that data looks pretty, but that decision to query it 1000 times per API call means you're going to blow through timeouts up and down the stack, back up threads, and bring the whole mess down. Thanks, Mr. Architect!

Re:Fire the Architects (1)

under_score (65824) | about 3 months ago | (#47786439)

Yup!!! I think everyone building software should spend time supporting their software! This is part of what the software craftsmanship movement is about.

Re:Fire the Architects (1)

scamper_22 (1073470) | about 3 months ago | (#47786253)

Shameless plug:

http://www.osnews.com/story/22... [osnews.com]

Biggest lesson I learned... Do not claim the compiler is a perfect machine :P

Re:Fire the Architects (1)

under_score (65824) | about 3 months ago | (#47786735)

Great article! Thanks!

Re:Fire the Architects (4, Insightful)

kbrannen (581293) | about 3 months ago | (#47786687)

I don't have a lot of patience with the profession since it's built on a fatally flawed analogy and all software architects ever do is waste and overhead from a lean perspective.

It *sounds* like you've never worked on a large project, then. Fine, fire the architects, but you're still going to need someone to do their job, whether you call them the team lead or something else. There needs to be a *technical* person at the top who says "we're marching that way" and "here's some stuff we need to keep in mind and do." Some technical person who can push back on the product owner when needed and explain the gory technical details when required. That's not the project manager, because they're not technical enough; at least that's been true for all the projects I've ever worked on.

You need someone who can look ahead at the items coming, notice that there are some common things needed, and see that if you spend some time up front to build them (a framework, a subsystem, whatever), it will be cheaper and faster than letting small bits of code be written and then refactored a hundred times as the sprints slowly come in.

I'm sorry you don't like the construction analogy, but it's very true that the cheapest time to change a building is when you're still at the blueprint stage before it's built ... the cheapest time to change software is during the planning stage before it's written.

Sure, most product owners don't really know where they want to end up, but some things are well known, and when you have that knowledge you should use it as soon as possible, no matter what you want to call the roles or the results. Protocols, APIs, security, data models and databases, etc. are all things that should be planned as much as possible, not organically grown and refactored. Who does that planning?

My day job right now is dealing with code that had very little upfront planning, very Agile'ish, and the system is a nightmare at times. I'll admit that the source of the problem may be that the devs before me never came back and refactored and cleaned up, but a little more planning would have made much of that unnecessary. That's what an architect brings to the table: some overall planning and technical sense.

Re:Fire the Architects (0)

under_score (65824) | about 3 months ago | (#47786727)

I was the senior architect reporting to the CIO of Charles Schwab. I was responsible for huge systems at an architectural level. Then, with the permission of the CIO, we launched a two-year enterprise re-write covering hundreds of applications and dozens of technology platforms, from old green-screen COBOL systems to modern Java and .NET systems... and we did it with no up-front architecture. Pure Agile, with all the process and engineering practices to do it properly. Huge success because there was never a moment when all the applications were fully functional and there was never a formal switch-over. We re-wrote everything in place.

Of course, I'm not saying that there was no research, that there was no good design thinking, or that we never thought about the future. But there was certainly no architect, and there was no technical lead who had final authority on the overall design or any particular detail.

I've seen this approach work with $20M projects and with $200K projects. I've seen it work and result in systems with zero defect rates extended over years. I've seen it work on systems with thousands of lines of code and systems with millions of lines of code. It's possible, it's just that most people have been so brainwashed by the construction analogy and "scientific management" thinking that it's hard to imagine that it's possible.

Re:Fire the Architects (2, Interesting)

Anonymous Coward | about 3 months ago | (#47787171)

Ha! My wife works with systems you likely developed, or at least had to have gone through the CIO's office. You clearly have never had to use any of the systems you created. The CMS, in particular, is one of the worst pieces of corporate software I've ever seen. A big part of her job is pushing files _one_at_a_time_ to the production systems because there is no way to do bulk updates. Rolling back is just as painful if a problem is discovered during a rollout (other groups submit the content and, in theory, have tested it ahead of time). There have been high-profile outages of the main web site due to the way the CMS was "architected". She gets paid a ton of money to do something that should be done in software.

Maybe you worked on the trading platform or other systems, but if the internal systems used for content management are any indication, I'd say you did a terrible job and maybe could have benefited from an architect.

My biggest issue with people who don't like architects is this: they usually have never really had to deal with the consequences of their actions and just assume they did a great job. Of course, most architects have the same problem. Large corporations are excellent at breeding this mentality (I know, because I've had to clean up shit from people who reported to the CIO and completely f'd up agile). You'll note that the problem I'm really highlighting here is that in big corporations, software is usually shit and people are applauded for it anyway, regardless of whether they used agile, waterfall, or nothing at all. Everyone thinks they did a great job because they got paid and promoted and report to important people. How could they be doing anything wrong??? :P

Re:Fire the Architects (0)

plover (150551) | about 3 months ago | (#47787717)

Maybe the requirement to upload bulk updates was a lower priority for that development team than getting other features implemented, and it's still on their stack. Or maybe they ran out of budget before getting to implement that feature. Maybe the stakeholder who was assigned to work with that development team failed to understand his or her own user base - the stakeholder's job is to provide the business perspective, and maybe he thought a pretty color scheme was more important than bulk uploads.

People can still make poor decisions in any framework, which does not necessarily invalidate that framework. The good thing about an Agile approach is that as long as the team is there, the software can still be easily changed.

And if she hasn't already, your wife has the responsibility to file a bug report or at least report her concerns to the stakeholder - the team may not even know of this need for bulk updating, or the financial impact of the one-at-a-time process. It sounds like it's fairly easy to quantify the cost of the inefficiency, which should help prioritize it accordingly.

Re:Fire the Architects (0)

Anonymous Coward | about 3 months ago | (#47788021)

Yes, the agile myth is strong - you can always claim the team was not agile enough, and the problem is solved.
I've seen Sutherland's presentation at a Google conference a few times now, and I must say I understand how he can claim productivity improvements of a few hundred percent - that does not work for most of the others.

Re:Fire the Architects (3, Insightful)

kwbauer (1677400) | about 3 months ago | (#47787697)

I love your claim that you rewrote Charles Schwab from the ground up with no architectural plan in place yet state that you were the chief architect. Your up-front architecture was the old systems you were replacing. You had laid out before you everything that had to be accomplished, what had to talk with what and how as you went through the process of replacing and retiring systems.

Just because you don't want to recognize that as up-front architecture doesn't mean it wasn't there and you didn't do it.

Of course, taken literally, your statement also admits that the whole thing never actually worked: "there was never a moment when all the applications were fully functional." I'll choose to read that combined with the sentence that follows as you did not do the whole rewrite before switching to the new system. That is more evidence that you were using the existing system as an architectural guide to how the system communicated.

Re:Fire the Architects (1)

under_score (65824) | about 3 months ago | (#47789037)

Oops. Meant to say "there was never a moment when all the applications weren't fully functional."

It's true that the old system(s) were a sort of guide, but it really was a complete replacement/re-architecture. Not only that, but there was no time in the project when we had a document that said "this is the current architecture". We had to do a lot of exploring along the way.

My job title prior to the project was architect but I told the CIO that it was unnecessary and so at the start of the project I was no longer the architect. We didn't have one. That said, there was a big team of us and we had lots of ongoing discussion about architecture - as we were building out the new systems. No doubt I influenced those discussions somewhat, but I certainly was no longer the authority.

Re:Fire the Architects (3, Insightful)

presidenteloco (659168) | about 3 months ago | (#47786705)

I suspect that most programmers who don't see the need for software architecture work within the confines of already heavily architected frameworks, platforms, and network stacks.

Thus their comments are akin to saying "I don't think we need an architect to help us rearrange the furniture and paint on the walls".

Re:Fire the Architects (2)

Chris F Carroll (2937391) | about 3 months ago | (#47787571)

all software architects ever do is waste and overhead from a lean perspective.

I have worked with software architects who might fit your description, but for a big system to succeed, someone competent still has to do the architecture. Kruchten, for instance, notes an example [wordpress.com] of a big agile project that fell over because of its lack of architecture. Coplien's Lean Architecture: for Agile Software Development [wiley.com] is nearer the mark. He is, after all, an expert programmer as well as a software architect.

Re:Fire the Architects (1)

under_score (65824) | about 3 months ago | (#47789057)

Lack of architecture is not the same as lack of an architect. Indeed, no architecture in a system == chaos. But how you get good architecture, unfortunately, is rarely from architects.

Re:Fire the Architects (1)

presidenteloco (659168) | about 3 months ago | (#47789531)

I don't know about you, but I'd say that someone who is creating architecture is, oh, I don't know, an architect.
Who cares about the title. "Chief codemonkey with a clue" will do just fine.
There seems to be some mythology out there about software architects who don't come from coding.
Sort of like MBA managers.
Never seen one of those. If they're not still coding, they don't love the craft enough to be good architects.

To me, it's just someone who can model a complex system across different cross-cutting aspects, understand big-picture and long-term concerns about the goals and evolution of the software, know and use many appropriate tried-and-true patterns, and pragmatically marry all that with project realities.

Re:Fire the Architects (1)

Anonymous Coward | about 3 months ago | (#47787617)

Your articles show you've never actually worked in construction. Or if you did, you had no idea what you were doing.

Extreme programming, they've been throwing that bullshit around for decades.

Re:Fire the Architects (1)

under_score (65824) | about 3 months ago | (#47789065)

I did work in construction (and land surveying, and drafting, and other related fields), but only for a short time. So maybe I had no idea what I was doing... but that's actually the point of the article: software folks who want to use the construction analogy to come up with an "architect role" are working from a place of profound ignorance, and the analogy is deeply flawed.

Re:Fire the Architects (0)

Anonymous Coward | about 3 months ago | (#47787991)

indeed there is a lot of BS in agile and you are a perfect proof of that.

I love it when the IEEE... (-1, Troll)

greenwow (3635575) | about 3 months ago | (#47785967)

intentionally does things to piss off the Republicans. They've fought against secure software for years. When they crippled SSL with their 40-bit limit and disallowed the distribution of SSH, they were proven to be the enemies of anyone that cares about the Internet.

Re:I love it when the IEEE... (3, Insightful)

mr_mischief (456295) | about 3 months ago | (#47786019)

Yeah, you mean that damn "Republican" Bill Clinton who was in office in 1996 when ITAR and EAR resulted in the DOJ going after Phil Zimmerman?

In case you hadn't noticed, Clinton was and is a Democrat, and the President is in charge of the Executive branch agencies.

Re:I love it when the IEEE... (0)

Anonymous Coward | about 3 months ago | (#47786463)

Clinton is a DINO so the Republicans are responsible for the horrible things he did.

Re:I love it when the IEEE... (0)

mr_mischief (456295) | about 3 months ago | (#47786567)

Well, that's a fair enough argument I guess. Neither Bill nor Hillary are as hardcore along party lines as some. I'd hardly place them with the Republicans, but they are closer to moderate/centrist Republicans than to a lot of the Democratic party. In the same way, lots of Republicans are closer to moderate/centrist Democrats than to the fringe right.

Re:I love it when the IEEE... (2, Insightful)

ganjadude (952775) | about 3 months ago | (#47786025)

take your head outa your ass for a few minutes, not everything is republican and democrat here. This has NOTHING To do with politics. we are all dumber for having listened to you, I award you no points, and may god have mercy on your soul

Re:I love it when the IEEE... (2)

Em Adespoton (792954) | about 3 months ago | (#47786035)

It took me a while to parse your comment... as the IEEE is an international standards body. Then I realized that you weren't talking about nation states, but about half of the party system in the US... and then I was lost again figuring out how a standards body pushing a security standard for software architects relates to political gerrymandering in the US. Did you mean that the Republican party of the US is intentionally trying to make the Internet less secure, and that an international standards body setting down guidelines for big business to follow when architecting new software would somehow annoy them because people would suddenly be required to use such standards to develop software like SSL/TLS/SSH/etc.?

useless (0)

Anonymous Coward | about 3 months ago | (#47786193)

Too bad the document just rehashes the same platitudes you hear everywhere else. Nothing to see here. Move along.

Among the other areas of secure design... (0)

Anonymous Coward | about 3 months ago | (#47786387)

You can't have PHP or JVM, and secur(e|ity) in the same sentence. There will be no real security short of people quibbling and lying through their teeth, to themselves and others, until the proliferation of the JVM and PHP has subsided. Those two have caused more security vulnerabilities than religion has caused deaths and useless wars. If you're not against the JVM, then you're anti security. Plain and simple.

Re:Among the other areas of secure design... (2)

gweihir (88907) | about 3 months ago | (#47787815)

You can. But you need to be aware that 99.9% of people working in PHP or Java/the JVM do not have what it takes to make anything that will face real attacks secure. People who can secure things in this particular problem space are exceedingly rare and exceedingly expensive. One problem is that you cannot use most (or any) libraries for security-critical functions, and you may well have to augment the JVM via JNI for secure input validation. Most Java folks are not capable of doing that at all.

Mostly common sense but still good reminders (3, Insightful)

drkstr1 (2072368) | about 3 months ago | (#47786469)

Here it is for anyone who didn't bother to RTFA

1. Earn or Give, but Never Assume, Trust
2. Use an Authentication Mechanism that Cannot be Bypassed or Tampered With
3. Authorize after You Authenticate
4. Strictly Separate Data and Control Instructions, and Never Process Control Instructions Received from Untrusted Sources
5. Define an Approach that Ensures all Data are Explicitly Validated
6. Use Cryptography Correctly
7. Identify Sensitive Data and How They Should Be Handled
8. Always Consider the Users
9. Understand How Integrating External Components Changes Your Attack Surface
10. Be Flexible When Considering Future Changes to Objects and Actors
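
To make items 4 and 5 concrete, here is a minimal sketch assuming Python and the standard-library sqlite3 module (the users table and the username rule are invented): the SQL text is the control channel and stays fixed, user input travels only as bound data, and the input is explicitly validated against an allow-list before use.

    import re
    import sqlite3

    USERNAME_RE = re.compile(r"^[a-z0-9_]{3,32}$")   # item 5: explicit allow-list validation

    def find_user(conn: sqlite3.Connection, username: str):
        if not USERNAME_RE.fullmatch(username):
            raise ValueError("invalid username")
        # Item 4: the query string is constant; the untrusted value is passed as a
        # bound parameter, never concatenated into the SQL (control) channel.
        return conn.execute(
            "SELECT id, name FROM users WHERE name = ?", (username,)
        ).fetchone()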

Number 5 (1)

Rob Fielding (3524407) | about 3 months ago | (#47787509)

Number 5 is the most important. It is about defending against bad input. When an object (some collection of functions and mutable state) has a method invoked, the preconditions must be met, including message validation and current state. A lot of code has no well-defined interfaces (global state). Some code has state isolated behind functions, but no documented (let alone enforced) preconditions. The recommendation implies a common practice in strongly typed languages: stop using raw ints and strings. Consume input to construct types whose existence proves that they passed validation (ex: a type "@NotNull PositiveEvenInteger" as an argument to a function, etc.). DependentTypes (types that depend on values) and DesignByContract are related concepts. With strong enough preconditions, illegal calling sequences can be rejected by the compiler and the runtime as well. If secure code is ever going to be produced on a large scale, people have to get serious about using languages that can express and enforce logical consistency.
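
A rough sketch of the "types whose existence proves validation" idea, assuming Python; PositiveEvenInteger and allocate_buffer_pair are invented names standing in for the @NotNull PositiveEvenInteger example above:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class PositiveEvenInteger:
        value: int

        def __post_init__(self):
            # The instance cannot exist unless the value passed validation.
            if self.value <= 0 or self.value % 2 != 0:
                raise ValueError(f"not a positive even integer: {self.value}")

    def allocate_buffer_pair(size: PositiveEvenInteger) -> list:
        # The precondition travels with the parameter type: if we got here,
        # size.value has already been proven positive and even.
        return [bytearray(size.value // 2), bytearray(size.value // 2)]

    # Untrusted input is consumed once, at the boundary:
    # allocate_buffer_pair(PositiveEvenInteger(int(untrusted_string)))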

Re:Number 5 (1)

gweihir (88907) | about 3 months ago | (#47787853)

Sorry, but no. For example, one of the most important threats these days in the banking industry is data leakage. No amount of input-data validation is going to help one bit there. These aspects are all critical. Mess up one, and all is lost. That is what makes software security so difficult: you have to master the whole problem space before you can produce good solutions. Incidentally, rules like "11: Always consider the business case" and "12: Do a conclusive risk and exposure analysis and rate and document your findings" are the make-or-break aspects, and they are completely missing from the list.

Re:Number 5 (1)

Rob Fielding (3524407) | about 3 months ago | (#47806675)

By what technical means do you prevent data leakage, though? You need to specify what the system (and its users) will NOT do. Defending against bad input (and bad state transitions) is the foundation for everything else, because there is no technical means of enforcing any other security property otherwise. The game of the attacker is to reach and exploit states that are undefined or explicitly forbidden. Think of the Heartbleed bug as an example of a failure in 5 mooting 6. Bad input causes a web server to cough up arbitrary secrets, which can then be used to violate every other security constraint. As for 5 mooting everything, including data leakage protections: SQL injections can be used to extract sensitive data from web sites (i.e., SilkRoad user lists presented back to the administrator with ransom demands). I work on a data leakage protection system, and it's based on earlier intrusion detection and prevention systems for a reason. I regard Intrusion Detection and Intrusion Prevention systems as essentially trying to force a fix of number 5 over a zoo of applications that didn't get it right; they amount to taking action on a connection that looks like it's not safely following the protocol.

Re:Mostly common sense but still good reminders (0)

Anonymous Coward | about 3 months ago | (#47788943)

11. Don't assume you only need validation in client-side Javascript.

It could be considered a combination of 1, 4 and 5, but I've seen plenty of sites that only perform validation with client-side Javascript and are then left scratching their heads when "bad data" gets through.

Customer: "But this is a Javascript app, Javascript must be working!"

Me: "Here, meet Greasemonkey. See, I can disable your validation and completely usurp it."

Meh. (0)

Anonymous Coward | about 3 months ago | (#47786557)

TOC in the wrong order, and a paper size that isn't available in 95% of the world. How annoying.

Beyond that, not too inspired. But then, if this really is news for a great many programmers Out There, then perhaps the profession is just uninspired--despite what it likes to believe.

hahahahahaha (-1)

Anonymous Coward | about 3 months ago | (#47786717)

The Joint Chiefs of Staff will never allow anything popular to be "secure". Information is power, and they want to control the population. So the plebs can never have secure computers. Because that would be equal to a SIGABA machine in every household. HORROR!

Just look at the TLS standard: So complex they can easily fuck it up.

And you betcha, they have fucked up all major CPUs and UARTs and ethernet chips.

All just THEATER. Grow roses or raise geese. That's what NSA does, too.

An excuse for walled gardens and OnLive (2)

tepples (727027) | about 3 months ago | (#47786759)

I read the featured article, and I see ways that publishers could misuse some of the recommendations as excuses for profit-grabbing practices that plenty of Slashdot users would detest.

For example, some organizations will claim a real business need to store intellectual property or other sensitive material on the client. The first consideration is to confirm that sensitive material really does need to be stored on the client.

Video game publishers might take this as an excuse to shift to OnLive-style remote video gaming, where the game runs entirely on the server, and the client just sends keypresses and mouse movements and receives video and audio.

watermark IP

I'm not sure how binary code and assets for a proprietary computer program could be watermarked without needing to separately digitally sign each copy.

Authentication via a cookie stored on a browser client may be sufficient for some resources; stronger forms of authentication (e.g., a two-factor method) should be used for more sensitive functions, such as resetting a password.

For small web sites that don't store financial or health information, I don't see how this can be made affordable. Two-factor typically incurs a cost to ship the client device to clients. Even if you as a developer can assume that the end user already has a mobile phone and pays for service, there's still a cost for you to send text messages and a cost for your users to receive them, especially in the United States market where not all plans include unlimited incoming texts.

a system that has an authentication mechanism, but allows a user to access the service by navigating directly to an “obscure” URL (such as a URL that is not directly linked to in a user interface, or that is simply otherwise “unknown” because a developer has not widely published it) within the service without also requiring an authentication credential, is vulnerable to authentication bypass.

How is disclosure of such a URL any different from disclosure of a password? One could achieve the same objective by changing the URL periodically.

For example, memory access permissions can be used to mark memory that contains only data as non-executable and to mark memory where code is stored as executable, but immutable, at runtime.

This is W^X. But to what extent is it advisable to take this principle as far as iOS takes it, where an application can never flip a page from writable to executable? This policy blocks applications from implementing any sort of JIT compilation, which can limit the runtime performance of a domain-specific language.

Key management mistakes are common, and include hard-coding keys into software (often observed in embedded devices and application software)

What's the practical alternative to hard-coding a key without needing to separately digitally sign each copy of a program?

Default configurations that are “open” (that is, default configurations that allow access to the system or data while the system is being configured or on the first run) assume that the first user is sophisticated enough to understand that other protections must be in place while the system is configured. Assumptions about the sophistication or security knowledge of users are bound to be incorrect some percentage of the time.

If the owner of a machine isn't sophisticated enough to administer it, who is? The owner of a computing platform might use this as an excuse to implement a walled garden.

On the other hand, it might be preferable not to give the user a choice at all; for example, if a default secure choice does not have any material disadvantage over any other; if the choice is in a domain that the user is unlikely to be able to reason about;

A "material disadvantage" from the point of view of a platform's publisher may differ from that from the point of view of the platform's users. Another potential walled garden excuse.

Designers must also consider the implications of user fatigue (for example, the implications of having a user click “OK” every time an application needs a specific permission) and try to design a system that avoids user fatigue while also providing the desired level of security and privacy to the user.

Google tried this with Android by listing all of an application's permissions up front at application installation time. The result was that some end users ended up with no acceptable applications because all applications in a class requested unacceptable permissions.

A more complex example of these inherent tensions would be the need to make security simple enough for typical users while also giving sophisticated or administrative users the control that they require.

That or an application or platform publisher might just punt on serving sophisticated users.

Validate the provenance and integrity of the external component by means of cryptographically trusted hashes and signatures, code signing artifacts, and verification of the downloaded source.

This too could be misinterpreted as a walled garden excuse when a platform owner treats applications as "external components" in this manner.

Re:An excuse for walled gardens and OnLive (1)

kwbauer (1677400) | about 3 months ago | (#47787859)

"How is disclosure of such a URL any different from disclosure of a password? One could achieve the same objective by changing the URL periodically." I believe the article is saying that you don't just blindly allow the use of URLs without verifying that the caller is within an authenticated session. This has nothing to do with changing passwords.

"Google tried this with Android by listing all of an application's permissions up front at application installation time. The result was that some end users ended up with no acceptable applications because all applications in a class requested unacceptable permissions." So, a group of people denied access to every app because they thought the app had too much access to their data and that group of people had no usable apps and this is somehow the fault of everybody except that small group of people? Look, I get that some people don't want some apps being able to do certain things. But if you don't want any app to do anything, why do you have a device capable of running apps?

"That or an application or platform publisher might just punt on serving sophisticated users." Well, no reasonable person expects that every software publisher will meet the needs of every person on the planet. Those users who are "too sophisticated" may need to write their own software or find equally sophisticated developers to write it for them.

"This too could be misinterpreted as a walled garden excuse when a platform owner treats applications as "external components" in this manner." Or it could be correctly interpreted as "use digital signatures to verify senders and that the message has not been tampered with." In this context, a downloaded binary is treated the same as a message before the binary is used.

Two problems with Android app permissions (1)

tepples (727027) | about 3 months ago | (#47788215)

I believe the article is saying that you don't just blindly allow the use of URLs without verifying that the caller is within an authenticated session. This has nothing to do with changing passwords.

A newly installed web application has to create a first authenticated session that lets the founder set his own password (or set his own e-mail address in order to recover his password) and grant himself founder privileges. The URL of this first session is effectively a password (or more properly a substitute for a password), though I'll grant that it should be disabled through other means most of the time.

But if you don't want any app to do anything, why do you have a device capable of running apps?

I see at least two problems.

The first is that Android's permissions are far too coarse-grained. SD card permissions don't have separate settings for "read and write the app's own folder and folders explicitly chosen by the user" and "read and write the whole damn thing". Internet permissions don't have separate settings for "communicate only with a specific set of hostnames" and "communicate with everything". Phone state permissions don't have separate settings for "read whether the phone is ringing as a cue to pause the game or video and save the user's work immediately" and "read the identity of the cellular subscriber whose SIM is in this device".

The other problem is that unlike (say) Bitfrost in OLPC Sugar, Android's model isn't designed for users to be able to turn permissions on and off. A user must either grant all privileges that an application requests or not install the application at all. For example, a keyboard app might be able to read the user's location and contacts, ostensibly for adding nearby landmarks and friends' names to the autocorrect. But a privacy-conscious user has no technical means of preventing the application from misusing those permissions. Android 4.3 experimented with "App Ops", an app on Google Play Store to disable individual permissions of individual applications, but Google did away with that in Android 4.4 because it caused too many applications to crash on an uncaught SecurityException.

Those users who are "too sophisticated" may need to write their own software

Until the device blocks sophisticated users from running their own software. This is where the walled garden concept comes in.

Or it could be correctly interpreted as "use digital signatures to verify senders and that the message has not been tampered with."

I understand how you might see a non sequitur, so let me connect the dots. Verifying a sender is only authentication. According to the article, authentication should always be followed by authorization, a decision as to whether or not the system should trust software from a particular sender. A platform owner could play up its strong authentication and gloss over the inflexible authorization policy that follows it. And "inflexible authorization policy" is another word for a walled garden.

This initiative is futile (1)

gweihir (88907) | about 3 months ago | (#47787773)

While the referenced brochure is nice, anybody who needs it has zero business building anything security-critical. It takes a lot of experience and insight to apply the described things in practice in a way that is reliable, efficient, and secure and that respects the business and the user. Personally, I have more than 20 years of experience with software security and crypto, and looking back, I think I became a competent user, designer, and architect only after 10 years along that way. The problem here is that since software security is very hard, a specialized form of the Dunning-Kruger effect applies. The things I have seen done by people who thought they understood software security are staggering. Unless you have achieved a holistic view of the problem space, do not even try to design any security-critical software.

Re:This initiative is futile (1)

presidenteloco (659168) | about 3 months ago | (#47789545)

I'd say security failure is partly due to incentive alignment failure for developers.

Bad security design is a problem that's going to bite, but usually a little later, after version 1 is out the door and everyone's paid.

Not meeting the pretty much arbitrary and insanely optimistic delivery schedule is going to bite developers right now.

Corners will be cut, even if some of the developers know what SHOULD be done.

In general, almost every architectural aspect of software, security included (well-factoredness, maintainability, scalability, extensibility, low coupling, you name it), is hidden except to a few experts, who aren't usually the ones in decision-making roles. That's why so much delivered software is a Potemkin village.

Re:This initiative is futile (1)

gweihir (88907) | about 3 months ago | (#47790957)

While that certainly plays a role, it is a minor one. It does stand in the way of solving things, but if you do not have developers who can do secure software engineering competently (and that is the normal case), then giving them too little time and money to do it does not matter. The other thing is that people who actually understand software security are much less likely to declare something finished or secure than those with only a superficial understanding. Software security really is an additional, and exceedingly hard to obtain, qualification. That most "programmers" these days struggle even with simple things (see http://blog.codinghorror.com/t... [codinghorror.com], for example) is not the root cause.
