
Exploiting Software

timothy posted more than 10 years ago | from the with-titanium-crowbars dept.

Security 148

prostoalex writes "Why are networked computing environments so insecure? You've heard the story before: early computers were not designed to work in a networked environment, and even most software written later was designed to work on benevolent networks. As Bruce Schneier says in the preface to Building Secure Software, 'We wouldn't have to spend so much time, money and effort on network security if we didn't have such bad software security.'" Read on for prostoalex's review of Exploiting Software, which aims to balance that situation somewhat.

What kind of security are you after?

Published titles on the topic of software security are numerous, but most of them follow certain patterns. Building Secure Software by Viega and McGraw was mainly concerned with proper techniques and a general software engineering mindset, without going into specifics. Then there was Writing Secure Code, by Howard and LeBlanc, which provided concrete examples and showed the "right way" to do secure coding. I heard the title instantly became required reading at the world's largest software corporation. It's currently in its second edition.

Secure Programming Cookbook for C/C++ by Viega and Messier was the hands-on title for those developing C/C++ applications with security in mind, as the cookbook recipes generally gave examples of good code, with each chapter providing some general background information on the topic discussed (I reviewed it on Slashdot in September last year).

Just in case you were wondering, the list above wasn't just retrieved by a quick search at Amazon. My Master's degree, completed last summer, dealt with the topic of software security, and those are the titles I've read preparing to write the theoretical part.

From the other side

With the variety of books on how to write secure software, and on what techniques to use to make existing software more secure, there was a niche for a book targeted specifically at those who want to break software. Black hat or white hat, network security experts have always had titles like Hacking Exposed to give them an idea of the techniques and methodologies in use out there. For software security, most articles and books would generally tell you something along the lines of "do not use strcpy(), as it introduces buffer overruns."

Great, so I won't use strcpy(); does that make my application more secure? Is it more or less hack-proof? What if I am a tester required to probe this aspect of the application to ensure its security before the product ships? Theoretically, hanging out in the proper IRC channels and getting lifetime Phrack and 2600 subscriptions should be enough to cover you at the beginning; however, the learning curve there leaves much to be desired, not to mention that you will probably be kicked out of the channels for asking n00b questions. Another path would be to take an expensive training course from someone with a name in the industry, but the price tag on those generally leaves out self-learners and those operating on limited budgets, which adds up to about 99% of software engineers and testers out there.
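
To make the strcpy() advice concrete, here is a minimal sketch of the difference between an unbounded and a bounded copy (my illustration, not code from the book; the function names are made up):

#include <cstdio>
#include <cstring>

// Illustrative only: why "do not use strcpy()" is the standard advice.
// buf is 16 bytes, and strcpy() will happily write past its end whenever
// the attacker-controlled input is longer.
static void unsafe_copy(const char *attacker_input) {
    char buf[16];
    strcpy(buf, attacker_input);            // overruns buf for inputs of 16+ characters
    printf("copied: %s\n", buf);
}

// A bounded alternative: snprintf() writes at most sizeof(buf) bytes and
// always NUL-terminates, so the overflow (though not silent truncation) is gone.
static void safer_copy(const char *attacker_input) {
    char buf[16];
    snprintf(buf, sizeof(buf), "%s", attacker_input);
    printf("copied: %s\n", buf);
}

int main() {
    safer_copy("AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA");   // truncated, but no overflow
    // unsafe_copy() on the same input would smash the stack instead.
    return 0;
}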

Exploiting Software to the rescue.

Exploiting Software fills the void that existed in this market. Eight chapters take you through the basics and some advanced techniques of attacking software applications with the purpose of executing arbitrary code supplied by an attacker (you).

The book mainly deals with Windows applications for x86 platforms, and some knowledge of C/C++ and the Win32 API is required to work through the example applications. To automate some processes and demonstrate possible attacks the authors use Perl, so knowledge of that would help the reader, too. Some chapters (e.g., the buffer overflow one) show disassembler output, and while you're not expected to read x86 ASM code as if it were English, knowledge of how the registers work and how subprocedure calls are handled on the Intel architecture is required. After all, if potential attackers know it, you'd better familiarize yourself with some low-level code, too.
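
For readers who haven't stared at disassembler output before, here is a rough sketch of the stack mechanics the book assumes you can follow (my simplification, assuming the classic 32-bit cdecl convention; real compiler output varies):

#include <cstdio>

// The call greet("world") below compiles to something roughly like:
//
//   push offset str      ; caller pushes the argument
//   call greet           ; call pushes the return address onto the stack
//   ...
//   push ebp             ; callee saves the caller's frame pointer
//   mov  ebp, esp        ; and establishes its own frame
//   sub  esp, 64         ; reserves space for name[64]
//   ...                  ; function body
//   mov  esp, ebp        ; tear down the frame
//   pop  ebp
//   ret                  ; pops the return address, which lives a few bytes
//                        ; above name[64] on the stack; overflow name and
//                        ; you can overwrite it
int greet(const char *who) {
    char name[64];
    snprintf(name, sizeof(name), "hello, %s", who);
    printf("%s\n", name);
    return 0;
}

int main() {
    return greet("world");
}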

While discussing various possible attacks, the authors present different attack patterns. The patterns themselves usually appear in gray text boxes and discuss the possible exploit in general terms. After that, a series of attack examples follows, with specific descriptions of what can be done, and how. For example, the attack pattern on page 165 is titled "Leverage executable code in non-executable files." The attack example that follows is "Executable fonts," and it describes how font files are generally treated by Windows (they are a special form of DLL). Thus it's possible to embed executable code into a font library you're creating, for which the authors provide an example in Microsoft Visual Studio.
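
The mechanism is easy to sketch outside of fonts as well. The following Windows-only snippet is my illustration, not the book's Visual Studio example (the file and export names are invented); it shows why a "non-executable" extension means little once something loads the file as a library:

#include <windows.h>
#include <cstdio>

// LoadLibraryA() keys off the file's contents, not its extension, so a PE DLL
// shipped under a harmless-looking name can still be loaded and executed by
// anything willing to treat it as a library. "evil.fon" and "Install" are
// made-up names for illustration.
int main() {
    HMODULE h = LoadLibraryA("evil.fon");        // a DLL wearing a font extension
    if (h == NULL) {
        printf("load failed\n");
        return 1;
    }
    FARPROC p = GetProcAddress(h, "Install");    // look up an exported entry point
    if (p != NULL) {
        reinterpret_cast<void (*)()>(p)();       // run code carried in a "data" file
    }
    FreeLibrary(h);
    return 0;
}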

What's cool is that all the attack patterns are listed in a separate table of contents (alas, not in the Web site's table of contents, which just lists the chapters and subchapters), so you can browse to the attack pattern you want to learn about, read some general information about it and then study specific examples. The examples themselves are not in the table of contents, which I think is a mistake, as including them would make searching for relevant patterns much easier. After all, how are you supposed to know that "Informix database file system" (p. 189) is under the "Relative path traversal" pattern? Well, unless you know specifically that the line http://[Informix database host]/ifx/?LO=../../../etc/ is the one discussed in the example, you would have to either go through the index hoping no omissions were made, or read the chapter in its entirety.
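
For readers who haven't met it, the bug behind that relative path traversal example is easy to sketch (illustrative code only, not the book's; the base directory and request value are made up):

#include <cstdio>
#include <string>

// The server glues a client-supplied value onto a base path with no
// canonicalization and no ".." check, so a request can walk right out of
// the directory the designer had in mind.
static std::string build_path(const std::string &base_dir, const std::string &user_value) {
    return base_dir + "/" + user_value;
}

int main() {
    // What the designer expected:
    printf("%s\n", build_path("/var/www/ifx", "report.html").c_str());
    // What the attacker sends (compare the LO=../../../etc/ example on p. 189):
    printf("%s\n", build_path("/var/www/ifx", "../../../etc/passwd").c_str());
    return 0;
}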

One of the best chapters of the book, Reverse Engineering and Program Understanding, which provides a good introduction to the techniques used throughout the book, is available online from Addison-Wesley. With the free chapter you already have one eighth of the book, but don't think that the low number of chapters makes this 512-page title an introductory book.

Target Audience

It looks like there are two major audiences and reading patterns for this book: those wanting to fix their systems ASAP and thus using Exploiting Software as a reference, and those using it as a textbook to learn about security. I've discussed the organization of the book above, and the reference types will probably be more interested in the patterns and examples. For a casual reader (although casual readers wouldn't generally pick up a title with C++, Perl, ASM and hex dumps spread around the chapters) this is a book with great educational value, from two authors who have discovered numerous security vulnerabilities themselves.

Exploiting Software is not an easy title to read. Addison-Wesley shipped me the manuscript copy a month before it hit the bookshelves in its final version, and I found myself going through about two pages an hour. The authors bring up sometimes unfamiliar Win32 APIs and occasionally use ready-made tools available on the Web, so I generally found myself visiting MSDN and Google a lot to read through the available documentation and download the latest versions of the tools used. The book doesn't come with a CD. Some of the material, like inserting a malicious BGP packet to exploit a Cisco router (p. 281), is not really testable at home, and I have some reservations about verifying the example with my employer's routers.

The book is probably apt for 2nd- or 3rd-year computer science students and above. Besides the variety of languages that I mentioned above, you need to be familiar with the basics of the Intel architecture, and generally be fluent with terminology like "buffer," "stack," "syscall," "rootkit," etc., as this is not an "Introduction to..." title. In my experience, you probably won't read it from page 1 to page 512 understanding everything perfectly, but for anyone interested in security and those making a career in software development it looks like a bookshelf must-have.

I interviewed Gary McGraw on the current state of software security, the relevance of the topic to issues beyond C/C++ and improper buffer usage, and future directions in security. Network World magazine also ran an interview with McGraw in which he talks about the reception of the book at the RSA Conference, whether the economics are right for investing in building secure systems, and whether his book does more harm than good by providing a compendium of known exploits.


Alex has written numerous reviews of other software and security titles. You can read more of his opinions at his Web site. You can purchase Exploiting Software: How to Break Code from bn.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.

148 comments




HTML Error (-1, Offtopic)

Anonymous Coward | more than 10 years ago | (#8570733)

Please close your Italics tag.

I used to work with the author Gary McGraw (-1, Troll)

Anonymous Coward | more than 10 years ago | (#8570735)

He is a really brilliant and interesting guy, but he had a really odd habit of picking his nose and eating the booger. Pretty nasty to watch!

Who modded this informative? (-1, Offtopic)

bad enema (745446) | more than 10 years ago | (#8570801)

Hello? An AC claiming to work with the author? First post?

Quel naive.

Who modded this troll? (0)

Anonymous Coward | more than 10 years ago | (#8571310)

Some boogers clearly appear in print in a few places in Gary McGraw's older books. Anyone can go to the library and check for themselves.



But does it cover... (5, Funny)

MalaclypseTheYounger (726934) | more than 10 years ago | (#8570740)

Stupidity? Security is easy. Making software stupid-proof is hard.

A common mistake that people make when trying to design something completely foolproof is to underestimate the ingenuity of complete fools.

-- Douglas Adams

Re:But does it cover... (5, Insightful)

IO ERROR (128968) | more than 10 years ago | (#8570822)

This is where input validation comes in. Check every input value for sanity. Do something reasonable if the value isn't sane. How often have you forgotten to write error checking or input validation code? Do you check the return value from printf()? (yes, it has one) Every time? (I doubt it)

Writing bulletproof software is TEDIOUS. You still have to verify everything, and still somebody's going to find the one thing you missed and exploit the hell out of it...
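
For the record, the printf() point is easy to demonstrate; a toy example (mine, just to show the return value exists and can be negative):

#include <cstdio>

// printf() returns the number of characters written, or a negative value on
// error; it can genuinely fail, e.g. when stdout is a broken pipe or a full
// disk behind a redirect.
int main() {
    int n = printf("hello, world\n");
    if (n < 0) {
        // Not much point reporting the failure to stdout, so just signal it.
        return 1;
    }
    return 0;
}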

Writing is bad enough, testing is worse (5, Informative)

RLW (662014) | more than 10 years ago | (#8571014)

It is true that writing bulletproof software is TEDIOUS. However, after watching our test staff I have determined that testing it is beyond tedious. It is mind numbing!

The company I work for makes software for wafer fabrication. If the software fails, no one dies, but millions of dollars of materials and time can be lost: we spend a lot of time testing and retesting and verifying our tests and going over scenarios to make sure we get it right. Even with all that, over the last couple of years we've logged 600+ defects of various types, all the way from a misspelled word in an error message to misprocessed data to crashes. Most of those errors are detected in house and the customer doesn't see them, but I would guess we get 4 to 8 defects reported from customers every month, most of them minor but a few so egregiously bad that it should have been impossible for them to make it through testing. But even a small error can cost tens of thousands of dollars or more in losses, so no bug is a good bug.
Most other places I've worked have testing in name only. The code is compiled and run for a few seconds: some edit boxes are typed into and the mouse is wiggled around and that's it.

If architects make buildings like programmers write code then every woodpecker that comes along destroys civilization.

Re:Writing is bad enough, testing is worse (5, Insightful)

Greyfox (87712) | more than 10 years ago | (#8571400)

I did security auditing on a standard C library in a previous job. We wrote customized automated tests for every freaking C library function. Not only did we document the potential side effects of each of those functions, we could run the entire test suite whenever modifications were made to the library to ensure that everything still worked as expected. That job was a real eye-opener, let me tell you...

Little things can make a big difference too. Let me give you a hypothetical: let's say the AIX standard C library strlen() tests its input to make sure it's not a NULL pointer prior to examining the string. Let's further say that the GNU C library doesn't make this test. Recompiling your AIX application on Linux would then potentially introduce crashes wherever your application calls strlen() on a NULL pointer.

While the above was a hypothetical situation, I have uncovered a good many memory overflows and leaks simply by compiling and running an application on a different flavor of UNIX than it was originally written for. Having safe underlying library calls is nice, but it also introduces the possibility that actual errors will go unnoticed for a longer period of time.

I'm pretty well convinced that in a situation where the need for security is high (Say, for example, an OS kernel) documented testing of every single function that makes up the software is a necessity.
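
To make the strlen() hypothetical above concrete (illustrative code only; it makes no claim about what AIX or glibc actually do):

#include <cstdio>
#include <cstring>

// Code written against a libc whose strlen() tolerated NULL dies the moment
// it is rebuilt against one that does not.
static int validate_input(const char *token) {
    // If the tokenizer hands back NULL for empty input, this call dereferences
    // a NULL pointer: undefined behavior, typically a SIGSEGV.
    return strlen(token) > 0;
}

int main() {
    const char *token = NULL;               // "empty input" from the tokenizer
    printf("%d\n", validate_input(token));  // likely crashes here
    return 0;
}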

Re:Writing is bad enough, testing is worse (3, Interesting)

Tony-A (29931) | more than 10 years ago | (#8571646)

If architects make buildings like programmers write code then every woodpecker that comes along destroys civilization.

And the architects' answer would be to woodpecker-proof every stick of wood???

If you need to test every case, check every return code, etc, etc, there is something very bad, very fundamentally bad, with the design of the whole mess.

Even with no testing or checking whatsoever in the program, it should be impossible to create a disaster out of all proportion to the cause.

Re:But does it cover... (3, Insightful)

brandonY (575282) | more than 10 years ago | (#8571115)

This is why tools like lint exist. Alongside about 1000 other useful things, lint tells you if you ever fail to check the return value of a function call. Sure, it's tedious to always check the return of printf, but it's necessary, and to some extent it can be automated.

Re:But does it cover... (1)

cras (91254) | more than 10 years ago | (#8571217)

This is where input validation comes in. Check every input value for sanity.
IMHO this is completely the wrong way to do it. Write your code so that you don't have to check input for sanity; your code should be secure no matter what the input is. What input validation is useful for is giving the user meaningful error messages.
Writing bulletproof software is TEDIOUS. You still have to verify everything, and still somebody's going to find the one thing you missed and exploit the hell out of it...
Yes, like you say here. You're going to miss something if you rely on input validation to catch the security holes.

More ranting about the subject.. [iki.fi]

Re:But does it cover... (3, Interesting)

johnnyb (4816) | more than 10 years ago | (#8571828)

This is true. I've also found that if you over-validate the _form_ of input, then the customer service reps and business entities are going to assume that the _value_ of the input is also correct.

For example, in places where I have validated that phone numbers follow a proper form, the customer service reps assumed that the phone numbers were definitely correct, because they were all properly formatted.

It's easier on them if they get a few orders where the phone number is just plain wrong so that they know that this is a USER-generated field and that USERs can lie as much as they want to. It makes it obvious just who is responsible for verifying the validity of the information presented. If you make sure the _form_ is fully prim and proper, people are going to assume that you verified the content as well.

Re:But does it cover... (2, Funny)

Hiro Antagonist (310179) | more than 10 years ago | (#8571965)

Er, how would you report an error to the user if printf() failed?

Re:But does it cover... (5, Informative)

robslimo (587196) | more than 10 years ago | (#8570842)

I agree that making software stupid-proof is difficult, but I'm not so quick to blow off the task of making software secure. One methodology I've not heard much talk about is testing with external software that tries to break the application (though there is at least one company that makes such a product; eEye, I think). Just like any other aspect of ensuring robustness in software, testing is the next critical step after design.

So design it to be secure, then test it! Try to overrun buffers, throw illegal arguments and all around garbage input at it, some specifically designed and even more that are randomized. When it crashes or misbehaves, find out what trashed it and fix it up.

Design, test, test, test, test, test!
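
A bare-bones sketch of what that garbage-throwing loop can look like (illustrative only, not a real fuzzing framework; parse_record() is a made-up stand-in for whatever input handler you are testing):

#include <cstddef>
#include <cstdio>
#include <cstdlib>
#include <ctime>

// Stand-in for the code under test; replace with the real parser or input handler.
static int parse_record(const char *data, size_t len) {
    (void)data;
    (void)len;
    return 0;
}

int main() {
    srand(static_cast<unsigned>(time(NULL)));
    const int iterations = 100000;
    for (int i = 0; i < iterations; ++i) {
        size_t len = static_cast<size_t>(rand() % 4096);
        char *buf = static_cast<char *>(malloc(len + 1));
        for (size_t j = 0; j < len; ++j) {
            buf[j] = static_cast<char>(rand() % 256);   // pure garbage input
        }
        parse_record(buf, len);   // a crash or hang here is a bug worth filing
        free(buf);
    }
    printf("survived %d random inputs\n", iterations);
    return 0;
}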

Re:But does it cover... (5, Interesting)

dilettante (91064) | more than 10 years ago | (#8570884)

Making software stupid-proof is also expensive.

Most programmers that I know who have an interest in security have already set up Windows or Linux boxes with known vulnerabilities and attempted to duplicate known exploits. But knowledge is, at best, half of the picture. In most commercial software endeavors, what's really important is convincing everyone along the chain of command that security is important. This seems like it should be simple, but security is one of those horizontal aspects of software development that adds considerably to the expense of system development without adding many new features.

Further complicating the problem is that even if someone were to develop an environment that attempted to prevent all of the problems caused by programmer errors, it would be horrendously complex and would likely kill performance. Until threat modeling and code inspections with an eye on security become commonplace, this problem will persist.

Re:But does it cover... (3, Insightful)

John Whitley (6067) | more than 10 years ago | (#8571518)

Further complicating the problem is that even if someone were to develop an environment that attempted to prevent all of the problems caused by programmer errors, it would be horrendously complex and would likely kill performance.

IMO, a big part of the solution is factoring out solutions for major known security problem areas into the environments, languages, and frameworks that developers use on a day-to-day basis. E.g. if you're using a language with robust automatic memory management, there's little reason to go looking for C-style buffer overflow exploits coded by your developers.

In today's environments (e.g. Windows and current *nix systems) with the current popular languages (e.g. C, C++) we're at a big disadvantage. Much of the discussion in this thread presumes that coders can/should amass total knowledge of all levels of security exploits, from binary code injection to cross-site scripting (XSS), SQL injection, etc. It becomes overwhelming to a dev who really should be able to focus on the value-added problems at hand. I'm aware of only one cost-efficient approach: choose environments, languages, and/or tools that mitigate known security risks.

Where applicable, this can be done by leveraging environments that can limit the scope of attacks. See SELinux [nsa.gov] and GR Security [grsecurity.net] for ways to patch Linux to meet these needs, or the EROS project for a fresh view of OS security and compartmentalization models. Environment choice is most relevant to folks providing networked services, where they can control the platform specifics.

The cause can also be aided by using languages/frameworks that encapsulate security knowledge. This can be as "simple" as using a language with automatic memory management (to factor out common buffer overflow problems), or along the lines of using scripting frameworks that standardize policies for correctly managing more complex security issues (e.g. cookie management, web input/output validation, XSS issues, etc).

I'd argue that it is possible to improve software security practices significantly simply by careful choices among the tools and techniques available today. But it takes a savvy organization to really commit to providing secure software solutions, and to be able to do so in a cost-effective manner. As always, the hard part of the equation is programming the wetware. 8-)

Re:But does it cover... (4, Insightful)

SphericalCrusher (739397) | more than 10 years ago | (#8570928)

Exactly. Even though I may pick this book up for a good read, I can already say that a good 50% of hacking is not technical.

Social engineering shows that it is often easier to obtain information from someone than to actually copy it from their computer. Just by dressing properly and knowing the correct lingo, you could easily masquerade as an employee of the company.

Read The Art of Deception, by Kevin Mitnick. Great read indeed.

Re:But does it cover... (3, Funny)

corbettw (214229) | more than 10 years ago | (#8570951)

Does anyone else find it ironic that a review of a book regarding software security, which should include such topics as how to check for proper input, did not have a closing italics tag? Not that anyone's gonna hijack a site that doesn't have proper text formatting, but I think it's a glaring example that programmers are human, and human error is the root of all security breaches.

Re:But does it cover... (1)

Bombcar (16057) | more than 10 years ago | (#8571078)

I hereby release the ultimate software security patch under the GPL.


#!/bin/bash
cat $1 > /dev/null
rm $1 -f


After that, there'll be no security issues in the software!

Exploited already!! (1)

johnnyb (4816) | more than 10 years ago | (#8571902)

I have an exploit:

rm -f /dev/null
touch /dev/null

Re:But does it cover... (1)

cptgrudge (177113) | more than 10 years ago | (#8571980)

Ah, yes, but that begs the question, if there is no software, did the security holes actually exist?

This is similar to the classic quote:

'If a tree falls in a forest, and no one's around to hear it, and it falls on a mime, does anyone care?' - Gary Larson, The Far Side

...and would this be useful for the newbie coders? (5, Interesting)

Penguinisto (415985) | more than 10 years ago | (#8570749)

The one question that sticks out: yes, learning how something is exploited will help out a guy who already knows how to code (and has enough experience to look at an exploit from both sides of the attack), but what about the newbie coder who, more than anyone else, desperately needs to know how to code properly in the first place?

Someone ought to combine a guide to writing secure code with exploit examples that dissect each issue from both the cracking and the coding perspective. (Does this book do that, perhaps? Maybe I missed it, but I didn't see any indication of that in there...)

Re:...and would this be useful for the newbie code (4, Interesting)

kneecarrot (646291) | more than 10 years ago | (#8570776)

I would hope that no one lets a newbie coder get his grimy little paws anywhere near code that requires a careful consideration of security.

Note: I did not post this anonymously so I MUST NOT be a troll.

Re:...and would this be useful for the newbie code (3, Funny)

Anonymous Coward | more than 10 years ago | (#8570878)

Yeah, we put all the new guys on security.

Re:...and would this be useful for the newbie code (4, Insightful)

Ytsejam-03 (720340) | more than 10 years ago | (#8570892)

I would hope that no one lets a newbie coder get his grimy little paws anywhere near code that requires a careful consideration of security.

Everyone writing code should be giving careful consideration to security. In my experience few developers do, but that number is increasing...

Good Reading (4, Interesting)

Anonymous Coward | more than 10 years ago | (#8570756)

While I've only read the sample chapter given out at the recent RSA conference, I found it to be extremely interesting. Useful for those looking for ways to secure consulting gigs fixing blatant security flaws in common software (especially web apps).

Just what we needs... (-1, Troll)

Anonymous Coward | more than 10 years ago | (#8570761)

more examples for the kiddies!!

Re:Just what we needs... (1)

TobiasSodergren (470677) | more than 10 years ago | (#8571026)

My _guess_, since I haven't actually read the book, is that it discusses common security practices rather than strange bugs in specific software. I also guess that these security issues are already known among evil-doers.

Or, I just failed to get the joke :)

poetry in motion (5, Insightful)

tomstdenis (446163) | more than 10 years ago | (#8570765)

Why I love Bruce...

"We wouldn't have to spend so much time, money and effort on network security if we didn't have such bad software security."

Is to smart as

"We wouldn't have so many crumbling roads if heavy vehicles didn't drive on them"

Is to insightful. I still say the best way to experience Bruce's mind in action is in person. In his books he's trying to pander to the market [of let's face it less than apt people] and in person he's talking with fairly brilliant people [e.g. me ;-)]

Tom

Re:poetry in motion (4, Insightful)

Anonymous Coward | more than 10 years ago | (#8570867)

Looks like someone flunked the analogy section on the SATs. Actually what he said is more like,

"We wouldn't have to spend so much money fixing roads if we would just build more resilient roads in the first place"

Which is perfectly true. Sure it's not groundbreaking, but then it's not meant to be. The difficult job for quality assurance people is just to make people like you shut up and actually change the problem behaviors.

Re:poetry in motion (2, Interesting)

sammy baby (14909) | more than 10 years ago | (#8571180)

Better, but let me try a third analogy:

"We wouldn't have to spend so much money making our roads perfect if we weren't all driving around on bald tires."

That gets, I think, to the heart of the issue - that people blame the road for problems that might be better addressed by looking at the cars.

Re:poetry in motion (4, Insightful)

Tjebbe (36955) | more than 10 years ago | (#8571428)

And here we see that analogies, like code, are hard to get right the first time (or the second).

Re:poetry in motion (1)

sammy baby (14909) | more than 10 years ago | (#8571491)

Brilliant. :)

Re:poetry in motion (0)

Anonymous Coward | more than 10 years ago | (#8570885)

Yeah, you're a regular genius. Network security and software security are not the same things. Re-read your favorite quote, slap your forehead, go "Oh!" and then get on with your life.

Re:poetry in motion (3, Insightful)

tomstdenis (446163) | more than 10 years ago | (#8571211)

You missed my point [usual for an AC]. The point is Bruce is 99% mouthpiece. People quote him because he knows how to use webmail [I've seen him use it personally, it was fascinating], has long hair and says things like "it's the ramifications of the draconian backbone we are all founded on."

Seriously though... in person he puts on a good show, has a sense of humour and, more importantly, knows when to turn off the media filter.

Tom

Re:poetry in motion (1)

IMarvinTPA (104941) | more than 10 years ago | (#8570887)

"We wouldn't have so many crumbling roads if heavy vehicles didn't drive on them"

I don't quite agree with that. It is more like: "We wouldn't have so many crumbling roads if large, heavy, spiked/square tire vehicles running on afterburner didn't drive on them."

IMarv

Re:poetry in motion (3, Insightful)

GPLDAN (732269) | more than 10 years ago | (#8571036)

Schneier seems to have made a career out of stating obvious truisms, like "all security is a tradeoff." I mean, I've read his books. Are these books really considered insightful?

I don't understand the appeal. The Myth of Homeland Security by Marcus Ranum is 100x the book that Schneier's is. Ranum actually worked in Washington; he not only shows where security breaks down, but the why of the politics behind it. In 100 years, professors who wish to study computer security at this stage of history will put Ranum's book on the syllabus and nobody will remember Bruce.

I guess this all goes back to "Applied Cryptography," a book full of code showing how to implement the widely known encryption algorithms, compiled from the Internet. It may have made a stink when it was published, but what's worse is that Schneier rode the publicity all the way.

Re:poetry in motion (3, Insightful)

Beryllium Sphere(tm) (193358) | more than 10 years ago | (#8571357)

>Schneier seems to have made a career out of stating obvious truisms

Evidently they're not obvious to everyone. If you've been through an airport in the last couple of years, or used a mass-market network-enabled software product, or looked at the security advice given by newspaper columnists, you're forced to conclude that the world needs "Beyond Fear" more than it needs Blowfish.


De-Italicize (-1, Offtopic)

TwistedGreen (80055) | more than 10 years ago | (#8570777)

Looks like you forgot a closing tag...

</i>

Re: oh, really? (-1, Offtopic)

Anonymous Coward | more than 10 years ago | (#8571003)

Looks like your closing tag can't handle it either, you ic!!

How about.. (-1)

SpanishInquisition (127269) | more than 10 years ago | (#8570783)

a book about exploiting the <i> tag

Re:How about.. (-1)

Anonymous Coward | more than 10 years ago | (#8570953)

OMG! Who could have expected you?

Re:How about.. (-1, Offtopic)

Anonymous Coward | more than 10 years ago | (#8571008)

Nobody expects the Spanish Inquisition... O_o Nobody, I say!


Exploiting software (0)

Anonymous Coward | more than 10 years ago | (#8570811)

The authors never state that security is adequately addressed by software engineers; rather, that there is no shortage of mistakes to avoid when designing a product, each of which can leave it inherently insecure.

italics (-1, Offtopic)

golden spud (23221) | more than 10 years ago | (#8570821)

Editors - please fix the non-closed italic tag in the entry so the front page of the site doesn't show up in all italics. Thanks :)

Let the environment help us (5, Insightful)

Lucky Kevin (305138) | more than 10 years ago | (#8570834)

We need to use more intelligent environments to protect us from ourselves (and from other, less capable programmers :-)).
Like the security manager in Java and the security "taint" stuff in Perl.

Electron Band Structure In Germanium, My Ass (-1, Offtopic)

Anonymous Coward | more than 10 years ago | (#8570846)

Abstract: The exponential dependence of resistivity on temperature in germanium is found to be a great big lie. My careful theoretical modeling and painstaking experimentation reveal 1) that my equipment is crap, as are all the available texts on the subject and 2) that this whole exercise was a complete waste of my time.

Introduction

Electrons in germanium are confined to well-defined energy bands that are separated by "forbidden regions" of zero charge-carrier density. You can read about it yourself if you want to, although I don't recommend it. You'll have to wade through an obtuse, convoluted discussion about considering an arbitrary number of non-coupled harmonic-oscillator potentials and taking limits and so on. The upshot is that if you heat up a sample of germanium, electrons will jump from a non-conductive energy band to a conductive one, thereby creating a measurable change in resistivity. This relation between temperature and resistivity can be shown to be exponential in certain temperature regimes by waving your hands and chanting "to first order".

Experiment procedure

I sifted through the box of germanium crystals and chose the one that appeared to be the least cracked. Then I soldered wires onto the crystal in the spots shown in figure 2b of Lab Handout 32. Do you have any idea how hard it is to solder wires to germanium? I'll tell you: real goddamn hard. The solder simply won't stick, and you can forget about getting any of the grad students in the solid state labs to help you out. Once the wires were in place, I attached them as appropriate to the second-rate equipment I scavenged from the back of the lab, none of which worked properly. I soon wised up and swiped replacements from the well-stocked research labs. This is how they treat undergrads around here: they give you broken tools and then don't understand why you don't get any results. In order to control the temperature of the germanium, I attached the crystal to a copper rod, the upper end of which was attached to a heating coil and the lower end of which was dipped in a thermos of liquid nitrogen. Midway through the project, the thermos began leaking. That's right: I pay a cool ten grand a quarter to come here, and yet they can't spare the five bucks to ensure that I have a working thermos.

Results

Check this shit out (Fig. 1). [wisc.edu] That's bonafide, 100%-real data, my friends. I took it myself over the course of two weeks. And this was not a leisurely two weeks, either; I busted my ass day and night in order to provide you with nothing but the best data possible. Now, let's look a bit more closely at this data, remembering that it is absolutely first-rate. Do you see the exponential dependence? I sure don't. I see a bunch of crap. Christ, this was such a waste of my time. Banking on my hopes that whoever grades this will just look at the pictures, I drew an exponential through my noise. I believe the apparent legitimacy is enhanced by the fact that I used a complicated computer program to make the fit. I understand this is the same process by which the top quark was discovered.

Conclusion

Going into physics was the biggest mistake of my life. I should've declared CS. I still wouldn't have any women, but at least I'd be rolling in cash

Can't build security on a weak foundation (5, Interesting)

Theodrake (90052) | more than 10 years ago | (#8570865)

I keep reading all these books about how to fix the holes in existing software, and how not to build systems that are full of holes. It seems we need a decent foundation, and it's not C. The DoD tried with Ada to design a language that eliminates a lot of the common coding mistakes, but it just wasn't embraced. It seems we are attempting to layer security on top of an inherently insecure programming language.

Re:Can't build security on a weak foundation (0)

Anonymous Coward | more than 10 years ago | (#8571011)

power comes with responsibility.

remove all power and you have something secure, and something quite useless.

Re:Can't build security on a weak foundation (2, Interesting)

cmburns69 (169686) | more than 10 years ago | (#8571013)

While C can easily be insecure, it's still one of the fastest and most powerful languages. Other languages are powerful as well, but nobody benchmarks against Java as their baseline. The reason is that doing all that checking internally adds overhead.

C does not have that overhead unless you add it. I don't think that adding another layer is the solution. I think better training of coders would help more. (Which is where a book like this comes in)

Re:Can't build security on a weak foundation (3, Insightful)

Beryllium Sphere(tm) (193358) | more than 10 years ago | (#8571480)

>C does not have that overhead unless you add it. I don't think that adding another layer is the solution.

This is a healthy attitude to take while you're writing code. I produce better code when I try to convince myself that bug-free code is attainable.

If you're designing entire systems, it's safer to take the realistic view that human error occurs at predictable rates in the most highly trained people. Highly trained people are still vulnerable to getting tired, distracted, misled, and to fat-fingering things.

The question is whether you're willing to spend CPU cycles (the cheapest thing in the world) to reduce the scope for human error (the most certain thing in the world).

Re:Can't build security on a weak foundation (1)

tedgyz (515156) | more than 10 years ago | (#8571071)

That was actually one of the primary goals in the design of Java. After writing Java code for several years now, I can't imagine having to use strcpy or worry about whether I'm supposed to free an object or if the API I'm using will free it for me.

I'm sure the C/C++ bigots will flame the hell out of me. Nonetheless, many of the exploits we read about every day simply wouldn't exist if a safer language were used.

BTW, I participated in the whole "Ada thing" back in the 80's, having worked for a hardware vendor that needed Ada as a "checklist item" (what a freakin' expensive checkbox that was!). Ada got a lot of things right, but didn't go quite as far as Java in incorporating inherently safe language constructs. What I really liked about Ada was the package concept, and the built-in language constructs for multi-threading. Ada's biggest faults were trying to bring in too many features and the fact that it was ahead of its time. It was just too damn slow for the hardware of the day.

Re:Can't build security on a weak foundation (1)

maxwell demon (590494) | more than 10 years ago | (#8571248)

I'm sure the C/C++ bigots will flame the hell out of me.

Well, for the C++ part, only for your shown ignorance of C++. I can't imagine calling strcpy (or strncpy, for that matter) either, when I can just write s1 = s2 + s3 with the C++ standard library string class (while not even having to think about how much memory needs to be allocated, or where it needs to be deallocated later). And in C++, all resource handling (not only memory!) can be handled by a simple rule: resources are allocated only in a constructor, and deallocated only in the corresponding destructor.

Now, when speaking about C ... well, I never liked that.
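
A trivial sketch of that s1 = s2 + s3 point, for anyone still thinking in strcpy() terms (my example, nothing book-specific):

#include <cstdio>
#include <string>

// With std::string, the concatenation below sizes and frees its own storage,
// so the usual strcpy()/buffer-size mistakes have nowhere to happen.
int main() {
    std::string s2 = "user-supplied ";
    std::string s3 = "data of any length whatsoever";
    std::string s1 = s2 + s3;     // no buffer length to get wrong
    printf("%s (%zu bytes)\n", s1.c_str(), s1.size());
    return 0;                     // s1, s2 and s3 clean up after themselves here
}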

Re:Can't build security on a weak foundation (0)

Anonymous Coward | more than 10 years ago | (#8571433)

deallocated only in the corresponding destructor.

Yeah, that's fine and dandy until you bring about the virtual destructors...

Re:Can't build security on a weak foundation (1)

maxwell demon (590494) | more than 10 years ago | (#8571471)

Where's the problem with them?

Re:Can't build security on a weak foundation (1)

stevey (64018) | more than 10 years ago | (#8571437)

There is far more to security than having to deal with buffer overflow attacks.

Sure Java tends to be more secure, but part of the reason for that is the lack of usage in places where normal C would be - for example I've never seen a setuid(0) Java executable.

For network servers I agree buffer overflows are pretty much prevented by the use of Java. However there are still flaws there to be exploited due to programmer error - such as writing a HTTP server and not filtering out "..".

This kind of problem is independent of the language, a real human error.

Re:Can't build security on a weak foundation (1)

Greyfox (87712) | more than 10 years ago | (#8571759)

My favorite sigsegv in C would result in a (probably uncaught) NullPointerException in Java. A guy created a validateInput function which took a string from a tokenizer, and the first thing it did was use strlen() to check the size of the string. Except that the tokenizer liked to return NULL pointers for the string if the input was empty. This caused strlen to hork up a sigsegv.

s/tokenizer/StringTokenizer/ in Java and create a validateInput() method that calls .length() on its input string, and you'll get a very similar run-time crash of your application. Sure, it's a potential access exploit in C while in Java it's only a potential DoS, but changing languages does not magically make your programs (or your programmers) more secure. And 5 gets you 10 that after your programmer gets the NullPointerException the first time and isolates the cause, he just puts a catch(NullPointerException e) { } in his code (IMHO Java should not allow empty catch blocks...)

You can make your C++ code pretty secure if you actually use the new language features everywhere. You can't really benefit from, say, std::string, if you never use it anywhere.

Personally I like LISP as a not-too-obnoxious language that had most of the useful features of java a decade or two before java was invented. But that's just me...

Social Engineering (1)

Bs15 (762456) | more than 10 years ago | (#8570899)

I wonder if they talk about security risks regarding social engineering.


Invasion of Italics (-1, Offtopic)

ultrajazz (516720) | more than 10 years ago | (#8570909)

The italics have invaded and taken over.

Secure code in risky languages: Hard (5, Interesting)

fuzzy12345 (745891) | more than 10 years ago | (#8570912)

If you've been around long enough to start seeing patterns in software flaws, keep asking yourself: "What misfeatures of the program's source language contributed to this flaw?"

Personally, I find that I rarely need to access the thirty-second element of a thirty element array, and I'd die happy if I never have to type for(i=0; i<offbyone; i++){ again.

I used to think that Perl with taint checking enabled was the cat's meow, but I'm now leaning towards Lisp. For the rest of you, is it fair to blame the programmer when his tools (which are supposed to make his life easier) fail him? Use better tools!
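
A trivial sketch of the off-by-one the parent is tired of typing around, and of letting the container carry its own bounds (C++ here, though the parent's leanings run to Perl and Lisp):

#include <cstdio>
#include <vector>

int main() {
    std::vector<int> a = {1, 2, 3};

    // The classic mistake: <= walks one element past the end (a[3] does not exist).
    // for (size_t i = 0; i <= a.size(); ++i) printf("%d\n", a[i]);

    for (int x : a) {             // the container owns its own bounds
        printf("%d\n", x);
    }
    return 0;
}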

Re:Secure code in risky languages: Hard (1)

Phisbut (761268) | more than 10 years ago | (#8571034)

For the rest of you, is it fair to blame the programmer when his tools (which are supposed to make his life easier) fail him? Use better tools!

Still, a good programmer should know his tools, and know the limits of his tools.

Re:Secure code in risky languages: Hard (1)

asrb (513512) | more than 10 years ago | (#8571452)

Still, a good programmer should know his tools, and know the limits of his tools.

True, but the problem is that there are so many dangerous limits and gotchas in C/C++. This requires a programmer to spend a large chunk of his time and mental energy thinking about and dealing with these issues. That is time that isn't spent on the business/technical problem the software is supposed to solve.

Wouldn't it be better to use a safer language, where one can do something like:

for X in My_List'First..My_List'Last loop
-- Do something with X
end loop;

than to worry about having the size of the list correct, and worrying that you'll introduce a gruesome, hard-to-discover bug or possible security flaw if you're wrong?

Re:Secure code in risky languages: Hard (1)

maxwell demon (590494) | more than 10 years ago | (#8571638)

You mean like:
typedef std::vector<int>::iterator iter;
for (iter it = myvec.begin(), end = myvec.end();
it != end;
++it)
{
do_something_with(*it);
}
Granted, it's a bit more to type (I'd like to have myvec::iterator!), but where do you have to know the size of the list here?

BTW, if you just call a function, you can have it even shorter:
std::for_each(myvec.begin(), myvec.end(), do_something_with);

Re:Secure code in risky languages: Hard (1)

Phisbut (761268) | more than 10 years ago | (#8571924)

There ya go. There are indeed a whole lot of tools in C/C++; some are riskier than others.
Programming is no different from any other building job. A good C++ programmer knows which tools to use and which not to use, just like a good carpenter knows which hammer to use in each situation.

Therefore, I believe one can blame the programmer when his tools fail him if he didn't use the right tools for the job.

Lisp is a great tool (0)

Anonymous Coward | more than 10 years ago | (#8571164)

Read Paul Graham's article Beating The Averages [paulgraham.com] .

Use better tools? (0)

Anonymous Coward | more than 10 years ago | (#8571232)

Sure, by using a tool that is 5x slower you can avoid accidents, but that's stupid. If people would simply learn to use the fast tool more carefully, we'd be ok. Being forced to use training wheels to avoid accidents is insulting.

How Secure is Secure?? (5, Interesting)

Un0r1g1nal (711750) | more than 10 years ago | (#8570936)

You can write about security, and tell people to code properly and validate everything, but there are always going to be either exceptionally skilled hackers or totally inept users who will be able to do something that you didn't think of, and cause problems.

Look at some of the breaches that have been published in the last few years, and then consider that if those are only the ones that were published (usually because they caught the person who hacked them), how many more are unpublished, either because they couldn't catch the attacker, couldn't figure out how the attacker got in in the first place, or because the attacker penetrated so deeply they didn't want to be embarrassed by it?

On the plus side, it keeps people employed in the network security industry :P So all in all it's a good thing...

The problem starts at management... (5, Informative)

kbonin (58917) | more than 10 years ago | (#8570939)

I write security software for a living, occasionally. I'm back in it now, after leaving it for a while out of frustration, among other things.

The problem is that even for us security geeks, its nearly impossible to get management to buy into spending the time to make things secure. Nobody would believe the stories I could tell about having security features gutted 'cause marketing decided certificates were too complicated, or 'cause access control systems were too hard to use, so "just put the built in passwords back in", or "*** wants a copy of the private keys", or "*** told me to tell you to check the private keys into version control". Stupid, stupid...

Until companies start caring about making products that are ACTUALLY secure, instead of just hiring security geeks to act as figureheads and then not letting them do their jobs, systems will continue to get hacked.

Re:The problem starts with the programmers... (2, Insightful)

lelitsch (31136) | more than 10 years ago | (#8571496)

I don't know why this got voted insightful, but you are making a very good point, although involuntarily:
"Security WON'T work until software engineers and programmers get it into their heads that complicated, invasive security procedures don't work if there are any humans around."

If the security procedures aren't transparent and easy to use from the user's standpoint, users will expend an extraordinary amount of ingenuity and cunning to get around them, usually more than any product designer can spend developing the product in the first place. This doesn't only apply to software, but to everything else. If you put a different combination lock on each filing cabinet (very secure), your office workers will tape a list of all the combinations to the bottom of a desk. If you put a different lock on every door, they'll duct-tape over the bolts to keep them from engaging.

The same applies to software. Get over it and develop a protocol that doesn't hurt the user and is secure. It's hard, but not impossible.

Sounds interesting (4, Informative)

plcurechax (247883) | more than 10 years ago | (#8570987)

It sounds like what I thought I was getting when I bought Hacking: The Art of Exploitation by Jon Erickson, which is written at a fairly basic, easy-to-read level, though some of the writing is not as polished as it could be. It did get through the basic concepts explained in various classic Phrack articles, but without the 'leet speak that drives me crazy.

It sounds like this is a more serious and rigorous book, and those who were turned off by Jon Erickson's Hacking might prefer it. I think I will take a look at it.

I think new programmers are likely better served reading something like Writing Secure Code [microsoft.com] : Practical Strategies and Proven Techniques for Building Secure Applications in a Networked World by Michael Howard and David LeBlanc or Building Secure Software [secureprogramming.com] by John Viega than getting bogged down in the details of this book.

Re:Sounds interesting (3, Funny)

Anonymous Coward | more than 10 years ago | (#8571109)

Microsoft has a book called Writing Secure Code? I looked in the humour section at the bookstore but I couldn't find it.

Why care about insecure software? (0)

Anonymous Coward | more than 10 years ago | (#8570989)

There are harsh sentences for computer hacking, and U.S.-style laws are being pushed all over the world, so I don't see why we should focus on making software "secure". Lawmakers made it clear that they want to be in charge of computer security, so let's not worry about it. The idea is that if programmers don't have to worry about security so much, then software development time gets reduced, which helps the economy.

Unreasonable expectations. (2, Insightful)

ron_ivi (607351) | more than 10 years ago | (#8570998)

From the article: " early computers were not designed to work in the network environment, and even most software written later was designed to work on benevolent networks "

Ugh that's like saying "most office desks aren't secure since their locks are weak and can be drilled easily".

Part of the problem is one of expectations: people use insecure components and have unreasonable expectations that they'll magically be safe because one piece asked for a password.

There's nothing wrong with components (desk locks or computers) that expect to be kept in benevolent environments.

Next they'll be telling us Wikipedia's not secure because their password-checker is weak.

Re:Unreasonable expectations. (4, Insightful)

maxwell demon (590494) | more than 10 years ago | (#8571072)

I think his point is that they are not in such an environment any more, due to the internet. That is, your office desk is now at some public place, with lots of people who'd really like to get in. A place for which it wasn't designed, and for which the security doesn't suffice any more.

great book (1)

Dunceor (758412) | more than 10 years ago | (#8571017)

I ordered this book just when it came out and got it about a week ago. I'm still reading it, since I try to read it very carefully, and I'm writing a review of it and several other books on exploit techniques as well. I can recommend this book, and it was a nice review.

It's Kind of Sad... (2, Insightful)

redragon (161901) | more than 10 years ago | (#8571053)

Just in case you were wondering, the list above wasn't just retrieved by a quick search at Amazon. My Master's degree, completed last summer, dealt with the topic of software security, and those are the titles I've read preparing to write the theoretical part.

It's kind of sad that a statement like this is even necessary. It's an interesting statement regarding what kind of qualifications are often necessary just to get a typical reader to give you credit for not being an idiot.

software isn't the problem (3, Interesting)

mabu (178417) | more than 10 years ago | (#8571191)

As a professional in the security business, I'm responsible for handling tens of thousands of financial transactions on a regular basis. My biggest fear regarding security has more to do with the bad habits my clients have than the integrity of the software I use. When it comes to security, having proprietary software can be advantageous in these situations, as there isn't general knowledge of the system's inner workings freely available.

But the biggest security problem was and likely always will be, people who have access to sensitive information that do not act responsibly. Our systems have never been compromised, but once we transmit information to the client, that data becomes a lot less secure, whether it's from a compromised client machine, a rogue employee or a badly-chosen password.

Re:software isn't the problem (1)

duffbeer703 (177751) | more than 10 years ago | (#8571286)

Your argument is akin to "x don't kill people, people kill people". It's a good argument, but it isn't what people want to hear.

Whether you are talking about cars, software or jungle gyms, engineers design things for sensible people.

The problem is that management want silver bullets and airtight systems and users want usability. Nobody really wins in the short term.

Re:software isn't the problem (1)

mabu (178417) | more than 10 years ago | (#8571705)

I agree.

This is because we've turned into a race of people who prefer to ask "What?" or "Who?" instead of "Why?". We've been programmed to expect instant results and preemptive, pseudo-abstract ideas such as "a secure philosophy" don't seem to offer the instant gratification that blaming everything on the software does.

Exploiting people is a lot easier than exploiting hardware and software. Our entire economy is built around exploiting people, but the powers-that-be don't want this issue to be raised because if people become more aware of how their attitudes can empower their security, the powers-that-be wouldn't have much power.

Disappointing (3, Interesting)

WryCoder (18961) | more than 10 years ago | (#8571201)

Focused on Windows and commercial tools. I read part of the online chapter, which I found superficial. When I got to this:

"Sharing buffers is somewhat like sharing food. A restaurant (hopefully) maintains strict rules about where raw meat can be placed. A little raw juice in someone's cooked meal could lead to illness and a lawsuit. A typical program has many buffers...."

I'm afraid I exited acroread in disgust.

Re:Disappointing (0)

m1chael (636773) | more than 10 years ago | (#8571771)

So what you're saying is the food is in the computer? Oh I get it, I see... [pounds the computer]

Bill Joy Has The Word (3, Insightful)

rixstep (611236) | more than 10 years ago | (#8571271)

I find Windows of absolutely no technical interest. They took systems designed for isolated desktop systems and put them on the net without thinking about evildoers, as our president would say.
- Bill Joy

I liked (4, Insightful)

g0bshiTe (596213) | more than 10 years ago | (#8571301)

Hacking: The Art of Exploitation
It provided these same thoughts on software design, but also delved more into the ASM side of things. The book went on to state that "there is no such thing as secure code." I believe this statement to be true. With the current patch-and-sniff state of Windows, it is very easy to overflow a buffer to execute code. I have often heard someone say "my PC is unhackable, I run blah firewall, or X NAT"; the sad fact is that they are as easy to compromise as an unsecured networked PC. With the plethora of IE and other browser vulnerabilities out there, you don't need to drive a tank through the front door. It seems Microsoft left a Window open.

world's largest software company isn't msft... (0)

Anonymous Coward | more than 10 years ago | (#8571359)

it's ibm, still after all these years.

Jon Erickson's "Hacking: The Art of Exploitation" (2, Interesting)

porkrind (314254) | more than 10 years ago | (#8571363)

As a slightly biased source of information (I work for the publisher, No Starch Press) I would recommend the above-mentioned title for those interested in software exploits. It's a great introduction to fundamental ways to exploit software. It may not have quotes from Schneier, but it's a great book. Check it out here: nostarch.com [nostarch.com]

The other half of the problem (4, Insightful)

Brandybuck (704397) | more than 10 years ago | (#8571493)

There's another side to the problem. It's insidious. And while Microsoft is fully embedded in this tar pit of insecurity, Open Source projects are rarely better.

This problem is "feature requests from users." If very few developers understand security well enough to write secure code, think about how much less end users know. Yet it is the end user who pays us. They're our ultimate boss, even on the free-beer Open Source side of things.

At work I've had feature requirements come to me from marketing that would absolutely eviscerate the product's security. I've also seen bug reports elevated to top priority that would reduce or eliminate product security.

Here are some hypothetical (I hope) examples to show the dangers of this in the Open Source arena. While some of these might have been absurd a few years ago, with today's hyper-concern of usability, it wouldn't surprise me if they actually got implemented.

"It's too much work changing file permissions by hand, so we need a way to automatically execute arbitrary files."

"It's too much work remembering passwords, or remembering the master password for a password manager, so there needs to be a daemon running that will remember for us."

"Messages in XYZ email client should be automatically rendered in HTML/CSS/Javascript."

"The interface is too cluttered! Hide file name extensions!"

Or my all-time favorite...

"Linux needs an InstallShield clone!"

Non-computer people just do not understand it (1)

Lucky Kevin (305138) | more than 10 years ago | (#8571789)

Despite all the security flaws being found in software and all the fixes coming out constantly, the man in the street has no idea what we are talking about.

A few years ago I was designing a radio network protocol for an electricity distribution client to control their switches. I said that I was going to encrypt the information; they said "Why?" After I explained the problems, they simply replied that they had their own cell phone network for the transmissions, no one would break into it, and they weren't interested. I still encrypted it, though!