
Security — Open Vs. Closed

kdawson posted more than 7 years ago | from the depends-what-the-meaning-of-is-is dept.


AlexGr points out an article in ACM Queue, "Open vs. Closed," in which Richard Ford prods at all the unknowns and grey areas in the question: is the open source or the closed source model more secure? While Ford notes that "there is no better way to start an argument among a group of developers than proclaiming Operating System A to be 'more secure' than Operating System B," he goes on to provide a nuanced and intelligent discussion on the subject, which includes guidelines as to where the use of "security through obscurity" may be appropriate.


101 comments


What does slashdot think? (5, Funny)

Anonymous Coward | more than 7 years ago | (#17909748)

I wonder which side slashdot will take in this argument...

Re:What does slashdot think? (1)

HomelessInLaJolla (1026842) | more than 7 years ago | (#17910042)

After the recent mod bomb frenzy [slashdot.org] , I'm going to try and duck the cutting swath of industry employees and begin with...

"Um. I have no opinion but, if I did, I support whichever puts more food on more people's tables and pays more people's mortgages."

How's that for the mods?

Re:What does slashdot think? (1)

toadlife (301863) | more than 7 years ago | (#17912290)

Or you could just not say stupid things.

Re:What does slashdot think? (1)

HomelessInLaJolla (1026842) | more than 7 years ago | (#17912534)

Ha. "Because everything you say is stupid."

omg pwnt.

Re:What does slashdot think? (2, Funny)

mopower70 (250015) | more than 7 years ago | (#17912272)

Operating System B! We are definitely, firmly on the side of Operating System B!

Re:What does slashdot think? (1)

Sillygates (967271) | more than 7 years ago | (#17914844)

Who in their right mind would use BeOS?

endless debate (3, Insightful)

cpearson (809811) | more than 7 years ago | (#17909774)

Applications and systems that are developed rapidly by a small set of programmers would benefit from closed-source security, especially when producing software for small niches. Systems that are developed on a large scale, and mission-critical applications, benefit from open-source models because they can utilize a large tester base.

Vista Forum [vistahelpforum.com]

Printable view link (1, Informative)

Anonymous Coward | more than 7 years ago | (#17909868)

http://www.acmqueue.com/modules.php?name=Content&pa=printer_friendly&pid=453&page=2 [acmqueue.com]

Cleverly hidden on page 2 of 4 advertisement-riddled pages. You would think ACM could focus on the content with fewer distractions than other sites... guess not.

Re:endless debate (-1, Offtopic)

Anonymous Coward | more than 7 years ago | (#17909926)

Are you this cpearson [cpearson.com] ? If you are, your Excel website is very helpful, thanks a ton.

Re:endless debate (3, Insightful)

HomelessInLaJolla (1026842) | more than 7 years ago | (#17910152)

Systems that are developed on a large scale, and mission-critical applications, benefit from open-source models because they can utilize a large tester base
I see it in terms of receiving what was paid for.

A program which costs $200 (typical of the closed-source industry) should not be relying on the consumer to be the (security) beta testers.

A program which costs nothing, or only a nominal amount (typical of FOSS), can ethically rely on the consumer base to be (security) beta testers.

If I paid for it then it should work (shouldn't break/shouldn't be so easily exploitable). If I didn't pay for it then I should expect to make a contribution.

Right now the industry is addicted to charging production quality prices for beta (even alpha) quality software.

Re:endless debate (1)

PWill (1006147) | more than 7 years ago | (#17910850)

If I paid for it then it should work (shouldn't break/shouldn't be so easily exploitable). If I didn't pay for it then I should expect to make a contribution.
This isn't always how it happens in the real world, though. If the open project has a huge community (e.g. Apache) there tend to be even fewer exploits than in commercial software. May I remind you of IIS [zdnet.com] ?

Re:endless debate (0)

HomelessInLaJolla (1026842) | more than 7 years ago | (#17911840)

> This isn't always how it happens in the real world

Not since '94, no. (circa) '94 was the year in which the government began authorizing enormous amounts of taxpayer money to be funneled into the stock market under the auspices of technological and computing grants.

To sum it up: the problems experienced in security and coding today are a logical result of the artificial inflation of the computing industry for a single purpose: the profit of the politicians who authorized the spending and the bankers who were the first to line up at the trough.

Had the industry been allowed to develop on its own, without the interference from the government and the bankers, the entire landscape would look much nicer than it does today.

Re:endless debate (1)

toadlife (301863) | more than 7 years ago | (#17912232)

"If the open project has a huge community (e.g. Apache) there tend to be even fewer exploits than in commercial software. May I remind you of IIS?"
In the last four years, there have been far fewer exploits discovered for IIS than Apache.

So...you were saying?

Re:endless debate (0)

Anonymous Coward | more than 7 years ago | (#17914476)

The exploits in Apache get discovered, and then fixed because it's open source. The exploits in IIS mostly go unnoticed, since the users don't have access to the code.

Re:endless debate (0)

Anonymous Coward | more than 7 years ago | (#17913218)

A program which costs $200 (typical of the closed-source industry) should not be relying on the consumer to be the (security) beta testers.

You didn't read the EULA. You are subject to anything that piece of paper says for as long as you continue to use that product.

Take this example: You rent a car. (From your statement, I take it you think 200 smackers is an expensive piece of software, so let's make the rental car a luxury... umm, Cadillac Escalade.) You take said vehicle to downtown N.Y.C./Detroit/Houston/L.A./S.F./St. Louis/Chicago/etc. and park it in an unsecured area in a known bad part of town. Despite your best efforts at securing the car, said car is broken into and all your stuff is stolen. Presume the thief had no experience with, or desire for, the vehicle itself.

Now, who's to blame for the actual action of the thief getting into the car (not stealing your stuff, or any motivations)? The car manufacturer? The rental company? The police dept.? I'll bet you didn't read the fine print in your contract (EULA equivalent): You are responsible. Why? It was one of the conditions you agreed to for use of the car. This doesn't prevent you from trying to sue anyone you feel like suing over it. Of course you'll be laughed out of court when the defense reads your *signed* rental contract into evidence.

Why you secured the car in the first place I will not go into, for obviously due to your statement above you think that the manufacturer or rental company should have locked the doors, set the alarm (which consists of releasing trained killer pit-bulls, an automated heat-targeting 6-barreled 30mm vulcan cannon popping out of the roof, and a self-destruct mechanism as a last resort to keep the thief from your stuff), posted an armed guard outside, and parked the car in a secured area for you, right?

THIS is the primary reason I don't use - or try my best not to use - closed-source software. I do not want to pay through the nose for buggy, beta closed source, without even being told that it's buggy and beta.

Right now the industry is addicted to charging production quality prices for beta (even alpha) quality software.

You are complaining about paying 200 bucks for an OS. I have paid two and three thousand dollars US for OS UPGRADES, in addition to many thousands of dollars a quarter to keep our support agreements up to date. Granted, these were not OSes that were considered 'buggy' or 'security problems', but when there were problems you still had to deal with closed-source rules, and there were still EULAs to be followed. Ditto for multi-million-dollar accounting systems. Let me tell you from experience: there is nothing that will make you more apprehensive and paranoid than the Director of Finance for your corporation standing over your shoulder at 3 in the morning the day before paychecks are to go out, while you are calling halfway around the world to leave a "call-me-back" message on a financial software publisher's answering service to get them to help you fix whatever is wrong with the Corp's twenty-plus-million-dollar financial system.

His rule of thumb is useful. (4, Insightful)

Kadin2048 (468275) | more than 7 years ago | (#17910320)

Actually, his conclusion contains a far more useful test, although it does boil down to common sense:

The difference between these cases is simple: determinism. In the case of the encryption software, the outcome is deterministic. Knowing everything about the mechanism doesn't compromise the security of the outcome. In contrast, for antivirus software the system is heuristic. As such, some things benefit from disclosure, and some things don't. In these two cases, it's obvious. Unfortunately, that's the exception, not the rule. The problem is that many systems contain aspects that are heuristic and aspects that are deterministic.
In essence, the question to ask is whether closing the source really results in any increased security; in the case of DRM systems (his example), it does, because they are broken by default, and thus knowledge of the 'algorithm' allows the system to be cracked.

Personally, I would argue that such 'heuristically secured' systems are broken by default, and that there are good reasons why generations of computer scientists have insisted that security through obscurity is meaningless. The "security" provided by such heuristics is of value only to marketing and legal departments; it is not, and should not be confused with, the security offered by 'deterministically secured' systems (cryptography is his example). Saying that an application is "secure," when it depends on an attacker not knowing how it works, borders on unethical false advertising.

Re:endless debate (1)

truthsearch (249536) | more than 7 years ago | (#17910364)

What does the openness of the code have to do with the size of the tester base? Closed source applications can be downloaded just as easily as open source apps. Windows has had hundreds of thousands of beta testers.

Re:endless debate (1)

newt0311 (973957) | more than 7 years ago | (#17916810)

Try testing something when you have the source and when you don't. HINT: It's much easier with the source, at least for those who care. It makes it possible to spot fishy code paths, invalid typecasts, etc., and insert code to test those things specifically.

Re:endless debate (0)

Anonymous Coward | more than 7 years ago | (#17910968)

Applications and systems that are developed rapidly by a small set of programmers would benefit from -->open source security, especially when producing software for small niches. Systems that are developed on a large scale, and mission-critical applications, benefit from -->closed source models because they can utilize a large tester base.

Sounds just as good...

closed source is just one aspect (5, Insightful)

fred fleenblat (463628) | more than 7 years ago | (#17909784)

Businesses that choose to develop closed-source software seem to also choose to ship code prematurely, to over-provision with extra features, to decide on features for marketing rather than security or quality reasons, and generally compromise the product in multiple ways. In that light, closed source isn't itself the security problem, it's just an indicator that there probably are other problems lurking.

Re:closed source is just one aspect (3, Interesting)

ThinkFr33ly (902481) | more than 7 years ago | (#17909982)

But those same companies are at the mercy of consumers, just like anybody else. If there is enough bad press due to the poor security of the product, the company will be forced to fix things. This is especially true for companies that sell software to large corporations.

Microsoft really is a case in point. They did a lot of what you described, got nailed for it by the press, by consumers, and by corporations, and they really did change their ways. Their Secure Development Lifecycle [microsoft.com] has turned out some pretty high quality releases. For instance, IIS 6 has far fewer vulnerabilities than Apache. One certainly couldn't say that for IIS 5.

Re:closed source is just one aspect (1)

grasshoppa (657393) | more than 7 years ago | (#17910230)

But those same companies are at the mercy of consumers, just like anybody else. If there is enough bad press due to the poor security of the product, the company will be forced to fix things. This is especially true for companies that sell software to large corporations.

You really think that. It's cute. Now let me tell you how it works in the real world: software has such a perceived cost for development (factual or not) that once a company comes out with something that sorta works, no one else is willing to jump ship. On top of that, the companies that are first to market typically focus more on vendor lock-in than features or stability. Thus, you have mini-market monopolies, where the customer is at the mercy of their developer (through piss-poor decisions made years ago. Thank you, upper management), sometimes even if there is an alternative that is better.

So you'll get situations where there is a piece of software installed, and people know not to click a certain thing or it'll blow up the application. I work with an application on a daily basis that is still running in the Windows 16-bit subsystem. Wanna hear something funny? Every now and then an end-user system will magically blow up, taking the 16-bit subsystem with it. No way short of a wipe and reload to make that app work on that system again. This happens even when users are limited users. Why do we still use it? Well, because all our data is in there, and there is no one else that lies to our upper management like these guys do. Plain and simple.

Microsoft really is a case in point. They did a lot of what you described, got nailed for it by the press, by consumers, and by corporations, and they really did change their ways. Their Secure Development Lifecycle has turned out some pretty high quality releases. For instance, IIS 6 has far fewer vulnerabilities than Apache. One certainly couldn't say that for IIS 5.

If MS really is a changed corporation ( which remains to be seen ), they'd be the exception to the rule. And how, exactly, did they get nailed for their behavior by consumers? Did consumers stop buying their crap? Obviously not. So how?

Re:closed source is just one aspect (1)

sheldon (2322) | more than 7 years ago | (#17911772)

If MS really is a changed corporation ( which remains to be seen ), they'd be the exception to the rule. And how, exactly, did they get nailed for their behavior by consumers? Did consumers stop buying their crap? Obviously not. So how?
By bad press.

This is an aspect of the Free Market that I don't think some people fully acknowledge. The invisible hand is not just the consumers buying the product, but those who don't buy the product and complain openly about it. Those open complaints do build up, and you can reach a tipping point where people just start abandoning your product because they are sick and tired of the problems. I think we're starting to see this with Ford and GM now, with Toyota becoming the dominant automaker, as an example.

There's also the problem of the Innovator's Dilemma, where you can get yourself into a position where you are only pleasing existing customers and ignoring non-customers... and the inevitable result is that all your customers end up becoming non-customers as they see the new product. I.e., Polaroid losing out to digital cameras.

Microsoft recognized that the problem had gotten bad enough that if they didn't act to change things, in the future they would likely start losing customers.

Re:closed source is just one aspect (2, Interesting)

VolciMaster (821873) | more than 7 years ago | (#17910744)

For instance, IIS 6 has far fewer vulnerabilities than Apache. One certainly couldn't say that for IIS 5.

I've never heard anyone quote such a stat. Where does said statistic come from?

Re:closed source is just one aspect (2, Informative)

ThinkFr33ly (902481) | more than 7 years ago | (#17911108)

See: http://blogs.msdn.com/michael_howard/archive/2004/10/15/242966.aspx [msdn.com]
See: http://rmh.blogs.com/weblog/2005/05/is_microsoft_ii.html [blogs.com]

Those posts are somewhat old, but the trend apparently continues if you go check Secunia, or your favorite vulnerability lists.

Simple (4, Insightful)

Anonymous Coward | more than 7 years ago | (#17909804)

The most secure operating system is the least-used operating system.

Re:Simple (1)

$RANDOMLUSER (804576) | more than 7 years ago | (#17910054)

> The most secure operating system is the least-used operating system.

So, OpenVMS, then?

OT: Things you can't ask about VMS. (2, Interesting)

Kadin2048 (468275) | more than 7 years ago | (#17910690)

This is slightly off-topic, but a while back I got interested in OpenVMS, and VAX stuff in general. (I started doing some research because I thought I was going to get stuck doing some turd polishing of old mainframe software, but it never materialized. By then, though, I was just interested.) Even in hindsight (given that I think we can agree that UNIX derivatives seem to have gained traction over VMS), it's extremely difficult to find any sort of rational comparison of VAX/VMS and its architecture and design paradigms to those of UNIX. Whenever someone asks, the response is basically "don't ask [hp.com] , you don't want to start that." Nobody wants to talk about anything that might invite UNIX/VMS comparisons, because it will cause flamewars -- even though such a discussion, at this point, might be interesting and productive. (There are so many people around who aren't familiar with VMS, or anything other than Windows and UNIX, that any perspective besides those would be worthwhile.)

At any rate, it struck me as interesting, because sometimes it's easy to assume that Windows/Linux (or Windows/Mac, or Windows/something) is the first Great OS War. But people have been getting emotionally attached to operating systems probably as long as they have existed; and ever since, that attachment has helped quash rational discussion, both through the flamewars themselves and through the self-censorship people practice to avoid starting them.

Re:OT: Things you can't ask about VMS. (1)

$RANDOMLUSER (804576) | more than 7 years ago | (#17911088)

If you ever did any assembly language programming in the MS-DOS 3.2 days, you've got a slight, tiny flavor of what writing on VMS was like. All the system calls had assembly interfaces, with lots and lots of bit fields and variant records; so it was like "if the third bit in the second word of a syscall parameter block was set, then the fourth and fifth words would be string descriptors, but if the fourth bit was set, then those words were integers that meant something else entirely". The documentation would hop you from binder to binder to binder to binder, often in a full circle.

The "C" header files (with a non-standard compiler) were even more horrific, things like:
union {
  struct a {
      int m;
      int n;
  };
  struct b {
    unsigned p :4;
    unsigned q :8;
  };
  union {
    /* ...more of the same, only nested... */
  };
};

I'm sure in its day, in VAX MACRO, you could do some amazing stuff, but wow, getting there was ghastly.

Re:OT: Things you can't ask about VMS. (1)

SL Baur (19540) | more than 7 years ago | (#17914364)

VMS was a great system for its time, but it was always like a beautiful rose that smelled bad. It supported things that are only now getting into Linux; in particular, all system calls could be made asynchronously. It had real-time scheduling that could be made hard real-time, and fine-grained permissions, both on files with RMS and on process privileges. It had a rich set of primitives for doing parallel user-land programming, like AST callbacks and the lock management system. Fun stuff, once you wade through the generally difficult documentation.

It also included hidden interfaces that were basically undocumented and forbidden to non-DEC code. The CLI, DCL, was a special process that overlaid parts of its address space with the programs the user invoked on the command line. As far as I remember, the process by which a programmer could do that was largely undocumented.
My most memorable experience was trying to duplicate the actions of VAX/TPU, where it was possible to invoke LaTeX from inside the editor. No matter what I tried, I could not make user code work the same way (LaTeX would block waiting for interactive input). And that included a session on the phone with Digital technical support, where the engineer supposedly had the VAX/TPU source code in front of him while I read line by line what I was doing.

If any of that sounds familiar, it's probably because when the VMS team was disbanded at Digital, the core went to Microsoft to do Microsoft Windows NT.

Re:Simple (2, Interesting)

Marillion (33728) | more than 7 years ago | (#17910086)

I don't agree.

The central server for a system of airport flight information display screens (FIDS) where I once worked ran an operating system called iRMX. It had pathetic security. The only thing that kept that system secure was the lock on the door to the room.

Re:Simple (3, Informative)

$RANDOMLUSER (804576) | more than 7 years ago | (#17910312)

That was Intel's Realtime Multitasking eXecutive - a REAL-TIME operating system. Security wasn't its job. You may as well ask how good the security on QNX or a PLC is. Answer: nobody cares, as long as the I/O completes on time.

Re:Simple (O/T) (1)

ampathee (682788) | more than 7 years ago | (#17911394)

Your sig can be simplified to:
ruby -e "[1383424633,543781664,1718971914].each{|x| print([x].pack('N'))}"

I agree with the output though :)

Re:Simple (O/T) (1, Funny)

Tony Hoyle (11698) | more than 7 years ago | (#17912652)

Your sig can be simplified to:
ruby -e "[1383424633,543781664,1718971914].each{|x| print([x].pack('N'))}"


You must be using some definition of 'simplified' I wasn't previously aware of.

Re:Simple (O/T) (1)

ampathee (682788) | more than 7 years ago | (#17912898)

Simplify: to make simpler or reduce in complexity.
It's simpler than the original, mr. smarty-pants.
Anyway, ruby -e "puts 'Ruby is fun'" wouldn't be very interesting now, would it?

Re:Simple (2, Funny)

bssteph (967858) | more than 7 years ago | (#17910124)

The most secure operating system is the least-used operating system.

I've written the most secure operating system in the world. No, you can't have it. I forgot where I put it.

Re:Simple (2, Interesting)

CastrTroy (595695) | more than 7 years ago | (#17910168)

Why wouldn't people want to use a secure operating system? I know you're trying to say that vulnerabilities only show up once people try to break the system, and crackers only try to break popular systems. However, I don't believe it's a given that a system has to have vulnerabilities. If someone developed a system that actually didn't have vulnerabilities, and actually ran all the necessary software, then wouldn't everybody start using it? I think the only thing holding back Linux is hardware and software support. The "operating system", from the kernel up to the desktop environment, is very good. The only problems are that a lot of hardware doesn't work well, and a lot of the applications that run on Windows won't run on it.

LOL - Next time RTFA (0)

Anonymous Coward | more than 7 years ago | (#17910224)

The author already mentioned MS-DOS. See his comments, which refute your point.

"it might be possible to conclude that MS-DOS with a TCP/IP stack is more secure than a fully patched Windows XP box..."

Please show me one real security professional who would suggest using MS-DOS for a system which had to be secure.

Re:Simple (0)

Anonymous Coward | more than 7 years ago | (#17910352)

Thus far my NeXTStation has never been compromised...

Re:Simple (1)

RAMMS+EIN (578166) | more than 7 years ago | (#17910452)

That depends on your definition of "secure". To me, how much something is used has nothing to do with that. What you're saying sounds to me like "the boat that floats best is the one least used". By that definition, a model of which 100,000 have been built, of which 3 have sunk, floats worse than a model which would always sink, but which nobody has put into the water yet. That's not how I want my security to be.

Re:Simple (1)

DittoBox (978894) | more than 7 years ago | (#17910700)

Security by obscurity...isn't.

Re:Simple (1)

daigu (111684) | more than 7 years ago | (#17912248)

Oh, so THAT's why OpenBSD [openbsd.org] is relatively secure. If more people started using it, I guess it would suddenly get less secure. Thanks for clearing that up.

Your comment gets at the issue that there are more exploits for more commonly used systems. Still, more secure systems may be used less because they are more difficult (or expensive, or whatever) to use - and the same is probably true of security's component parts, such as passwords, physical security, etc.

You Can't Know Which is More Secure (4, Insightful)

RAMMS+EIN (578166) | more than 7 years ago | (#17909808)

With regard to the question of which product is more secure, the only right answer is that you will never know. The problem is that you can't eliminate bias from a test that is supposed to assess this. Since a single product can't be both open source and closed source, you will always be comparing multiple products. And you can't reliably establish the relative security of those products, let alone attribute the result to open vs. closed source.

Re:You Can't Know Which is More Secure (1)

Red Flayer (890720) | more than 7 years ago | (#17909984)

As stated earlier, you can't reliably establish the relative security of these products, let alone attribute the result to open vs. closed source.
Well, the point of the article was that you can't even get to that point, since there is no widely accepted, measurable definition of 'security', no inclusive metric of security. This means there is no way to define a 'more secure' approach, and therefore all we can do is discuss individual products in comparison with one another.

Re:You Can't Know Which is More Secure (4, Interesting)

RAMMS+EIN (578166) | more than 7 years ago | (#17910376)

``This means there is no way to define a 'more secure' approach, and therefore all we can do is discuss individual products in comparison with one another.''

And I'm saying that even that is pretty meaningless. Five vulnerabilities were fixed in Mozilla last week, and two in Opera. Which is more secure? Twelve new vulnerabilities have been discovered in Firefox, and one in Opera. Which is more secure? The Apache servers in our sample have been broken into 50 times during the course of our study, compared to 3 break-ins for lighttpd. Which is more secure? A team of five experts found three vulnerabilities in the NT kernel and two in Linux. Which is more secure? Static analysis found 10,000 possible vulnerabilities in Konqueror, and Microsoft reports static analysis found 1,000 possible vulnerabilities in MSIE. Which is more secure? Which of the mentioned products should you select, based on the given facts, if your goal is to minimize future break-ins?

I honestly don't know the answer to any of the questions I asked. I really think none of the (fictional) data I gave says anything about the relative security of the products it ostensibly pertains to. I _feel_ more secure running OpenBSD than Windows 2000, and, given the absence of reports of OpenBSD machines being broken into on a large scale, that feeling seems justified. But this is entirely based on something that I _don't_ know. I _don't_ know that OpenBSD machines are massively broken into, and thus, I feel safe. However, I also don't know that they are _not_ massively broken into, so my feeling could be entirely misplaced. I certainly don't know that there are no holes in OpenBSD, so even if it hasn't been massively exploited up to now, it could start tomorrow. All I have is the assurance of the developers that they make great efforts to improve security. I believe them, hope they are indeed doing so, and hope they are actually _achieving_ better security that way. But I don't _know_ that.

Re:You Can't Know Which is More Secure (1)

headkase (533448) | more than 7 years ago | (#17911652)

You are doing a really good job of summarizing the first page, in both your original post and this one. But did you read the other three pages? They discuss the advantages and disadvantages of both methodologies in specific scenarios, contrasting how each approach (and mixed approaches) fares.
You've anchored yourself to a position that can't be assailed, but that's not the interesting part. Go read the other three pages.

Re:You Can't Know Which is More Secure (1)

grcumb (781340) | more than 7 years ago | (#17911680)

And I'm saying that even that is pretty meaningless. Five vulnerabilities were fixed in Mozilla last week, and two in Opera. Which is more secure? Twelve new vulnerabilities have been discovered in Firefox, and one in Opera. Which is more secure?

Your point's well taken, but your conclusions (here and in your first post above) are hopelessly fatalistic.

You don't give nearly enough credit to the analytical process. Instead you focus on points that might philosophically be true (e.g. "no app is open and closed at once, so comparison is pointless"). In practical terms this is finicky to the point of meaninglessness. It's perfectly straightforward to compare the suitability for use of two different applications that perform largely the same task. Assessing the security risks inherent in using one or the other is also a rather finite (if not entirely methodical) task. Experienced and professional analysts do this consistently well.

Security analysts aren't (or shouldn't be) blind to the limitations you allude to, but somehow they do manage to render a useful service. My suggestion to you is that you show a little more optimism, and at the same time, forget about certainty. Security analysis is not like mathematics; there is no proof, only evidence and the wisdom of experience.

Schneier's point that security is a process, not a product, comes into play here. You can't say, "This product has 14.2% more security than that one!" But you can say, "Experience shows us that this application is exploited less than that one, and our attempts to break it were less successful."

The Quantity of the Eyes Isn't Always The Issue (4, Insightful)

ThinkFr33ly (902481) | more than 7 years ago | (#17909854)

One supposed advantage of open source software is that, well, it's open. Everybody can take a look and see if the code has holes. The idea is that the more eyes that look at something, the greater the chance of somebody spotting bugs.

But the quantity of eyes isn't always the issue. I could put the Linux kernel source code in front of 1 million six-year-olds, and there is very little chance any of them would find a single bug.

Obviously, we're not talking about six-year-old eyes here, but continue the scenario. There are some types of bugs that even very experienced coders wouldn't necessarily spot. Not every kind of security hole is a simple buffer overflow. Some kinds of issues will really only be spotted by a highly trained and specialized set of eyes.

Now, those highly trained eyes may be looking at the open source code, or they may not. All I'm saying is that the quote "Given enough eyeballs, all bugs are shallow" is not particularly accurate.

Re:The Quantity of the Eyes Isn't Always The Issue (1)

iaculus (1032214) | more than 7 years ago | (#17909992)

(Open) source code is more easily human-readable than a compiled binary. The humans who are looking at the (compiled binary) code of closed-source software are probably doing so illegally, and the minimum knowledge required to read and understand it is greater than the minimum knowledge required to read and understand source code. So the people who just poke around to have a look are less likely to report bugs, because that code was supposed to be closed source. And the people looking for ways to break it are probably more likely to have nefarious intentions.

Re:The Quantity of the Eyes Isn't Always The Issue (1)

Doctor Crumb (737936) | more than 7 years ago | (#17910048)

It's not so much the number of eyes on open source software, as the lack of eyes on closed source software. Given few enough eyeballs, all bugs are left unfixed, as the developers are off working on their other 30 feature requests and don't have time to fix security on something that works well enough.

Re:The Quantity of the Eyes Isn't Always The Issue (3, Insightful)

danpsmith (922127) | more than 7 years ago | (#17910802)

Now, those highly trained eyes may be looking at the open source code, or they may not. All I'm saying is that the quote "Given enough eyeballs, all bugs are shallow" is not particularly accurate.

I think, however, the "open source is more secure" argument tends to follow the idea that, behind the scenes, the code under closed-source applications tends to be generally faulty - or at least Windows code in particular. There could very well be many exploits that, given the code for MS Vista, amateur programmers could easily pick out, simply because the code base is so vast and the number of people who have full access to it so few.

It's just like if I write my own little closed-source app: at first it may appear flawless to me, because I am the only one seeing the code. But I might code in an inherently buggy way that would be easily picked up by another set of eyes. Then, as little problems flood in from end users, instead of fixing my coding methodology, I make little fixes to the code that are basically workarounds, rather than solving a bigger problem that would require more time (something more fundamental to the way the program is structured). In effect, the "patches" become more and more about fixing faults rather than providing the functionality intended in the first place. Whereas with open source, someone might have already just forked my project and coded the idea using different data structures or in a largely more efficient way.

It's not to say that I couldn't be flawless, but the odds decrease when nobody can see the results. Using closed-source software is like running a car without access to the engine. You see things going wrong, but why and how they are happening - whether they are huge problems or only small ones - you can't determine without diving into the car's components directly. Closed source doesn't allow this. It's not just the fact that there are multiple eyes, then; it's the fact that those eyes are outside the original coder, sometimes even being the people having the problems themselves. It takes the "how do we recreate the bug?" discussion out, and oftentimes a sufficiently skilled end user can not only support themselves, but improve the codebase.

Honestly, it seems like a better approach. The hard part is that you can't really know which is more secure. But in practice, let's be honest: widely used Linux and OSS projects get fixed more quickly than MS products, with their "Patch Tuesday" release schedule and their recommended strange workarounds for existing security breaches.

Re:The Quantity of the Eyes Isn't Always The Issue (1)

Tony Hoyle (11698) | more than 7 years ago | (#17912784)

I think, however, the "open source is more secure" argument tends to follow the idea that, behind the scenes, the code under closed-source applications tends to be generally faulty,

Having worked for many closed-source companies, I believe this to be generally true (scarily, with no exceptions I've seen... although I believe they must exist). Deadlines are king, and they really don't care whether the code is crap and will fall apart in a couple of years' time... they want to get something out of the door *now*.

In OSS, deadlines normally don't exert the same pressure, and bad code is less common (although I've seen some appalling OSS code in my time).

I'm prepared to believe MS have cleaned up internally and their code is better than most, but the deadline pressure is there (as can be seen from some of the more obvious Vista bugs they didn't get time to fix).

The quality of the unknown eyes is what matters (2, Insightful)

jd (1658) | more than 7 years ago | (#17912654)

With closed source and "security through obscurity", you do not know - nor have any means of knowing - who is examining the code, their qualifications, their abilities or their resources. The same is equally true of open source. The difference is that, for closed source, you eliminate your ability to either compensate for, or exploit, this unofficial work. It will happen - code is stolen all the time, even from companies as closed-up as Cisco - but even to acknowledge it could cause irreparable harm. The number of well-publicized cases is very small, compared to the number of cases that are shown later to have happened.

Closed source, then, offers no meaningful protection to the companies involved. Precisely because they have no objection to stealing from competitors, corporations who rely on trade secrets and security through obscurity invalidate the very model they are based upon. If you work on the basis of all people being corruptible, you cannot also work on the basis of people not being corruptible. If you abuse the trust of others, you will inevitably be subject to the abuse of trust.

Open source doesn't guarantee that the eyes looking at the code are of any particular quality, or that they'll give information back, or that they won't steal the code anyway. But at least you know the possibilities and accept them, you don't pretend they don't exist.

In the end, the difference between the two models is that one deludes the managers into believing they have something nobody else has. Open Source has its own delusions - that the developers can do a damn thing if a corporation takes the code, patents it, and sues said developers into oblivion, for example. One could argue that both are virtually unsurvivable disasters and that you might as well go for the one that gets you the money and the groupies. On the other hand, the reality is that programmers don't make money (managers do) and the last geek known to have had groupies was Socrates.

Re:The Quantity of the Eyes Isn't Always The Issue (1)

AnonymousRobin (1058634) | more than 7 years ago | (#17913016)

The flaw is that you'd be putting it in front of 1 million six-year-olds who'd have roughly the same experience and education level. The idea is that, given enough eyeballs, chances are one of them will be trained in a particular way that makes the bug shallow. It isn't about skill level as much as diversity in experiences, backgrounds, and perception. Because I'm an underwater basket weaver, I may see a particular problem in a certain light that makes it obvious to me, even though Bob - a Ph.D. in everything except underwater basket weaving and a Nobel Prize winner - couldn't see it, simply because he's got a different background. It's also possible that a six-year-old who knows nothing about computers may see a way to improve something that nobody who didn't think like a six-year-old would ever consider. Not as likely, but still possible, and throwing another six-year-old into the mix won't hurt.

I think that's the point of the statement - that it is less about the difficulty of bugs and more about how you see them. It asserts that all bugs CAN be shallow, and that with enough people, you greatly increase your chances that someone sees a given bug in a way that makes it shallow.

Re:The Quantity of the Eyes Isn't Always The Issue (1)

wirelessbuzzers (552513) | more than 7 years ago | (#17915372)

Now, those highly trained eyes may be looking at the open source code, or they may not. All I'm saying is that the quote "Given enough eyeballs, all bugs are shallow" is not particularly accurate.
Yes. There are experienced eyes on it, though: security researchers'. One of the most common types of papers in systems security research is automated bug finders, and one of the standard metrics of bug-finding is "how many bugs can you find in the Linux kernel?"

Of course, in many cases, proprietary developers adapt these bug-finders to work on their software, and so they benefit too. Microsoft has an in-house bug-finding system which is pretty effective at eliminating buffer overflows and similar attacks. But despite all this, high-profile open-source projects get the most attention.

Well... (5, Funny)

Zebra_X (13249) | more than 7 years ago | (#17909872)

While Ford notes that "there is no better way to start an argument among a group of developers than proclaiming Operating System A to be 'more secure' than Operating System B,"

Unless of course Operating System A is OpenBSD ;-)

Re:Well... (-1, Troll)

leenks (906881) | more than 7 years ago | (#17910386)

And unless Operating System B is even more crippled (hard to imagine, I know, but maybe it's possible).

Re:Well... (1, Funny)

mangaskahn (932048) | more than 7 years ago | (#17912096)

I'd say an operating system is very secure when it's dead!

does a password = security through obscurity? (1)

TubeSteak (669689) | more than 7 years ago | (#17909922)

FTFA - For example, passwords are the perfect example of "acceptable" security through obscurity: they are useful only if the attacker doesn't know them.

I would have thought that the password authentication method was the part that needed to be secured.

Just look at how many times an auth method has been exploited to bypass passwords entirely.

Re:does a password = security through obscurity? (1)

99BottlesOfBeerInMyF (813746) | more than 7 years ago | (#17910336)

I would have thought that the password authentication method was the part that needed to be secured.

Let's see: for today, a given /24 has on average 57 ongoing SSH login/password dictionary attacks, making it the 4th most common type of network attack. The obscurity part of this defense is essential, but I'm certainly going to restrict my boxes to allowing SSH attempts from a couple of specific IPs as well. Security through obscurity is a time-tested and vital part of security, but at the same time it had better not be your only security. To get this back on topic (sort of): closed source versus open source is not a question of "is obscurity useful"; it is a question of "does that obscurity lead to greater or lesser security than the many-eyes approach?"
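For illustration, a minimal iptables sketch of that source-IP restriction (the addresses here are hypothetical placeholders, not anything from the parent post):

# Allow SSH only from two trusted addresses; drop everything else to port 22.
iptables -A INPUT -p tcp --dport 22 -s 192.0.2.10 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -s 198.51.100.7 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP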

The Wrong Question (5, Insightful)

ThosLives (686517) | more than 7 years ago | (#17909936)

This debate is all about the incorrect question. The reason is that code can be secure or not secure, regardless of its "open" or "closed" status.

Until the industry realizes that "secure is secure" and stops worrying about the open or proprietary nature of things, this debate will probably prevent things from being as secure as they could be by diverting resources to an analysis rather than any solutions.

Put another way: Is a homemade door more or less secure than a professionally installed door? My answer is "it depends on the skills of those involved and the quality of materials".

The same applies to software.

Re:The Wrong Question (1)

fahrbot-bot (874524) | more than 7 years ago | (#17910178)

Put another way: Is a homemade door more or less secure than a professionally installed door? My answer is "it depends on the skills of those involved and the quality of materials".

Although in this analogy, the homemade door would be built and installed by the homeowner him/herself who also happens to be a door professional doing the work on his/her own time.

In this case, I would argue the homeowner has a higher stake in doing good, secure work as their "personal investment" in a quality job is higher.

Re:The Wrong Question (1)

bberens (965711) | more than 7 years ago | (#17912684)

This is an unusual analogy, because a general rule of thumb says that home-built projects (woodworking and such) tend to be over-built compared to the professional job. The home builder tends to put more nails/glue/braces in place than are required for the object's purpose. I doubt this holds true in software development, but it may.

The Wrong Analogy (1)

lheal (86013) | more than 7 years ago | (#17910420)

Is a homemade door more or less secure than a professionally installed door? My answer is "it depends on the skills of those involved and the quality of materials".

The real issue is whether the house to which that door allows access is more secure if you publish its plans or not.

That is hard to answer, because you don't know if the homeowner is relying on the secrecy for security, or just wants to sell house plans. If the homeowner thinks his house is safer because no one can open his door without the plans, then he is trusting in security through obscurity.

An inherent flaw in these physical analogies is that they subconsciously tie us to the physical properties of the analogs. Who wants to be replacing doors all the time? I'd rather build it once and not tell anyone how many shims I had to use. But software is generally easier to fix than a door, relative to its environment - or it should be.

Re:The Wrong Analogy (1)

Salsaman (141471) | more than 7 years ago | (#17910558)

Your analogy is also wrong!

A better one would be, is your house more secure if you publish the blueprints and photos of it online, and allow any architect or security specialist in the world to view them, suggest changes, and if you like the suggestions, they will come to your house and carry out the work for you (often for free).

On the other hand, every thief in the world can also study those blueprints and photos...

Are oranges more wholehearted than Hondas? (1)

Beryllium Sphere(tm) (193358) | more than 7 years ago | (#17912748)

Yes. Not only is it the wrong question, it doesn't even make sense.

Open source and closed source are methods, security is a result. Security is an attribute of a product, not of a development technique. A closed-source company can assign a hundred reviewers and get more trained eyeballs on their code than most open source projects ever see.

If you want to measure results, there's so much scatter from other causes that any effects of open vs. closed are swamped in the noise. Which would you pick as an example of open source security -- OpenBSD, or sendmail? Which would you pick as an example of closed source security -- VMS, or Internet Explorer?

If you've made all your other security-related decisions and then decide whether to publish source code, one thing to consider is how motivated the attackers are. The crypto community is wedded to published algorithms because they have to face attackers with national budgets who can hire people like Alan Turing. Vertical market software for running the environmental controls in a chicken coop doesn't need, and won't get, worldwide peer review.

There's also the design-to-failure argument. Secrecy is fragile and temporary, and repairs are difficult when it's lost.

Security by Obscurity (3, Interesting)

$RANDOMLUSER (804576) | more than 7 years ago | (#17909964)

Is always a good first line of defense. At least it keeps out the riff-raff. Until someone smarter writes the scripts for them.

Re:Security by Obscurity (0)

Anonymous Coward | more than 7 years ago | (#17913164)

G0t Assembly?

I have a pre-canned explanation of open vs closed (4, Insightful)

Rosco P. Coltrane (209368) | more than 7 years ago | (#17910062)

Closed security: the Titanic is unsinkable - White Star line
Open security: the Titanic's hull is made of brittle metal and thus isn't safe - Independent safety inspector

*applause* (1)

RAMMS+EIN (578166) | more than 7 years ago | (#17910734)

That was very insightful. Thanks!

open how? (1)

Anonymous Coward | more than 7 years ago | (#17910072)

Open to the point of letting people know one's password?

I don't think that works.

Algorithm? Maybe.

It comes down to this: from bad guys, it should be as "closed" as possible. From good guys, it should be as "open" as possible. Because the good guys are likely to tell you the flaws in the system, whereas the bad guys aren't.

As a symptom of society in general becoming more and more suspicious of each other, what is getting adopted is the worst of both the closed and open models: one that persecutes security researchers (the good guys) for finding vulnerabilities. Furthermore, it is fast becoming a crime to warn your friends that a particular piece of software may be easily compromisable.

Reverse engineering must be legal. Warning people of vulnerabilities must be legal.

My Take (4, Interesting)

RAMMS+EIN (578166) | more than 7 years ago | (#17910134)

The same old argument for openness applies to open source. You have to assume the black hats will find and try to exploit vulnerabilities. Without that assumption, there isn't much to worry about. But given that the black hats will find vulnerabilities and use them, the best thing we can do is to make sure the white hats find the vulnerabilities, too. This way, the vulnerabilities can be fixed or worked around (e.g. through firewalls). The vulnerabilities exist whether or not you know about them, but, if you know about them, you can take adequate measures. Open source makes it easier to find vulnerabilities, and thus, to know about vulnerabilities.

Of course, open source also makes it easier for the black hats to find the vulnerabilities. So there's an arms race here. If the black hats find the vulnerability first, they can exploit it before it gets patched or worked around. If the white hats find it first, it can be fixed or worked around before it is exploited. The same arms race exists for closed source and open source, but, in the case of closed source software, the developers are (supposedly) the only ones with the source code, which gives them a slight edge in the arms race.

So it seems that both open source and closed source have advantages and disadvantages when it comes to security. Furthermore, I think that both arguments are theoretical, and the advantages that both models have are not always exploited. Having the source available does not help if no white hats are actually auditing it. And this is why open source wins, in my book. With open source, if you're concerned about vulnerabilities in the software and don't trust the rest of the world to have done proper audits and notified you about the results, you can do your own audit. If the developers of the software don't fix the vulnerabilities to your satisfaction, you can do so yourself. With closed source, you are at the mercy of the vendor. If they don't do proper audits, you're out of luck. If they don't fix vulnerabilities, you're out of luck.

Proprietary software vendors do not always have your best interests in mind. It's not unusual for vendors to keep silent about vulnerabilities found and/or fixed in their software, and some vendors have even threatened or sued people who have disclosed vulnerabilities in the vendor's software. The reputation matters more to them than the _actual_ security of the product, because the actual security is unknowable. With open source, such tactics don't work. The source is out there; anyone can find the vulnerabilities and assess the security for themselves. If things are fixed, anyone can make a diff between the two versions and see what was fixed. They can't keep the information from you. Your security benefits from that.
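That diff-between-releases check takes only standard tools; a quick sketch, where the project name and version numbers are hypothetical:

# Unpack two consecutive releases and see what was quietly changed.
tar xzf foo-1.0.tar.gz
tar xzf foo-1.1.tar.gz
diff -ru foo-1.0 foo-1.1 | less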

You have to take out the weakest link (1)

Bullfish (858648) | more than 7 years ago | (#17910190)

Which, regardless of operating system, is the interface unit located between the chair and keyboard. That interface can bring the most secure system to its knees.

security through obscurity just another layer (4, Insightful)

straponego (521991) | more than 7 years ago | (#17910480)

Okay, let's look at just one service, SSH. Without security through obscurity, you can do things like keep OpenSSH patched, use very good passwords, disallow root logins, restrict logins to certain users (which is kinda security through obscurity, but...)

And on servers I run like that, I have yet to have a break-in, but I do get up to thousands of connection attempts from SSH worms, from the same servers, every day (well, I would if I stopped dropping them in iptables, but never mind that). So it's possible that they could hit a user with a bad password, or one they got from another compromised machine.

On other boxes, like my home box, I put SSH on a high-numbered port. In a couple of years I've had zero attempts hit that port. It would be quite stupid to rely only on this trick, ignoring good discipline in other areas. But as a supplementary layer, it's quite useful. If nothing else, it saves bandwidth.

It's not sufficient, but it's not inherently bad.
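Pulled together, that recipe might look like the following sshd_config sketch; the port number and usernames are hypothetical placeholders:

# /etc/ssh/sshd_config (fragment)
# Non-standard port: obscurity as a supplementary layer.
Port 22222
# Disallow direct root logins.
PermitRootLogin no
# Restrict logins to named accounts.
AllowUsers alice bob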

Re:security through obscurity just another layer (2, Informative)

init100 (915886) | more than 7 years ago | (#17912570)

Without security through obscurity, you can do things like keep OpenSSH patched, use very good passwords, disallow root logins, restrict logins to certain users

Not to mention disabling password logins altogether, and only allowing logins using a key pair (known as public-key authentication in SSH terminology). This makes a password-guessing attack impossible; an attacker must either guess (or obtain in some other way) your private key, or find a security vulnerability in the software itself. This approach is somewhat more cumbersome to administer, but very secure.
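In sshd_config terms, that policy is only a few lines; a minimal sketch, assuming a reasonably recent OpenSSH:

# /etc/ssh/sshd_config (fragment)
# No passwords to guess in a dictionary attack.
PasswordAuthentication no
# Close the interactive keyboard fallback too.
ChallengeResponseAuthentication no
# Require a key pair instead.
PubkeyAuthentication yes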

Re:security through obscurity just another layer (1)

straponego (521991) | more than 7 years ago | (#17912968)

Absolutely, using SSH keys is a great option in many cases. But it carries its own risks - for example, if someone steals your laptop, or has root or access to your account (don't forget to screenlock your machine whenever you step out of sight of it) on a machine which contains your keys.


Security can really be a PITA sometimes.

Re:security through obscurity just another layer (1)

init100 (915886) | more than 7 years ago | (#17918858)

But it carries its own risks - for example, if someone steals your laptop

The key is protected by a good passphrase.

don't forget to screenlock your machine whenever you step out of sight of it

I already do, since I started using *nix in 1998.

Re:security through obscurity just another layer (1)

Atlantis-Rising (857278) | more than 7 years ago | (#17915672)

Both options rely on security through obscurity, however. Why else would you be trying to hide your private key?

The only difference anywhere is how abstract your obscurity is.

Re:security through obscurity just another layer (0)

Anonymous Coward | more than 7 years ago | (#17913106)

Like 443?

Re:security through obscurity just another layer (1)

straponego (521991) | more than 7 years ago | (#17913210)

SSH on 80 and 443 are handy for getting to your box from firewalled sites. Some governmentish sites actually allow telnet and ftp but not ssh. But nah, I was thinking of 19831. Nobody ever guesses that one. It's my little secret.

...crap!

Re:security through obscurity just another layer (1)

KozmoStevnNaut (630146) | more than 7 years ago | (#17917584)

Port 80 gets lots of interesting requests, mostly for phpBB vulnerabilities, apparently...

Port 443 only gets the occasional "Bad protocol version identification '\026\003'", which is logically an HTTPS request ('\026\003' is the first two bytes of a TLS handshake record).

443 has the added bonus that traffic through it is expected to be encrypted, so SSH there probably won't raise as much of a red flag as it would on port 80.

sploit!=patch (1)

Watson Ladd (955755) | more than 7 years ago | (#17910732)

It's easy to find a buffer overflow in a closed source program by looking for strcpy calls with a debugger. It's a lot harder to fix them in a closed source program, though, as you have no idea what to fix. The attacker doesn't need to understand the program to attack it; he just needs to understand a small part of it. The defender needs to understand all of it to patch it. Look at the CTSS bug involving a race condition and the system editor: the attacker just waited, and then he got the password file. Finding out what had happened was a lot harder.
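
To make the pattern concrete, here's a contrived C sketch of the kind of strcpy call an attacker greps a binary for, next to one conventional fix (an illustration, not the CTSS bug itself):

    #include <stdio.h>
    #include <string.h>

    /* The classic pattern an attacker hunts for: strcpy() into a
       fixed-size stack buffer with no length check. */
    void vulnerable(const char *input)
    {
        char buf[16];
        strcpy(buf, input);                      /* overflows buf if input holds 16+ chars */
        printf("%s\n", buf);
    }

    /* One conventional fix: bound the copy and guarantee NUL termination. */
    void safer(const char *input)
    {
        char buf[16];
        snprintf(buf, sizeof buf, "%s", input);  /* truncates instead of overflowing */
        printf("%s\n", buf);
    }

    int main(void)
    {
        safer("this string is longer than sixteen bytes");
        return 0;
    }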

My light fixtures are safe, really, trust me. (1)

140Mandak262Jamuna (970587) | more than 7 years ago | (#17910852)

Say I make these light fixtures. You can screw in a bulb, but you can't see the insides: its design, how close the leads come together, the quality of the materials used, the quality of workmanship, etc. No independent certifying agency like Underwriters Laboratories has seen it. No consumer advocacy group has tested it. But I state solemnly that "to the best of my knowledge and belief, it is safe". All my employees in the Quality Assurance Department (whose jobs depend on my ability to sell this gizmo) state sincerely that this product is safe. This should be enough, right?

Why is it that people are debating closed versus open software?

Re:My light fixtures are safe, really, trust me. (2, Informative)

ninja_assault_kitten (883141) | more than 7 years ago | (#17910950)

Closed doesn't mean nobody has seen it. MS, for example, gives its source code to many 3rd parties for review and analysis. If source code is subject to extensive 3rd-party review, closing it to the general public adds an additional layer of security. Security through obscurity may not be a great standalone security model, but as part of security in depth it can be. It should be used as one of many layers.

Re:My light fixtures are safe, really, trust me. (1)

140Mandak262Jamuna (970587) | more than 7 years ago | (#17911046)

True. I agree with your point. But submitting the source for independent third-party analysis and certification should be mandatory, like it is for child car seats and light fixtures. That is my point. Maybe I did not say it right.

Re:My light fixtures are safe, really, trust me. (1)

nasch (598556) | more than 7 years ago | (#17921790)

But submitting the source for independent third-party analysis and certification should be mandatory, like it is for child car seats and light fixtures.

Why? Those certifications are there to make sure the product is safe: it won't burn your house down, or it will keep your child safe in an accident. They don't test anything else, such as whether the light fixture is attractive or shows dirt, or whether the car seat is easy to use or comfortable. If my software needs to be safe (I could get hurt if it malfunctions), then I agree, it should be certified. Personally, I don't run any software like that, and I don't want to pay an extra $50 for it because the developer passes the mandatory certification cost on to me. Power plant software? Flight control? Sure. Tetris? A word processor? No thank you.

3rd parties like the Chinese government (1)

r00t (33219) | more than 7 years ago | (#17917492)

In court, Microsoft claimed that exposing their source would endanger national security.

A couple of years later, after the trial was over, Microsoft gave in to Chinese government demands for the source code.

You really think that this kind of 3rd-party review is good? Hint: it is highly unlikely that the Chinese government would report any interesting discoveries back to Microsoft.

Re:My light fixtures are safe, really, trust me. (2, Informative)

TheLink (130905) | more than 7 years ago | (#17916030)

In my experience there is no big difference between the security of closed and open software.

1) Even if the source code is available for people to check, if nobody but the author bothers checking, there's no difference, right?
2) It's the quality of the checking, not the quantity. A billion stupid monkeys won't know the difference between good code and bad code.

What you should do is see who made the stuff and what their track record is like.

I can confidently say Firefox will continue to have regular security bugs for years, and that any claims that it is far more secure than IE are hype. The fact that it is written in an unsafe language and crashes regularly means it has both code-quality issues and security issues. You don't even need to look at the source to tell.

It seems that there are fewer than 10 people in the world who know how to code safely in C (or C++) AND actually do it.

I'm definitely not one of them.

What is INSECURE (1)

davidwr (791652) | more than 7 years ago | (#17911544)

If you have a large project that only the developers and the bad guys bother to examine closely, it's LESS secure than a similar project with many white-hat eyeballs on it and LESS secure than a similar project with only the developers looking at it.

This assumes the code has security-related bugs that are exploitable if found by the bad guys. It also assumes that the development team, despite their best efforts, doesn't find all the bugs that the bad guys could find if they had access to the source code.

Without the source code, the bad guys can find and exploit bugs, but their job is a lot harder.

Here's an example of how this might come into play in the real world:

I develop a set of CGI scripts to support e-commerce and publish them as open source. They don't get very popular, and I'm pretty much the only one using them. Then a well-known company deploys the scripts, and some black hats recognize them. They study the source code, find an exploit, and harm the company that's using my scripts.

If the scripts had been kept closed source, OR had become popular enough to have widespread community bug-squashing, the risk to the end user would have been much less.

Re:What is INSECURE (0)

Anonymous Coward | more than 7 years ago | (#17914760)

1) If the company got your scripts, they'd need to be open source. If they were closed source, chances are that, because of your lack of popularity, you'd already be out of business and the company would never have gotten to use your scripts.

2) Your example of obscure OSS getting exploited just demonstrates that everything you need to know you learned in Kindergarten: when going out into the world, it's best to hold hands (use tested-and-true stuff) and look both ways before crossing the street (why weren't the scripts run in a jailed environment to isolate any misbehavior? Why weren't the transactions ordered by the script sanity-checked by the bank? My bank calls me if I have any transaction greater than X dollars or Y number of transactions over Z amount of time.)

I don't think you checked YOUR arguments for security holes, personally.

Software Engineering is a young discipline? (1)

chrism238 (657741) | more than 7 years ago | (#17912718)

The article concludes that "Software Engineering is a young discipline". The term was first coined in 1961, so I'd like to suggest that only recently have many agreed on what software engineering actually is, and how it should be undertaken.

Open security has to be more secure (4, Insightful)

192939495969798999 (58312) | more than 7 years ago | (#17913338)

If you can't prove it is secure by showing me how it works, then it's not secure. How do I know that there isn't some bolt in the back of the bank vault, or some skeleton key, unless you allow me to inspect it myself?

Security by faith or by fact, which would you prefer?

The strength of open source is also its weakness (0)

Anonymous Coward | more than 7 years ago | (#17913708)

The strength of open source software tends to be its weakness when it comes to security. The problem with OSS is that successful projects tend to grow at an alarming rate, with new features being added by the minute (OK... a bit of an exaggeration) without the full ramifications of each new feature being evaluated and understood. That is the very enemy of software security.

When was the last time YOU did a security audit of the Linux kernel, of Apache, or of MySQL? Even if you wanted to do a full audit of one of these projects, by the time you finished it, new features would have been added, taking away some of the relevance of your audit.

On the other hand, in a closed-source environment like most big software houses, new features are added at a slower rate, due to the layers and layers of management that usually have to be traversed before a new feature is finally approved for inclusion in the next release. This gives code reviewers a bit more time to evaluate those features and determine their impact on the overall security of the product. Furthermore, large corporations do tend to take security a bit more seriously these days, and are more likely to include the security team at each phase of development, because CEOs have finally understood that a security vulnerability in one of their wares can undermine the entire reputation of the product and the company. This is especially true for startups.

The bottom line: sure, small OSS projects are easier to audit, as they are small in code size and new functionality is added at a more reasonable rate. But when a project attracts a larger pool of contributors, some of whom are not educated in software security, things can get messy pretty quickly.

Who writes it? For whom? (1)

turing_m (1030530) | more than 7 years ago | (#17914052)

If I can't see the code myself, I am forced to trust that the vendor has refrained from inserting a backdoor in the code. As for third party audits, I trust them as much as I would trust Microsoft to hire an impartial third party to determine whether a new Office version actually increases productivity.

I don't care how many pictures of keys, keyholes, locks, policemen, security guards, castles, gates or agents in glasses the website hawking the product has, how high it ranks on cnet, how many recommendations it gets by editorial staff in magazines, or how many times superlatives ("military grade", "256 bit", "tinfoil hat", "for the ultra-paranoid"), are used in conjunction with the word "security" in a review or the product description. IF I CAN'T SEE THE CODE, I DON'T TRUST THE APPLICATION. PERIOD.

The next level above that is code that I can see - typically open source. At least then it is theoretically possible that someone could get caught inserting a backdoor, with resulting impact on their reputation. Compiling it yourself should be more secure than using something compiled by someone else. One should also consider who is writing it, and who has provided funds to write it. Should I trust them?

Above that is open source code that someone I trust has audited or written.

And above all is code that I have personally written.

Obviously there are trade-offs to be made (usually the only software available to me for my budget is either commercial or open source), but that's how I do the ranking.

Maybe it's time to re-read the classic "Reflections on Trusting Trust". http://www.acm.org/classics/sep95/ [acm.org]

Security = obscurity (1)

RomulusNR (29439) | more than 7 years ago | (#17914818)

All security *is* obscurity.

Just as all humans are ultimately cellular organisms, and all substances are ultimately subatomic particles. Security is the art of keeping something hidden by requiring something else that is hidden to reveal it, applied over and over in various distinguishable implementations.

The lock on a door is only as secure as the secret of where its key is. Discover this secret, and act upon it, and the secret of the door is revealed.

Likewise, my encrypted email is only as secure as the secret of the contents of my secret key (which is only as secure as my login), and my passphrase.

Even a biometrically secured system is only as secure as the secret of where the user's body is and how to get it to the scanner.

I used to join in on the laughter of "security through obscurity". Then I realized how much of security really is just obscurity, and how it was often not much less practically effective than "real" security. Then I saw that this is because they are ultimately the same, merely in various complexities.

Re:Security = obscurity (2, Funny)

TheMeuge (645043) | more than 7 years ago | (#17916570)

I think you've introduced a new concept here - security through incomprehensibility.

closed source graphs (1)

SaberTaylor (150915) | more than 7 years ago | (#17919306)

The text labels on those graphs are illegible.

passwords are another kind of obscurity (1)

Walter Carver (973233) | more than 7 years ago | (#17919638)

For example, passwords are the perfect example of "acceptable" security through obscurity: they are useful only if the attacker doesn't know them.
But a password is data, and data can and should remain closed. When we talk about security through obscurity, we refer to the procedure: the executable code, the algorithm, what the hell the software does and how it does it.

I think that being dependent on the software vendor outweighs any advantage (if there is any) that closed source may have.