
OpenSSL: the New Face of Technology Monoculture

Soulskill posted about 6 months ago | from the relied-upon-to-a-fault dept.

Security 113

chicksdaddy writes: "In a now-famous 2003 essay, 'Cyberinsecurity: The Cost of Monopoly,' Dr. Dan Geer argued, persuasively, that Microsoft's operating system monopoly constituted a grave risk to the security of the United States and international security, as well. It was in the interest of the U.S. government and others to break Redmond's monopoly, or at least to lessen Microsoft's ability to 'lock in' customers and limit choice. The essay cost Geer his job at the security consulting firm AtStake, which then counted Microsoft as a major customer. These days Geer is the Chief Security Officer at In-Q-Tel, the CIA's venture capital arm. But he's no less vigilant of the dangers of software monocultures. In a post at the Lawfare blog, Geer is again warning about the dangers that come from an over-reliance on common platforms and code. His concern this time isn't proprietary software managed by Redmond, however, it's common, oft-reused hardware and software packages like the OpenSSL software at the heart (pun intended) of Heartbleed. 'The critical infrastructure's monoculture question was once centered on Microsoft Windows,' he writes. 'No more. The critical infrastructure's monoculture problem, and hence its exposure to common mode risk, is now small devices and the chips which run them.'"


Is anyone surprised? (5, Informative)

TWX (665546) | about 6 months ago | (#46828421)

We already established that often corporations will use free software because of the cost, not because they're enthusiasts, and often those that are enthusiasts for a given project are specifically interested in that project only, not in other projects that support that project.

Besides, it's disingenuous to claim that no one knew that there were potential problems, the OpenBSD people were not exactly quiet about their complaints about OpenSSL. Of course, rather than considering their complaints on their merits, they were ignored until it blew wide open.

Re:Is anyone surprised? (1, Funny)

Anonymous Coward | about 6 months ago | (#46828467)

We already established that often corporations will use free software because of the cost, not because they're enthusiasts, and often those that are enthusiasts for a given project are specifically interested in that project only, not in other projects that support that project.

Besides, it's disingenuous to claim that no one knew that there were potential problems, the OpenBSD people were not exactly quiet about their complaints about OpenSSL. Of course, rather than considering their complaints on their merits, they were ignored until it blew wide open.

B-b-b-b-but the many eyes of open source makes all bugs shallow.

Re:Is anyone surprised? (-1)

Anonymous Coward | about 6 months ago | (#46828811)

B-b-b-b-but the many eyes of open source makes all bugs shallow.

The eyes are busy looking at porn online.

Heartbleed was very shallow, fixed as soon as iden (5, Interesting)

raymorris (2726007) | about 7 months ago | (#46829003)

I guess you're not a programmer, and therefore don't know what a shallow bug is. Conveniently, the rest of the sentence you alluded to explains the term:

"Given enough eyeballs, all bugs are shallow ... the fix will be obvious to someone."

If you have to dig deep into the code to figure out what's causing the problem and how to fix it, that's a deep bug. A bug that doesn't require digging is shallow. Heartbleed was fixed in minutes or hours after the symptom was noticed - a very shallow bug indeed. "The fix will be obvious to someone."

The presence or absence of bugs is an orthogonal question. That's closely correlated with the code review and testing process - how many people have to examine and sign off on the code before it's committed, and if there is a full suite of automated unit tests.

The proprietary code I write is only seen by me. Some GPL code I write also doesn't get proper peer review, but most of it is reviewed by at least three people, and often several others look at it and comment.

For Moodle, for example, I post code I'm happy with, along with unit tests which exercise the possible inputs and verify that each function does its job. Then anyone interested in the topic looks at my code and comments, typically 2-4 people. I post revisions, and after no one has any complaints it enters official peer review. At that stage, a designated programmer familiar with that section of the code examines it, suggests changes, and eventually signs off when we're both satisfied that it's correct. Then it goes to the tester, and after that, the integration team. Moodle doesn't get very many new bugs because of this quality control process. That's independent of how easily bugs are fixed -- how shallow they are -- which depends on how many people are trying to fix them.

Re:Heartbleed was very shallow, fixed as soon as i (0)

Anonymous Coward | about 7 months ago | (#46830327)

The programmers here claim so few bugs that I'm astounded that software has any remaining bugs at all.

Re:Is anyone surprised? (3, Interesting)

TheRaven64 (641858) | about 7 months ago | (#46830969)

OpenSSL is quite shockingly bad code. We often use it as a test case for analysis tools, because if you can trace the execution flow in OpenSSL enough to do something useful, then you can do pretty much anything. Everything is accessed via so many layers of indirection that it's almost impossible to statically work out what the code flow is. It also uses a crazy tri-state return pattern, where (I think - I've possibly misremembered the exact mapping) a positive value indicates success, zero indicates failure, and negative indicates unusual failure, so people often do == 0 to check for error and are then vulnerable. The core APIs provide the building blocks of common tasks, but no high-level abstractions of the things that people actually want to do, so anyone using it directly is likely to have problems (e.g. it doesn't do certificate verification automatically).

The API is widely cited in API security papers as an example of something that could have been intentionally designed to cause users to introduce vulnerabilities. The problem is that the core crypto routines are well written and audited and no one wants to rewrite them, because the odds of getting them wrong are very high. The real need is to rip them out and put them in a new library with a new API. Apple did this with CommonCrypto and the new wrapper framework whose name escapes me (it integrates nicely with libdispatch), but unfortunately they managed to add some of their own bugs...

Re:Is anyone surprised? (5, Insightful)

Opportunist (166417) | about 7 months ago | (#46831169)

OpenSSL is one great example for what I dubbed "Monkey Island Cannibal security" in my talks (yes, believe it or not, you can actually entertain and inform managers that way, you'd be surprised how many played MI, and even if not that's at least something they can understand). But that whole Monkey Island spiel works as a perfect example for security blunders where one point gets improved over and over because everyone thinks that's the only point it could fail while the rest of the security system gets neglected even though the security problem is obviously there.

For those who don't know MI (or who forgot), there is a moment in Monkey Island where the cannibals catch your figure and lock him up in a hut. You can escape that hut via a loose panel in the wall. Now, every time the cannibals catch you again, the door of the hut gets more and more elaborate and secure, to the point where that bamboo hut has a code lock reinforced steel door befitting a high security vault in the end. Which of course has no effect on your chances to escape since you never pass that door (at least on your way out).

The point is that the cannibals, much like a lot of security managers, only look at a single point in their security system and immediately assume that, since this is their way of entering the hut, it must also be the point where you escape. Likewise, the focus on auditing OpenSSL lies always on the crypto routine, and you may assume with good reason that this is one of the most audited pieces of code in existence.

Sadly, the "hut" around it is less well audited and tested. And that's where the problems reside.

Re:Is anyone surprised? (1)

Bengie (1121981) | about 7 months ago | (#46831781)

B-b-b-b-but the many eyes of open source makes all bugs shallow.

Everyone who attempted to read OpenSSL quickly lost their ability to see and they gouged out their eyes from the pain. OpenSSL is what you call obscurified code.

The bad guys don't have time to master second tier (2, Insightful)

Anonymous Coward | about 6 months ago | (#46828493)

But the rest of us do!

It's a silly argument. Put your eggs in one basket... then guard the basket. Two or three full-time developers don't cut it when there are so many attackers and the motivation is much greater than bragging rights at DEF CON.

True dat (beta sucks) (-1)

Anonymous Coward | about 7 months ago | (#46830017)

especially when many of the attackers are NSA employees. Oh, and some of the openssl developers are also NSA employees.

Might as well pay Michael Jackson to put his mouth on your kid's cock. You know, so nobody else can sodomize him.

Re:Is anyone surprised? (4, Insightful)

Xylantiel (177496) | about 7 months ago | (#46829699)

I would say it wasn't just OpenBSD either -- it appears that everyone was very reluctant to update from 0.9 to newer versions. This tells me that people knew the development practices weren't up to snuff. It's just too bad that it took such a major exploit to kick everyone in the head and get them to put proper development practices in place for OpenSSL. Many eyes don't work if everyone is intentionally holding their nose and looking the other way.

Re:Is anyone surprised? (1)

Antique Geekmeister (740220) | about 7 months ago | (#46831157)

> it appears that everyone is reluctant to update anything, ever

Fixed That For You.

You don't touch core production libraries if you don't have to, for the sake of stable code. And new features, enhancements, or portability changes often hurt the size and performance of otherwise stable code.

Re:Is anyone surprised? (1)

Xylantiel (177496) | about 7 months ago | (#46832063)

Well, I would say that is just evidence of the problem. If updates adversely impact stability that badly, then updates are not being managed/tested properly, which is exactly the problem with OpenSSL. This also brings up another point -- a lot of the stability problems are due to interaction with various other (broken or oddly-functioning) SSL implementations. The correct way to handle that is with rigorous and extensive test cases, not just closing your eyes and not updating.

Re:Is anyone surprised? (0)

Anonymous Coward | about 7 months ago | (#46832543)

systemd?

1st one (-1)

Anonymous Coward | about 6 months ago | (#46828431)

touch base

1st one ever (1)

Anonymous Coward | about 6 months ago | (#46828445)

Please see this related article: http://www.networkworld.com/weblogs/security/003879.html

OSS vs Reality (5, Insightful)

Ralph Wiggam (22354) | about 6 months ago | (#46828447)

In theory (the way OSS evangelists tell you) as a software package gets more popular, it gets reviewed by more and more people of greater and greater competency. The number of people using OSS packages has exploded in the past 10 years, but the number of people writing and reviewing the code involved doesn't seem to have changed much.

Re:OSS vs Reality (-1, Troll)

Anonymous Coward | about 6 months ago | (#46828503)

Still a damn sight better than proprietary vs reality.

Closed and open are equivalent ... (3, Informative)

perpenso (1613749) | about 6 months ago | (#46828789)

With respect to the discovery of heartbleed closed and open are equivalent. The bug was found by testing the binary not by eyes on source code.

That said, proprietary code can be open too. Some proprietary libraries are available with a source license option. You may have to ask; their ads don't necessarily mention the source license option. It confuses some readers.

Re:Closed and open are equivalent ... (0)

Anonymous Coward | about 7 months ago | (#46828973)

Yeah, no one tested it with the source before going against the binaries.

Are you fucking high?

Re:Closed and open are equivalent ... (0)

Anonymous Coward | about 7 months ago | (#46829107)

Um, that's exactly what happened.

Re:Closed and open are equivalent ... (4, Informative)

perpenso (1613749) | about 7 months ago | (#46829493)

Yeah, no one tested it with the source before going against the binaries. Are you fucking high?

No, I merely read the account written by the folks who found heartbleed. It was automated testing of a live system. Closed or open source happens to be irrelevant for this particular discovery.

"We developed a product called Safeguard, which automatically tests things like encryption and authentication," Chartier said. "We started testing the product on our own infrastructure, which uses Open SSL. And that's how we found the bug."
http://readwrite.com/2014/04/1... [readwrite.com]

Re:Closed and open are equivalent ... (2)

Bengie (1121981) | about 7 months ago | (#46831801)

Two different companies found and patched Heartbleed within one day of each other, without any contact between the two companies. Others speculate that the two companies were investigating a security breach.

Really, what is the chance that two independent companies with no interactions manage to find the same two-year-old bug within 24 hours of each other?

Re:Closed and open are equivalent ... (1)

greg1104 (461138) | about 7 months ago | (#46830123)

That said, proprietary code can be open too.

No, it can't, by definition. If it's not available to everyone, then it's not "open" in this context. To quote the OSI [opensource.org] , "Open source software is software that can be freely used, changed, and shared (in modified or unmodified form) by anyone". The essential missing part here is that sharing must be allowed, and the sort of commercial arrangements that get you source to proprietary code don't allow that.

Proprietary software that makes source available to customers has some of the properties of free software. But since it can't satisfy all of the GNU project's four freedoms [gnu.org] , it's not appropriate to refer to those products as free software either.

Re:Closed and open are equivalent ... (1)

perpenso (1613749) | about 7 months ago | (#46830685)

That said, proprietary code can be open too.

No, it can't, by definition. If it's not available to everyone, then it's not "open" in this context.

It absolutely can be open. You can retain full ownership and control of your source code and still let your users have access to it.

Sorry, hit submit not continue ... (1)

perpenso (1613749) | about 7 months ago | (#46830703)

Sorry, I clicked "submit" when I meant "continue editing" ...

That said, proprietary code can be open too.

No, it can't, by definition. If it's not available to everyone, then it's not "open" in this context.

It absolutely can be open. You can retain full ownership and control of your source code and still let your users have access to it.

To quote the OSI [opensource.org], "Open source software is ... GNU project's four freedoms [gnu.org] ...

Good thing I didn't say proprietary software is FOSS, merely that it can be open. Sorry, but OSI and GNU don't get to redefine the word open.

And you don't get to move the goalposts. This discussion is about inspecting source code for bugs, and in this sense proprietary can be as open as FOSS.

Re:Closed and open are equivalent ... (1)

Opportunist (166417) | about 7 months ago | (#46831189)

In my experience, the main difference between open and closed source is the NDAs I'm bound with. Or rather, the effects such an NDA can possibly have.

In a CSS audit, the NDA will invariably include "and do not hand over any kind of source, lest we kill your firstborn", or a variation thereof. If I find something, it depends on the company that ordered the audit whether or not that bug will be even admitted, let alone fixed, and whether that fix will be delivered to everyone or whether they leave it open deliberately because someone wants that "bug" to exist.

In OSS audits such NDAs are rare. Not only because there's little use in telling you not to publish the source code (it's open. Duh), but also because it's trivial for someone to break that NDA without ever being caught. Anything I find can be found by anyone else. It's kinda hard to prove that I pointed you to a bug if you happen to stumble upon the same one I found during the audit, even if the company that ordered the audit wanted to keep it hushed up.

OSS is not by definition better secured. It can be, if people care. Well, we learned that people don't. But one thing remains: it's way harder to hush things up. OSS isn't more secure because more people look at the code. It is because more people can do so, and because you can't simply sweep under the rug what you don't want people to see in your code. The Streisand Effect can only work if people can look.

Re:OSS vs Reality (-1)

Anonymous Coward | about 6 months ago | (#46828533)

In contrast, closed source gets much more reviews when the user base grows. Oh, wait. No it doesn't.

Re:OSS vs Reality (1)

Ralph Wiggam (22354) | about 6 months ago | (#46828607)

That wasn't my point at all. Dan Geer wrote about the dangers of closed source monoculture a decade ago. You will find very little disagreement on this site.

He's now saying that closed source monoculture is bad. I'm saying that, in theory, it's not. If code were actually reviewed by "millions of eyes" as it got more popular, then we could be pretty confident that a package as widespread as OpenSSL would not contain an exploit as brutal as Heartbleed. But in the current situation, where code is more likely to be reviewed by dozens of eyes, Dr. Geer has a valid point.

Re:OSS vs Reality (1)

Ralph Wiggam (22354) | about 6 months ago | (#46828609)

>He's now saying that closed source monoculture is bad.

Doh. Open source, obviously.

Re:OSS vs Reality (1)

gl4ss (559668) | about 7 months ago | (#46831251)

Monoculture in general is bad.

What OpenSSL does should be so well understood that there are numerous libraries doing the same things, and numerous interchangeable implementations of each piece.

But I suppose there was also a "nobody ever got fired for choosing OpenSSL" mentality in effect. And it's still true; there haven't been any stories of anyone getting fired for using it.

Re:OSS vs Reality (2, Insightful)

Anonymous Coward | about 6 months ago | (#46828669)

We're reactive, not proactive - why look for problems if the software is already working?

This is why we missed Heartbleed, because there's no compelling reason to keep working once the product gets a green light. There never will be a compelling reason. The problem has no solution that doesn't involve throwing money at something that will never have a payoff...so we won't ever do it. People don't do things unless there's an observable negative to *not* doing them.

Re:OSS vs Reality (4, Insightful)

Ralph Wiggam (22354) | about 6 months ago | (#46828801)

That is the reality of the situation. In the fantasy land of OSS evangelists, thousands of highly skilled coders are constantly auditing big OSS projects.

Re:OSS vs Reality (0)

turbidostato (878842) | about 7 months ago | (#46829241)

"In the fantasy land of OSS evangelists, thousands of highly skilled coders are constantly auditing big OSS projects."

You do know what a strawman argument is, right?

But now, for a reality check: this bug, while serious, affected maybe a few thousand out of millions of users, and once discovered it was fully disclosed, audited, peer reviewed and patched *because* it was in an open source environment.

Now, please, tell me you can say the same about other closed source products.

Re:OSS vs Reality (1)

Anonymous Coward | about 7 months ago | (#46829353)

Apple's GOTO FAIL, whilst not patched universally, was dealt with extremely quickly on Apple's primary product lines.

I would actually equate the handling of both problems, but one is closed source and the other open. (Since Heartbleed wasn't made public for several weeks.)

"Now, please, tell me you can say the same about other closed source products."

Yes, we can.

Re:OSS vs Reality (0)

Anonymous Coward | about 7 months ago | (#46830699)

Except the Apple "goto fail" bug was in an *open source* Apple component, which is why it was discovered and fixed quickly:

http://opensource.apple.com/source/Security/Security-55471/libsecurity_ssl/lib/sslKeyExchange.c?txt

Proprietary software with closed source tends to sweep bugs under the carpet instead of fixing them. That's painfully obvious after using and developing for proprietary software.

Re:OSS vs Reality (1)

bloodhawk (813939) | about 7 months ago | (#46830891)

The reality is the pool of people competent and knowledgeable enough to review such code from a security perspective is very small, increasing the user base does little or nothing to increase how many people are reviewing the code. This particular bug could have been caught by anyone with even a passing knowledge of security, but most developers wouldn't even know where to start with how to review such code.

Re:OSS vs Reality (0)

Anonymous Coward | about 7 months ago | (#46831303)

OK, let us compare to a closed source SSL package, since you are alluding to saying it is better.

That closed source package not only has its own version of this very exploit still present, but a few thousand other exploits that range from equally bad to much worse.

This is based on the grand total of ZERO lines of code that we have verified as bug-free.

If you wish to claim otherwise, you will need to present at least one line of code for others to verify as bug free, and since the source is closed you simply can't provide that evidence.

The only sensible assumption to make is that every last line of code is an exploit waiting to happen until proven otherwise.

So the fact that this exploit WAS found (as have others, but let's ignore those for now) shows a total score of OSS 1, closed source 0.
Of course the OSS score is higher than 1, but the zero is in fact a zero, so best or worst case, OSS still wins.

Again, if you wish to claim otherwise, you need to provide proof (that you don't have)

Apples and oranges (5, Insightful)

Grishnakh (216268) | about 6 months ago | (#46828469)

With open-source software, a monoculture isn't that bad a thing, as the Heartbleed exploit has shown. When something bad is discovered, people jump on it immediately and come up with a fix, which is deployed very very quickly (and free of charge, I might add). How fast was a fix available for Heartbleed? Further, people will go to greater lengths to make sure it doesn't happen again. Look at the recent efforts to rewrite OpenSSL, and the fork that was created from it.

None of this happens with proprietary software. First off, the vendor always tries to deny the problem or cover it up. If and when they do fix it, it may or may not be really fixed. You don't know, because it's all closed-source. It might be a half-ass fix, or it might have a different backdoor inserted, as was recently revealed with Netgear. What if you think the fix is poor? Can you fork it and make your own that's better? No, because you can't fork closed-source software (and certainly not selected libraries inside a larger closed-source software package; they're monolithic). But the LibreSSL guys did just that in the Heartbleed case.

Finally, monocultures aren't all that common in open-source software anyway; they only happen when everyone generally agrees on something and/or likes something well enough to not bother with forks or alternatives. Even the vaunted Linux kernel isn't a monoculture, as there's still lots of people using the *BSD kernels/OSes (though granted, there's far more installations of the Linux kernel than the *BSDs).

Re:Apples and oranges (0)

Anonymous Coward | about 6 months ago | (#46828633)

With open-source software, a monoculture isn't that bad a thing,...

IMO, all your post has done is explain why an OSS monoculture isn't as bad as a closed-source monoculture.

Re:Apples and oranges (1)

Grishnakh (216268) | about 6 months ago | (#46828799)

Yes, I guess you could say that, but I'd add the qualifier "nearly", or maybe even "remotely".

Re:Apples and oranges (1)

ILongForDarkness (1134931) | about 6 months ago | (#46828897)

Monocultures are a natural result of the need for interop between orgs. Standards form because it is easy to confirm things will work, easy to find employees/volunteers who can use them, they solve a problem well, and the opportunity cost of looking at alternatives will likely be more than any incremental improvement they offer. I agree FOSS is fantastic for turnaround of fixes and being able to confirm the quality of the fix. Closed source can solve the problem, but you might never know.

I think this calls for more monoculture: only build what is differentiated everything else should be common well understood and maintained components.

Re:Apples and oranges (0)

Anonymous Coward | about 7 months ago | (#46829087)

>When something bad is discovered, people jump on it immediately and come up with a fix, which is deployed very very quickly (and free of charge, I might add).

For high profile projects like OpenSSL, sure. For the other 90%, nope.

Re:Apples and oranges (1)

Grishnakh (216268) | about 7 months ago | (#46829115)

The other 90% doesn't run critical infrastructure services.

Re:Apples and oranges (1)

Opportunist (166417) | about 7 months ago | (#46831205)

Because about 100% of the people don't care about the other 90%?

If there's like 10 people who use a software product, it's also just those 10 people who give a shit whether there's a bug in it.

Re:Apples and oranges (1)

phantomfive (622387) | about 7 months ago | (#46829959)

With open-source software, a monoculture isn't that bad a thing, as the Heartbleed exploit has shown. When something bad is discovered, people jump on it immediately and come up with a fix

The reason (a reason) monoculture is still bad with open source is that we don't know when this exploit was discovered. It may have been discovered long before, by malevolent entities, who didn't reveal it because they were exploiting it.

Re:Apples and oranges (1)

Guy Harris (3803) | about 7 months ago | (#46830287)

With open-source software, a monoculture isn't that bad a thing, as the Heartbleed exploit has shown. When something bad is discovered, people jump on it immediately and come up with a fix, which is deployed very very quickly (and free of charge, I might add). How fast was a fix available for Heartbleed? Further, people will go to greater lengths to make sure it doesn't happen again. Look at the recent efforts to rewrite OpenSSL, and the fork that was created from it.

"It" in "it doesn't happen again" being "a monoculture"? If you have a monoculture, a fork destroys it unless a new monoculture forms from the fork (i.e., if the forked-from project loses most of its market share).

Re:Apples and oranges (1)

Grishnakh (216268) | about 7 months ago | (#46831617)

Not really. A fork isn't a completely different product; the two forks share a codebase, which is why the word "fork" is used instead of "rewrite". How much of a monoculture there is depends on how divergent the forks are. Iceweasel and Firefox, for instance, barely diverge at all, whereas X.org and XFree86 are very different at this point (but still not completely different, the core X code is still mostly the same I'm sure).

Re:Apples and oranges (1)

fluffy99 (870997) | about 7 months ago | (#46830379)

With open-source software, a monoculture isn't that bad a thing, as the Heartbleed exploit has shown. ... How fast was a fix available for Heartbleed?

Heartbleed showed that a monoculture, particularly one relying on poorly written and barely reviewed code, is a bad thing, OSS or not. That the source code was fixed so easily just highlights how the heartbeat feature was never properly reviewed or tested, and how people using OpenSSL or incorporating it into their products never questioned it. The many-eyes argument fails when you realize how few qualified programmers looked at the code. Given how widespread OpenSSL is, getting that fix rolled out to all the software and hardware that have it embedded is a nightmare. Just think of the billions being spent to audit and test across enterprise networks, and update all that software.

Sure, OpenSSL will get more scrutiny for a while, but that doesn't fix the underlying fallacy that OSS automatically means quality code, regardless of whether it's commercial, free, or otherwise licensed. Or the fact that OSS projects quite often have a shoestring budget, lower quality programmers, and far less review than closed, proprietary software.

Re:Apples and oranges (3, Interesting)

plover (150551) | about 7 months ago | (#46830465)

I think the bigger problem is that everything about encryption software encourages a monoculture. Anyone who understands security will tell you "don't roll your own encryption code, you risk making a mistake." I would still rather have OpenSSL than Joe Schmoe's Encryption Library, simply because at this time I trust them a bit more. Just not as much as I did.

Another problem is that the "jump on it and fix it" approach is fine for servers and workstations. It's not so fine for embedded devices that can't easily be updated. I'm thinking door locks, motor controllers, alarm panels, car keys, etc. Look at all the furor over the hotel card key system a few years back, when some guy published "how to make an Arduino open any hotel door in the world in 0.23 seconds". Fixing those required replacing the circuit boards - how many broke hotels could afford to fix them, or even bothered to?

The existence of a "reference implementation" of a security module means that any engineer would be seriously questioned for using anything else, and that leads to monoculture. And in that world, proprietary or open doesn't matter nearly as much as "embedded" vs "network updatable".

Re:Apples and oranges (1)

TheRaven64 (641858) | about 7 months ago | (#46831031)

The problems with OpenSSL aren't actually in the crypto parts. libcrypto is pretty solid, although the APIs could do with a bit of work. The real problems are in the higher layers. In the case of heartbleed, it was a higher-level protocol layered on top of SSL and implemented poorly. It was made worse by the hand-rolled allocator, which is also part of libssl (not libcrypto).

Re:Apples and oranges (2)

Lennie (16154) | about 7 months ago | (#46830973)

Let's have another look at what happened:

Almost every vendor which included OpenSSL in their product jumped on this the first day.

Of the vendors, Apple and VMware were the slowest to respond to the Heartbleed bug. What does that tell you?

Companies using OpenSSL should help out (3, Insightful)

Anonymous Coward | about 6 months ago | (#46828473)

I have been a bit surprised that all these companies using OpenSSL (Google, Yahoo, Facebook, etc) haven't ensured that this critical piece of technology is getting the support it needs to be done correctly.

What other critical technologies are these same dependent companies overlooking when investing dollars in Open Source software?

Will be interesting to see what happens going forward.

Re:Companies using OpenSSL should help out (2, Informative)

Anonymous Coward | about 6 months ago | (#46828613)

I have been a bit surprised that all these companies using OpenSSL (Google, Yahoo, Facebook, etc) haven't ensured that this critical piece of technology is getting the support it needs to be done correctly.

Google has made a great number of contributions to OpenSSL.

Re:Companies using OpenSSL should help out (1)

Virtucon (127420) | about 6 months ago | (#46828687)

That's the problem with most FOSS projects: people will use them, but few will support them with either time or finances.

Re:Companies using OpenSSL should help out (1)

VortexCortex (1117377) | about 7 months ago | (#46830655)

Once you realize that SSL is just a big expensive security theater that has never offered any security, [youtube.com] I wouldn't blame them for not giving a fuck about OpenSSL, or web security in general.

Re:Companies using OpenSSL should help out (1)

TheRaven64 (641858) | about 7 months ago | (#46831037)

If that's the take-home that you got from that video, I suggest you watch it again. You clearly missed the point.

"but... but... but..." (1)

bferrell (253291) | about 6 months ago | (#46828507)

It's a best practice... how can it be wrong?

Re:"but... but... but..." (1)

Opportunist (166417) | about 7 months ago | (#46831217)

Hush! Here, dump a few 1000 bucks on getting an ITIL certificate and you'll know why best practice can NEVER be wrong! NEVER!

Is it me, or do certain IT certificates turn more and more into something akin to courses offered by a certain alien-worshipping cult? You pay through the nose for courses of dubious quality, then have to sing their praises in the hope of eventually getting back at least the money you stuffed in...

Monoculture? At 17%? (1)

Anonymous Coward | about 6 months ago | (#46828535)

Maybe I'm missing something but since when is 17-20% market share (the estimates I've heard of the number of affected sites) a "monoculture"? Sure there were some biggies in there, but seems to me diversity worked pretty well in this case.

Re:Monoculture? At 17%? (2)

jrumney (197329) | about 6 months ago | (#46828905)

A large part of the low impact was that the "stable" releases of some widely used Linux distros shipped older versions of OpenSSL, from before the bug was introduced.

17% of sites could be 100% of SSL sites (1)

raymorris (2726007) | about 7 months ago | (#46829037)

Most web sites have no need for SSL/TLS. Therefore, 17% of web sites could mean ALL "secure" sites were affected. OpenSSL might have 90% market share in the sense that 90% of SSL connections use OpenSSL and that could still be 17% of web sites.
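The arithmetic checks out with made-up but plausible numbers (these shares are assumptions for illustration, not measurements):

```python
total_sites = 1_000_000        # hypothetical web
ssl_share = 0.19               # fraction of sites serving SSL/TLS at all
openssl_share_of_ssl = 0.90    # fraction of those built on OpenSSL

affected = total_sites * ssl_share * openssl_share_of_ssl
print(round(affected / total_sites, 3))                # 0.171: "17% of all sites"
print(round(affected / (total_sites * ssl_share), 3))  # 0.9: 90% of SSL sites
```

So a headline figure of "17% of sites" is entirely compatible with OpenSSL dominating the population that actually matters.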

Re:17% of sites could be 100% of SSL sites (0)

Anonymous Coward | about 7 months ago | (#46829093)

Instead of guessing, why don't you finally read the article this statistic came from [netcraft.com]?

Our most recent SSL Survey found that the heartbeat extension was enabled on 17.5% of SSL sites, accounting for around half a million certificates issued by trusted certificate authorities.

Haha (0)

ArchieBunker (132337) | about 6 months ago | (#46828583)

Can't blame Microsoft for this or use the "many eyes" argument either.

Let me tell you ALL, how it really is (-1, Troll)

Anonymous Coward | about 6 months ago | (#46828585)

Open "Sores" Shit Layer screwups = same as those spouted for years here of "Windows != Secure, Linux = Secure" since FINALLY, Linux (Android) is showing the truth of all that /. bullshit. All those "eyes on the code" didn't mean shit (considering most users of Linux don't code, it makes total sense it failed here too). Now, I think it's utterly hilarious you all have to eat your words on that one, lol! You all are either too stupid, or too young, to know that once something becomes the most used on any platform, it will also be the most attacked. Criminals, are criminals. Online botnet masters/malware makers? No different. They don't target 'crowds of 1' & instead, go for the masses (crowded malls, streets, plus other thoroughfares where bigger numbers mean more victims).

Re:Let me tell you ALL, how it really is (1)

fisted (2295862) | about 6 months ago | (#46828653)

Uh, someone seems mad that they have to resort to the ready-for-granny kind of OS. Sweet.

I make Windows run pretty solid (-1)

Anonymous Coward | about 6 months ago | (#46828707)

... as well as secure too -> http://www.bing.com/search?q=%... [bing.com]

* :)

How? Easy as it gets & exists for *NIX variants too: It uses a HIGHLY ESTEEMED tool http://www.computerworld.com/s [computerworld.com] ... [computerworld.com]

(Whose makers have taken a few of MY suggestions to improve it no less)

CIS Tool actually makes it "fun" to do (in a nerdy kind of way) - almost like a performance benchmark software does, albeit, for security instead!

It works!

My uptime, until a couple of weeks ago, ran from 2009, when I first installed Windows 7 64-bit (after the above, of course), through 2014.

APK

P.S.=> Nothing but the truth, & yes - it works. Any modern OS has facilities for making it "security-hardened" in minutes time really... & any of them are NOT nearly setup that way, outta the box/oem-stock...

... apk

Re:Let me tell you ALL, how it really is (0)

Anonymous Coward | about 6 months ago | (#46828861)

Penguins' feathers ruffled by truth. Go figure. From +1 Insightful, to 0, to -1 Troll. If truth = troll, something's amiss.

Security by Obscurity? (0)

Anonymous Coward | about 6 months ago | (#46828591)

It sounds like he is talking about security-by-obscurity and using something different from the norm is "better".

While I now think (I didn't, previously, until the Heartbleed issue) that OpenSSL should have been better maintained, I think it would be worse in general if we all rolled our own implementations (by this I mean not just forking it or compiling it ourselves, but fully writing it from scratch so you know exactly how it is meant to work).

It kind of reminds me of that XKCD comic about standards in that we would end up with so many different implementations to "stay secure" that we might then try to make one implementation that acts like all of them.

This also seems similar to the argument people used for Macs saying they don't get viruses because they aren't used by enough people.

Re:Security by Obscurity? (1)

greg1104 (461138) | about 7 months ago | (#46830169)

Security by obscurity [wikipedia.org] means that the product is secure only when you don't have the source code. The idea is that parts of the security mechanism would be simple to break if only you could see how they are implemented. A simple example is a hardcoded backdoor password in the code. Very hard to just stumble on, trivial to find with source access. Ideally security mechanisms should work equally well whether or not you have their source code, which is security by design [wikipedia.org] .

This is a completely different concept from security by low market share.
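The hardcoded-backdoor case is easy to illustrate with a toy (entirely made-up code, just to show why obscurity is the only thing protecting it):

```python
import hashlib
import hmac

# Legitimate path: a (sketched) per-user password hash table.
USERS = {"alice": hashlib.sha256(b"correct horse").hexdigest()}

def check_login(user: str, password: str) -> bool:
    # The backdoor: a black-box attacker would have to stumble on this
    # exact string, but anyone with source access finds it in seconds.
    if hmac.compare_digest(password, "m4int3nance!"):
        return True
    expected = USERS.get(user)
    actual = hashlib.sha256(password.encode()).hexdigest()
    return expected is not None and hmac.compare_digest(expected, actual)

print(check_login("alice", "correct horse"))   # True: legitimate login
print(check_login("mallory", "m4int3nance!"))  # True: the backdoor
print(check_login("mallory", "guess"))         # False
```

A design that's secure with the source published (keys and hashes secret, mechanism public) has nothing equivalent to lose.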

Re:Security by Obscurity? (1)

TheRaven64 (641858) | about 7 months ago | (#46831057)

No, he's talking about mitigation, which is a well-known security practice. It's not about obscurity - you can have two or more open source implementations, but it's then harder for the same bug to be in both or all.

To give a concrete example, take a look at the DNS root zone servers operated by Verisign. They run a 50:50 mix of Linux and FreeBSD and increasingly a mix of BIND and Unbound. They use a userspace network stack on some and the system network stack on others. If someone wants to take out the root zone, they need to find exploits for each of these systems. A bug that lets you remotely crash a FreeBSD box likely won't affect Linux and vice versa. That gives them a little bit more time to find the fix (they also massively overprovision, so if someone does take out all of the Linux systems then the FreeBSD ones can still handle the load, and vice versa). If someone finds a bug in BIND then the Unbound servers will be fine.

If your web site were running a mixture of OpenSSL and something else, then it would be relatively easy to turn off the servers running OpenSSL as soon as the vulnerability is disclosed and only put them back online when they've been audited for compromises. Of course, it depends a bit on what your threat model is. If a single machine being compromised is a game-over problem, then you're better off with a monoculture (at your organisation, at least). If having all (or a large fraction) compromised is a problem, but individual compromises are fine, then it's better to have diversity.
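That threat-model trade-off is easy to put numbers on. With illustrative, made-up probabilities (p is just a stand-in, not a real-world estimate):

```python
p = 0.1  # hypothetical chance a given implementation has a usable exploit

# Monoculture: every server runs the same implementation, so one
# exploit compromises some server exactly when it compromises all.
p_any_mono = p
p_all_mono = p

# Two independent implementations, 50:50 split across the fleet.
p_any_diverse = 1 - (1 - p) ** 2  # at least one half is vulnerable
p_all_diverse = p ** 2            # both halves vulnerable at once

print(round(p_any_mono, 4), round(p_all_mono, 4))        # 0.1 0.1
print(round(p_any_diverse, 4), round(p_all_diverse, 4))  # 0.19 0.01
```

Diversity makes total wipe-out ten times less likely here, at the cost of nearly doubling the odds that something in the fleet needs emergency patching: exactly the "depends on your threat model" point above.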

Re:Security by Obscurity? (1)

Opportunist (166417) | about 7 months ago | (#46831243)

Security by obscurity is by definition a bad idea. But the conclusion "closed source software (CSS) == SbO" is false. CSS can rely on SbO, but there is no immediate causal link. What keeps you from writing software that is actually secure by design (which would constitute the opposite of SbO) but leaving the source closed? Yes, it could be opened and published without endangering the security of your system, but you decide against it.

That's just as valid.

The fallacy perhaps stems from the fact that SbO must be CSS (for the obvious reason that the security is broken the moment someone gets to see the code). SbO requires CSS; it does not follow that the reverse is also true.

Recognize limitations of volunteer efforts (5, Insightful)

hessian (467078) | about 6 months ago | (#46828659)

I am not anti-volunteer; I spend a lot of my time volunteering.

But you need strong leadership.

Otherwise, everyone does what they want to, which leaves huge holes in the project.

Whether a piece of code is open source or closed source doesn't matter. The quality of the leadership of the team that produces it is vital in both cases.

Re:Recognize limitations of volunteer efforts (0)

Anonymous Coward | about 6 months ago | (#46828895)

I believe the Debian project is entirely volunteer-run, and the organisation only takes care of the money and how to spend it, rather than setting goals on what has to be done when.
And Debian and its derivatives are pretty popular.

Re:Recognize limitations of volunteer efforts (0)

Anonymous Coward | about 7 months ago | (#46828997)

But you need strong leadership???

Strong leadership may get things done in the short term, but it allows damaging contradictions to prevail until they cause havoc on a very large scale. Two examples spring to mind: the First and Second World Wars. Strong leaderships put nations between a rock and a hard place, set the world economy back 20 years, and resulted in well over 50 megadeaths.

Re:Recognize limitations of volunteer efforts (0)

Anonymous Coward | about 7 months ago | (#46829441)

Strong leadership may get things done in the sort term but it allows damaging contradictions to prevail...

This is the definition of weak leadership.

Strong leadership ensures that contradictions do not occur; because a single person is making decisions. Your world war examples are examples of weak leadership; where decisions were not in the hands of any single leader (due to treaties and pacts that limited their leadership choices).

Strong leadership is leadership where contradictions are decided decisively by a single person; there is no "contradiction" allowed to prevail, as soon as a contradiction is identified, a strong leader can resolve the entire contradiction by being the only one with the authority to resolve it.

Linus Torvalds is an example of this.

Weak leadership revolves around committees requiring extensive debate and a culture that attempts to acquiesce to everyone so that no one feels "hurt" by a decision. It is exactly this leadership style that allows contradictions to linger (because everyone must agree, no single decision can be entirely detrimental upon any actor - they would never agree to that!).

Weak leadership allows short-term things to get done (because everyone agrees, things start to "happen"), but oftentimes contradictions are a consequence.
Strong leadership often sees short-term goals missed (because some parties are negatively impacted by certain decisions) but long-term success (because contradictions are resolved decisively as early as possible, reducing total wasted effort).

That isn't to say Strong is a more resilient leadership model.

Personally I think a hybrid model is the best, purely strong models require extremely intelligent (in EVERY discipline) individuals to attain the "leadership" role, in order for results to be positive. Since it is impossible to have a "super" person in leadership (no one is flawless) human mistakes are made.

In purely weak models (communism) the inefficiencies of the system (appeasing all stake holders) result in a huge waste of resources.

Obviously a middle ground is the best solution.

On the nose (1)

hessian (467078) | about 7 months ago | (#46832349)

This is the definition of weak leadership.

He pointed to weak/absent leadership and tried to use it as an argument against the need for strong leadership.

Re:Recognize limitations of volunteer efforts (3, Interesting)

greg1104 (461138) | about 7 months ago | (#46830283)

Self-organization [wikipedia.org] is a perfectly reasonable way to run a project. It has several properties that are useful for geographically distributed open source projects, like how it avoids a single point of failure. You can't extrapolate global leadership maxims from the dysfunction of local groups you've been involved in. I'd argue that an open source program that requires "strong leadership" from a small group to survive is actually being led badly. That can easily breed these troublesome monocultures where everyone does the same wrong thing.

I think the way Mark Shuttleworth organizes Canonical is like the traditional business role of a "strong leader". That's led to all sorts of pissed off volunteers in the self-organizing Debian community. Compare that against the leadership style of Linus Torvalds, who aggressively pushes responsibility downward toward layers of maintainers. The examples of Debian and Linux show volunteers can organize themselves if that's one of the goals of the project.

Nope (1)

Eskarel (565631) | about 7 months ago | (#46830715)

No, it isn't, at least not without really exceptional leadership.

Linus holds himself to exceptionally high standards of work, standards which he expects everyone else who commits to the kernel to also adhere to. He's also a complete and total asshole and will think nothing of publicly chastising anyone who doesn't. Self Organisation works for the Linux kernel because for one, only the very best of the best are actually allowed commit privileges and for another anyone who fucks up or gets slack will be caught and will be punished. This means that the people self organize to do the right things.

Debian doesn't write software, at least not much anyway, so they don't really count in the same way. I'd also hazard a guess that the core of Debian is actually rather tightly organised, with package maintainers largely being self-organised.

Specious Argument (3, Insightful)

Nethemas the Great (909900) | about 6 months ago | (#46828727)

I'm not sure it's a valid argument. The probability of errors that may be found in a given system is proportional to the complexity of that system. Likewise, the cost to maintain and evolve a system is proportionally tied to its complexity. It is therefore a worthy goal to reduce system complexity whenever possible. If network communication infrastructure is taken to be the system, then it naturally follows that the fewer implementations there are of SSL/TLS communication, the less likely it is that security vulnerabilities will exist. Relatedly, the cost to identify and correct vulnerabilities will be proportionally smaller. Said simply, it's much easier to guard one door than it is to guard many.

Suggesting that a "monoculture" is bad relies upon the same faulty premises of "security through obscurity." The failure with respect to OpenSSL and Heartbleed wasn't the monoculture. It was the lack of altruistic eyes scrutinizing it. More implementations would have only required more eyes.

Re:Specious Argument (1)

bill_mcgonigle (4333) | about 7 months ago | (#46829375)

It was the lack of altruistic eyes scrutinizing it.

That was a secondary effect. People who might want to analyze code want to do a good job, and there's a lot of code worth analyzing.

To do that job there are tools that help with that analysis. OpenSSL's use of non-standard internal memory management routines makes it resistant to use of such analysis tools.

Is it impossible for a code auditor to keep everything in his head? No, but it's tough and error-prone. Some people have found OpenSSL bugs before, of course, but there are ways to make it easier for auditors to stand a fighting chance.

That's largely what the OpenBSD team is doing - ripping out all of that unneeded memory management crap, killing OS/2, VMS, and MacOS7 support code, etc. The payoff should be more people looking at it, but it sure wouldn't hurt for some companies that save millions by using OpenSSL to throw the team a few bones once in a while to make it more regular. Or hire their own internal folks to do the same, if that would work out better.

Re:Specious Argument (1)

WaffleMonster (969671) | about 7 months ago | (#46830077)

That's largely what the OpenBSD team is doing - ripping out all of that unneeded memory management crap, killing OS/2, VMS, and MacOS7 support code, etc. The payoff should be more people looking at it,

Paring down unnecessary special memory routines and any silliness enabled by them is likely to be productive.

Removing Windows code appreciably reduces the pool of interested parties, and hence the number of people who care to audit your OpenSSL fork.

Re:Specious Argument (1)

Anonymous Coward | about 7 months ago | (#46829485)

Completely incorrect.

Monocultures are proven disastrous for the long term survivability of anything.

There is a reason human immune systems have evolved the way they have over millions of years: everyone's immune system responds differently to threats. Some are more effective against one type of disease, and less effective against another.

If everyone had the same immune system (monoculture) or exactly the same genetics, then a single virus capable of exploiting that fact could theoretically wipe out the entire species. Make it contagious enough, but with a long enough dormant phase and it would literally kill everyone.

In the same way that monocultures are so dangerous to living organisms, they are dangerous in security implementations.

As Heartbleed has proven, the bug was so widespread and damaging (17% of all secure web sites?) it had the capacity to cripple the world's economy.

Now imagine that every secure website on the planet was running the same SSL implementation. The heartbleed bug has proven that it only takes one bug in the right place to completely compromise the entire secure certificate system.

Your contention that there would be fewer bugs doesn't matter. No sufficiently complex system is ever free of bugs; therefore it just takes time for the "one" bug with the right characteristics to come along and cause an entire and complete collapse of the system.

Being against monocultures is not security by obscurity, it is survival by diversity. And is an extremely well established (mathematically) method/system for continued long term overall (ie system wide - not individual systems) safety.

Re:Specious Argument (0)

Anonymous Coward | about 7 months ago | (#46831095)

You're conflating two different things: genetic monoculture is bad because genes cannot actively identify threats and quickly adapt to threats in the environment.
In open-source software a monoculture means that every user can potentially identify and fix a problem (which is exactly what happened here).

Survival by diversity is a valid strategy for genes since one threat may permanently eliminate an entire population of a given strain, but in software, you're one patch away from a permanent fix.

You still get the same disadvantage of a genetic monoculture: a threat to one system is a threat to the entire ecosystem. But the discovery of such bugs is proportional to the number of users.
Splitting the functionality into multiple implementations will only give you a lot of implementations that are each a bit more vulnerable, while preserving the ecosystem better, since each bug can only be exploited in that reduced space.

This is especially true in the security field due to the complexities involved with properly implementing and analysing cryptography. Reducing the size of the scope to a single reference library improves security by focusing most of the available crypto talent on it.

Of course software is never perfect, but if I had to use crypto in my software today I'd still go with OpenSSL (or better yet - LibreSSL) despite the recent news.
You're of course always welcome to go with some other library you found on GitHub, but don't complain when your software can be trivially cracked by an experienced hacker.

Re:Specious Argument (0)

Anonymous Coward | about 7 months ago | (#46830515)

how did this pseudo-intellectual garbage get modded up? did i stumble onto bizarro slashdot, news for managers?

Re:Specious Argument (0)

Anonymous Coward | about 7 months ago | (#46830903)

Design redundancy is a good property in a critical system. It is just normally difficult to argue for in front of a cost cutting CFO, or a bunch of hackers just wanting to have some fun. Software modularity is also not there to reduce the costs.

OpenBSD's Fork Is The Answer (1)

Mike Greaves (1236) | about 6 months ago | (#46828871)

Too bad they hadn't forked OpenSSL a while back. Now there is a competing library.

Now we need to support that fork, and assess the feasibility of porting to Linux as well as the other BSD's, of course.

Do they have a new name for it yet?

If SSL = "Secure Sockets Layer", how about: ActualSSL (it's actually secure), DaemonSSL, Pitchfork(ed)SSL, something...

DOH! Re:OpenBSD's Fork Is The Answer (1)

Mike Greaves (1236) | about 6 months ago | (#46828903)

It looks like they called it: LibreSSL

http://www.libressl.org/

That's what it looks like, anyway.

Support them if you can!

Re:OpenBSD's Fork Is The Answer (1)

Anonymous Coward | about 6 months ago | (#46828925)

Too bad they hadn't forked OpenSSL a while back. Now there is a competing library.

Now we need to support that fork, and assess the feasibility of porting to Linux as well as the other BSD's, of course.

Do they have a new name for it yet?

If SSL = "Secure Sockets Layer", how about: ActualSSL (it's actually secure), DaemonSSL, Pitchfork(ed)SSL, something...

We need a GNU/SSL fork too. GPL forever!

Re:OpenBSD's Fork Is The Answer (1)

Above (100351) | about 7 months ago | (#46829071)

We need a GNU/SSL fork too. GPL forever!

It exists, and is called GnuTLS [gnutls.org] . All the developers I've worked with who've looked at it and OpenSSL say it is worse than OpenSSL, although I don't remember the particulars of why. Feel free to support it if you prefer a GNU alternative.

Re:OpenBSD's Fork Is The Answer (0)

Anonymous Coward | about 7 months ago | (#46830539)

I wouldn't say it's worse than OpenSSL. But for something starting from scratch, and after the SSL protocol more-or-less stabilized (i.e. after TLS 1.0), it's needlessly complex. Given the same amount of time to bit-rot as OpenSSL had, it could easily end up worse.

Re:OpenBSD's Fork Is The Answer (1)

gatkinso (15975) | about 7 months ago | (#46829205)

PolarSSL.

Re:OpenBSD's Fork Is The Answer (2)

gatkinso (15975) | about 7 months ago | (#46829231)

PolarSSL is GPLv2

C language vs FOSS (0)

Anonymous Coward | about 7 months ago | (#46829381)

Some say the C language is the problem. Maybe they are right that it will do more harm to FOSS than proprietary software ever will...

Re:C language vs FOSS (1)

gnupun (752725) | about 7 months ago | (#46831021)

Only C (and its derivatives C++ and Obj-C) seems to suffer from these buffer overrun issues, because of its heavy dependence on pointers. Most other languages (Pascal/Delphi, Java, Python, Basic) have bounds checking and will not allow the programmer to read/write a variable x using another variable y where y has no link to x.

If Pascal, with runtime bounds checking enabled, had been used instead of C, you would've gotten an equivalent executable with less than 1% speed loss (due to error checking) compared to C/C++.
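What that bounds checking buys you is easy to sketch; here's the checked behaviour in Python (the equivalent raw C read would compile fine and silently hand back whatever lives next to the buffer):

```python
buf = list(b"payload")  # stand-in for a fixed-size buffer of bytes

def read_at(buffer, index):
    """Bounds-checked read: refuses out-of-range access instead of
    leaking whatever happens to sit beside the buffer in memory."""
    if not 0 <= index < len(buffer):
        return None  # a raw C read here would return garbage instead
    return buffer[index]

print(read_at(buf, 3))    # 108, i.e. ord('l')
print(read_at(buf, 999))  # None: caught at runtime, no silent over-read
```

Heartbleed was precisely the unchecked variant of this read, with the index coming from the attacker.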

not apples and microsofts (0)

Anonymous Coward | about 7 months ago | (#46829627)

Grouping together the Windows OS business monopoly and the wide use of OpenSSL fails to recognize a critical point. Windows OS is huge and closed. OpenSSL, for all the naive press about how big a code base it is, is actually pretty small (and could be much smaller) and completely open. Microsoft has a well-funded army to ensure its monopoly; laziness on the parts of users has made OpenSSL ubiquitous. The two situations are entirely different and their inherent problems must be solved in entirely different ways.

The problem of Windows is a big one; a lot of money and time will have to be spent to deal with it, and probably won't be for a very long time. The problem with OpenSSL is not that big and is being dealt with promptly. The cost estimates we've been reading about are silly: the IT people are employed, anyway, so it's not like that much extra dough is being shelled out for dealing with the Heartbleed problem, and all that's happening is people are scurrying to do what should've been done all along.

For those criticizing OSS, let me ask: what's the solution? More closed-source monopolies? No! The solution, of course, is more OSS! What was the failure here? *Not enough* (to be precise: exactly one) OSS! In the future, we need >= 2 well-supported OSS projects fulfilling the same infrastructure need.

Different reasons (0)

Anonymous Coward | about 7 months ago | (#46830831)

Microsoft is a monoculture because of compulsion, and despite sucking more than the alternatives.

OpenSSL isn't quite a monoculture, there are a lot of alternatives, they just happen to suck more.

Incorrect. Monocultures are not bad in software. (1)

brunes69 (86786) | about 7 months ago | (#46831453)

There are pros and cons to a monoculture in code, but the pros vastly outweigh the cons.

It is no different than any other code-reuse system, from functions that are called from many places in a system, to common libraries, to open-source software running 1/2 the internet. Yes, if there is a bug in a widely-used piece of code, it affects a lot of parts of the system - and the more places it is used the worse the bug is - TEMPORARILY.

The upside is, because this code is used in so many parts, these bugs are rarely missed, because of the ramifications. And when they do happen to be missed and are later discovered, fixing the bug "fixes" the whole system at once, rather than fixing just one piece and then having to go check all the other pieces of the system to see if the same fix is needed elsewhere.

If the web used many different SSL libraries, some open and some closed, that is not a solution, it is just opening more vectors for attacks and bugs. The more eyes on the code and the more people using it, the fewer the bugs will be. Reducing your attack surface is a HUGE part of security, in fact one of the #1 things. Reuse of code libraries is a major way to reduce that vector.

Definitely felt the monoculture hit before! (1)

danielzip53 (1717992) | about 7 months ago | (#46831937)

When the Tsunami hit the east coast of Japan and caused so much saddening destruction.

Our industry (MFP) was hit with a shortage of faxes, and this was purely down to a single low-cost chip from a factory in the Tsunami-affected region. It was a single point of weakness that no manufacturer was aware of. The supplier companies were actually all just distributors for the one single producer/factory of the chip.

Luckily the chip was simple and manufacturing was moved to another factory in China, but it still took several months for stock to reach a nominal level.

Everyone in the IT industry knows that single points of failure are the most critical aspect of any process. And it's not just the IT industry it's the same in all industries, eg logistics, mining, agriculture, etc. Hence why process design is big in all industries (or getting there).

What monoculture? (1)

xxxJonBoyxxx (565205) | about 7 months ago | (#46832143)

OK - here's a niche industry page listing about forty open source, commercial, and cloud solutions that are all secured by SSL, along with their responses to Heartbleed:
http://www.filetransferconsult... [filetransf...ulting.com]

Of these... maybe a third had OpenSSL... most of the rest used a Java stack, and many of the rest were on IIS or using MS crypto. Within my own company (about 1500 people and 20 web apps on a mix of platforms), Heartbleed affected exactly 3 sites.

If you looked around other industries and saw >50% affected rates maybe I'd believe "monoculture"...but if you're talking the entire web dev world, OpenSSL is just one of the top options.
