
Theo De Raadt's Small Rant On OpenSSL

timothy posted about 5 months ago | from the heartbleed-of-the-matter dept.

Encryption 301

New submitter raides (881987) writes "Theo De Raadt has been on a better roll as of late. Since his rant about FreeBSD playing catch up, he has something to say about OpenSSL. It is worth the 5 second read because it is how a few thousand of us feel about the whole thing and the stupidity that caused this panic." Update: 04/10 15:20 GMT by U L : Reader badger.foo pointed out Ted Unangst (the Ted in the mailing list post) wrote two posts on the issue: "heartbleed vs malloc.conf" and "analysis of openssl freelist reuse" for those seeking more detail.




"It's Not a Tumor" - Oh Wait, It Is (4, Interesting)

alphatel (1450715) | about 5 months ago | (#46714171)

This could get a lot more ugly...

Once upon a time, SSL certificates were signed directly against a single root certificate: each SSL cert issuer had a single root certificate authority for each of its product lines. Now all corps issue an SSL certificate that is signed against an INTERMEDIATE certificate, which in turn is signed against the root certificate.

What happens if a provider's server has this exploit and the intermediate certificate is compromised? EVERY certificate signed against that intermediate must be revoked. Or put another way, the ENTIRE PRODUCT LINE must be tossed into the garbage and all certs reissued.

So if Verisign or Thawte discovered that their intermediate certificate MIGHT have been exploited, would they say anything? The servers implementing those certs are in the hands of a select few - it would be easy to hide the possibility that they might have been compromised.

NO sense (-1)

Anonymous Coward | about 5 months ago | (#46714321)

You throw your hands about madly like a monkey. Go away, foolish clown. Oh look, the sky. It is falling down.

Re:"It's Not a Tumor" - Oh Wait, It Is (5, Insightful)

putaro (235078) | about 5 months ago | (#46714335)

If your intermediate certificate's signing keys are on internet-facing web servers, you're doing it wrong. That intermediate signing key should be treated with the same level of security as a root key.

Re:"It's Not a Tumor" - Oh Wait, It Is (3, Interesting)

Krojack (575051) | about 5 months ago | (#46714833)

As we all know, most high level hacks these days come from an internal computer getting infected with something.

Re:"It's Not a Tumor" - Oh Wait, It Is (1)

Anonymous Coward | about 5 months ago | (#46715083)

Exactly. Even air gapping is not a 100% effective defense.

wrong (-1)

Anonymous Coward | about 5 months ago | (#46715199)

wrong

Re:wrong (0)

Anonymous Coward | about 5 months ago | (#46715321)

wrong

Re:"It's Not a Tumor" - Oh Wait, It Is (5, Informative)

Anonymous Coward | about 5 months ago | (#46714363)

It is good practice to sign against an intermediate certificate. That way if it is compromised you can reject it and issue a new intermediate certificate signed by your root certificate. You can push the new certificates as updates since they would be validated against the root certificate.

You need to read up on authenticating the entire chain of certificates.

Re:"It's Not a Tumor" - Oh Wait, It Is (2)

Bill, Shooter of Bul (629286) | about 5 months ago | (#46714503)

I don't understand what you are trying to imply by contrasting the old way without intermediate certs vs the new way with them. Surely it's better with the intermediates, as the root cert that signs them can and should be used less often and stored in cold storage somewhere.

Re:"It's Not a Tumor" - Oh Wait, It Is (5, Informative)

Anonymous Coward | about 5 months ago | (#46714691)

I don't get what you're saying, and I think that's probably because you don't know what you're talking about. Having certificate chains is only a plus, the flat structure was crap. Here's how it works:

I have a root certificate that's universally trusted. It is used *only* to sign intermediate certificates. Having the cert out in public is fine, since it only contains the public half of the asymmetric public/private key pair. The private key sits on a server which is physically isolated from the world. By that, I mean that the root certificate and private keys are literally on servers with *no* network connections. When you want to generate a new intermediate certificate, you put the CSR on a USB stick, plug it in, and sign it from that machine. In this way you never have to worry about external threats gaining access to your private key (internal threats are an ever-constant concern, and you put in good safeguards to prevent against those).

Now that you have a chained hierarchy, you can use different intermediates to sign different end user certificates. Remember that both the root and intermediate have their own certificate revocation lists: the root can revoke intermediates (which means anything signed by them is null and void) and intermediates can revoke server or subordinate intermediate certs.

As a result of my chained hierarchy, if an intermediate is compromised, I can revoke it, without revoking every single end user / server certificate out there. This gives me finer grained control.

Now, I said that the root isn't even connected to the internet. Ideally the intermediate is not either. Ideally the user / server signing intermediate is behind a set of firewalls, and *pulls* signing requests from frontends, which it then processes, posting the resulting signed certs back. That way if you compromise the front end, you then have to compromise the firewall (which ideally is blocking all inbound connection requests / unconnected sockets / anything that is not the communication required for the intermediate server(s) to send pulls to the frontends) in order to get to the intermediate.

Your flat view of the world is draconian, wrong, uneducated, and probably hurts everyone who reads it by making them a little less educated.

Natural Born Cancer (5, Insightful)

VortexCortex (1117377) | about 5 months ago | (#46715047)

Well, what you are pointing out is that a CA is a single point of failure -- something actual security-conscious engineers avoid like the plague. What you may not realize is that the entire CA system is collectively compromised by ANY ONE of those single points of failure, because any CA can create a cert for ANY domain without the domain owner's permission. See also: the DigiNotar debacle. [wikipedia.org]

The thing is, nobody actually checks the cert chain, AND there's really no way to do so. How do I know if my email provider switched from Verisign to DigiCert? I don't, and there's no way to find out that's not susceptible to the same MITM attack.

So, let's take a step back for a second. Symmetric stream ciphers need a key. If you have a password as the key, then you need to get that key to both ends without anyone else learning it. You have to transmit the secret, and that's where Public Key Crypto comes in; however, it doesn't authenticate the identity of the endpoints, which is what the CA system is supposed to do. Don't you see? All this CA PKI system does is move the problem of sharing a secret from the password itself to which cert the endpoint is using -- THAT becomes the essential "secret" you need to know, and it has far less entropy than a passphrase!

At this time I would like to point out that if we ONLY used public key crypto between a client and server to establish a shared secret upon account creation, then we could use a minor tweak to the existing HTTP Auth Hashed Message Authentication Code (HMAC) proof-of-knowledge protocol (whereby one endpoint provides a nonce, the nonce is HMAC'd with the passphrase, and the unique-per-session resulting hash proves that the endpoints know the same secret without revealing it) to secure all connections quite simply: server and client exchange nonces and available protocols for negotiation; the nonces are concatenated and HMAC'd with the shared secret stored at both ends, then fed to your key-stretching / key-expansion system, AND THAT KEYS THE SYMMETRIC STREAM CIPHER SIMULTANEOUSLY AT BOTH ENDS, so the connection proceeds immediately with efficient symmetric encryption and no PKI CA system required.

PKI doesn't really authenticate the endpoint; it just obfuscates the fact that it doesn't by going through the motions and pretending to. It's security theater. SSL/TLS and PKI are essentially the Emperor's New Secure Clothes. At least with the shared-secret model I mention above, there's at worst that one-time small window of PK crypto for the secret exchange (failing to intercept account creation means no MITM), and at best you would actually have the CHANCE to exchange your secret key out of band: visit your bank in person and exchange the passphrase, etc., and then NO MITM could intercept the data. HTTP Auth asks for the password in a native browser dialog BEFORE showing you any page to log in (and it could remember the PW in a list, or even generate passwords by hashing the domain name with a master PW and some salt, so you could have one password for the entire Internet). That's how ALL security should work: it ALL relies on a shared secret, so you want the MOST entropic keyspace, not the least entropic selection (which CA did they use). If you're typing a password into a form field on a web page, it's ALREADY game over.

Do this: Check the root certs in your browser. For Firefox > Preferences > Advanced > Certificates > View. See that CNNIC one? What about the Hong Kong Post? Those are Known bad actors that your country is probably at cyber war with, and THEY ARE TRUSTED ROOTS IN YOUR FUCKING BROWSER?! Not to mention all the other Russian ones or Turkish, etc. ones that are on the USA's official "enemy" list. Now, ANY of those can pretend to be whatever domain's CA they want, and if your traffic bounces through their neck of the woods they can MITM you and you'll be none the wiser. Very few people if anyone will even inspect the cert chain which will show the big green bar, and even if they do they really can't know whether the domain has a cert from these trusted CAs just to comply with that country's laws or whatever.

So, I would put it to you that this whole "Heartbleed" business is totally overblown. If you're NOT operating under the assumption that the entire TLS/SSL Certificate Authority / Public Key Infrastructure is purposefully defective by design and that all your keys are bogus as soon as they are created, then you're just an ignorant fool. Heartbleed doesn't change jack shit for me. My custom VPN uses the HMAC key expansion protocol mentioned above. I don't do ANYTHING online that I wouldn't do on the back of a postcard, because that's the current level of security we have.

I would STRONGLY encourage you to NOT TRUST the IETF or ANY security researchers who think the SSL/TLS system was ever a secure design. It was not ever secure, and this has been abundantly clear to everyone who is not a complete and utter moron. Assuming that the entire web security field isn't completely bogus is bat-shit insane.

When comments... (2)

darkain (749283) | about 5 months ago | (#46714241)

When did Slashdot comments become full front-page stories? This was already posted a few times as comments in the last OpenSSL post.

Re:When comments... (5, Insightful)

Anonymous Coward | about 5 months ago | (#46714283)

Theo De Raadt is a noteworthy personality, same as Linus Torvalds or Bill Gates. Their comments on important matters are important.

Re:When comments... (5, Insightful)

MightyMartian (840721) | about 5 months ago | (#46714411)

As much as Theo can be an utter and insufferable prick, on this score he's right. This was an insanely trivial error which has exposed who knows how many systems to potential breaches. Right now I'm starting a full audit of our systems. We use OpenVPN for our interoffice WAN as well as for clients, many of them Windows, iOS and Android, not to mention reviewing all our *nix clients running SSH daemons. We're only a relatively small operation, and it's still a monumental pain in the ass.

Re:When comments... (1)

Ukab the Great (87152) | about 5 months ago | (#46714533)

Theo De Raadt makes comments about Linux being for losers. Linus Torvalds makes comments about OpenBSD users being masturbating monkeys. You usually have to take some of their comments with a grain of salt.

not developed by a responsible team? (1, Interesting)

G3ckoG33k (647276) | about 5 months ago | (#46714559)

De Raadt wrote "OpenSSL is not developed by a responsible team".

On the contrary, I believe it was developed by a responsible team, that unfortunately made an error.

Almost everyone has made errors, even if most go unnoticed and are essentially harmless. This one appears different, but I don't think it justifies De Raadt's moronic comment.

Re:not developed by a responsible team? (5, Insightful)

tsalmark (1265778) | about 5 months ago | (#46714641)

My understanding is Theo said: developers of a security product made a conscious decision to turn off a last line of defence, for all platforms, in the interest of performance on some systems. That does not sound like an unfortunate error to me; it sounds outright irresponsible.

Re:not developed by a responsible team? (4, Insightful)

squiggleslash (241428) | about 5 months ago | (#46714729)

He said that, but is that what happened? Were OpenSSL's developers aware that malloc()/free() have special security concerns that OpenBSD's developers had specifically addressed (I assume that's what is meant by "a conscious decision to turn off last-line-of-defense security")?

I understand Theo's point to a certain degree, but I'm more inclined to feel the problem is with OpenSSL's developers clearly not understanding the security concerns about malloc(). That is, if they were aware that OpenBSD's malloc() contained code to ensure against data leakage, it would seem to me to be highly probable they would have implemented the same deal in OpenSSL given, you know, their entire point is security. The fact they didn't makes me think they didn't know OpenBSD's malloc() had these measures in the first place.

Should they have done? And how should they have known? Genuine question, and finger pointing would be inappropriate right now: how do we make sure that certain security strategies and issues are as well known as, say, stack pointer issues are today?

Re:not developed by a responsible team? (3, Informative)

EvanED (569694) | about 5 months ago | (#46714853)

Were OpenSSL's developers aware that malloc()/free() have special security concerns that OpenBSD's developers had specifically addressed (I assume that's what is meant by "a conscious decision to turn off last-line-of-defense security")?

My impression is OpenBSD's hardened allocator is relatively common knowledge, and it definitely should be among people writing security software. And it's not even remotely the only allocator out there that does that sort of thing, though it's probably the best known on the industrial side.

Re:not developed by a responsible team? (4, Informative)

Eunuchswear (210685) | about 5 months ago | (#46714975)

That is, if they were aware that OpenBSD's malloc() contained code to ensure against data leakage, it would seem to me to be highly probable they would have implemented the same deal in OpenSSL given, you know, their entire point is security. The fact they didn't makes me think they didn't know OpenBSD's malloc() had these measures in the first place.

Not just OpenBSD's malloc(). glibc can do the same thing if you set MALLOC_PERTURB_.

Re:not developed by a responsible team? (5, Informative)

Eunuchswear (210685) | about 5 months ago | (#46715013)

Oh, and read this: http://www.tedunangst.com/flak/post/analysis-of-openssl-freelist-reuse [tedunangst.com]

In effect, at some points OpenSSL does:

        free(rec);
        ...
        rec = malloc(...);

and assumes that rec is the same block.

Eeew.

Re:not developed by a responsible team? (4, Interesting)

squiggleslash (241428) | about 5 months ago | (#46715077)

Ouch. Serious ouch. Thank you. That suggests that the situation is considerably worse than De Raadt said.

Re:not developed by a responsible team? (4, Insightful)

mrchaotica (681592) | about 5 months ago | (#46715237)

Should they have done? And how should they have known? Genuine question, and finger pointing would be inappropriate right now: how do we make sure that certain security strategies and issues are as well known as, say, stack pointer issues are today.

Hell yes they should have known, because the people responsible for one of the most important security applications in the entire world damn well ought to be experts!

Re:not developed by a responsible team? (-1, Flamebait)

timeOday (582209) | about 5 months ago | (#46714989)

No more irresponsible than writing the software in C in the first place. If you wanted checks like this universally enforced, you would use a language that doesn't require you to remember to do them every single time. The heartburn that comes with higher-level languages is exactly the type of heartburn that caused this check to be disabled.

I don't put much stock in retrospective finger-pointing. Almost all bugs are trivial in retrospect.

Re:not developed by a responsible team? (0)

Anonymous Coward | about 5 months ago | (#46714665)

Responsible or not, this error happened on their watch, and it is a big one, potentially one that gets UNIX in general tossed for "more secure" Microsoft machines at every juncture. This error is going to cost millions of dollars and hundreds of thousands of man-hours to rectify... and that is just closing the hole. Bog knows what damage exploiters have done with this, since SSL is critical to virtually -every- fscking thing on the Internet.

Re:not developed by a responsible team? (1)

WaywardGeek (1480513) | about 5 months ago | (#46714683)

Sometimes the individuals involved can be responsible while the team acts irresponsibly. For example, why is the passphrase of my id_rsa key protected by only one round of hashing, with no option for increased rounds [sys4.de]? I hear there are good things coming, like being able to use bcrypt, but this is a scandal. Only a security-ignorant fool would want his passphrase attached to an id_rsa key with no password stretching at all. So... how many fools do we have out there? I surely hope you weren't counting on your passphrase being secure just because the OpenSSL team was involved.

Re:not developed by a responsible team? (1)

Anonymous Coward | about 5 months ago | (#46714731)

er, a foundational security component of most 'secure' sites on the web has been leaking who-knows-what information for who-knows-how-long over this. De Raadt can be a bit over-the-top at times, but his comment is neither moronic nor does it need you to consider it justified.

Re:not developed by a responsible team? (1)

dsparil (844576) | about 5 months ago | (#46714767)

The irresponsible part is that OpenSSL does not even compile if you decide your system malloc is fine to use. It is impossible to avoid using OpenSSL's buggy allocator.

Re:not developed by a responsible team? (3, Informative)

Anonymous Coward | about 5 months ago | (#46715005)

De Raadt wrote "OpenSSL is not developed by a responsible team".

On the contrary, I believe it was developed by a responsible team, that unfortunately made an error.

Almost everyone has made errors, even if most go unnoticed and are essentially harmless. This one appears different, but I don't think it justifies De Raadt's moronic comment.

Not so sure they're responsible.

Did you read this [tedunangst.com] ?

This bug would have been utterly trivial to detect when introduced had the OpenSSL developers bothered testing with a normal malloc (not even a security focused malloc, just one that frees memory every now and again). Instead, it lay dormant for years until I went looking for a way to disable their Heartbleed accelerating custom allocator.

Building exploit mitigations isn’t easy. It’s difficult because the attackers are relentlessly clever. And it’s aggravating because there’s so much shitty software that doesn’t run properly even when it’s not under attack, meaning that many mitigations cannot be fully enabled. But it’s absolutely infuriating when developers of security sensitive software are actively thwarting those efforts by using the world’s most exploitable allocation policy and then not even testing that one can disable it.

The OpenSSL team doesn't fully test their product.

That's pretty much as good an example of incompetence as you can find.

Re:not developed by a responsible team? (1)

EvanED (569694) | about 5 months ago | (#46715153)

The OpenSSL team doesn't fully test their product.

I agree with Theo on the broader point, but disagree here: "their product" is the code they wrote with their custom memory allocator, not the code with changes someone outside the project made. Their custom allocator isn't enabled or disabled by configuration checks or anything like that; it's always part of the product.

It was still a pretty stupid decision.

Re:not developed by a responsible team? (3, Interesting)

Anonymuous Coward (1185377) | about 5 months ago | (#46715329)

This bug would have been utterly trivial to detect when introduced had the OpenSSL developers bothered testing with a normal malloc (not even a security

This is simply not true, stop spinning it.

Even if OpenSSL used the system's malloc, with all its mitigation features, the bug would still work. The attacker just has to be more careful, lest he read free()d and unmapped memory and so cause a crash and (supposedly) leave some kind of meaningful trail.

Re:not developed by a responsible team? (1)

Anonymous Coward | about 5 months ago | (#46715041)

in light of this analysis: http://www.tedunangst.com/flak... [tedunangst.com] i'm inclined to agree with Theo on this one...

Re:When comments... (1)

Anon, Not Coward D (2797805) | about 5 months ago | (#46714313)

Come on... sometimes a comment is so informative that everyone should read it :)

Re:When comments... (0)

Anonymous Coward | about 5 months ago | (#46714491)

When did Slashdot comments become full front-page stories? This was already posted a few times as comments in the last OpenSSL post.

When the pool of quality submissions is very shallow, like it has been here for quite some time.

Re:When comments... (1)

Bill, Shooter of Bul (629286) | about 5 months ago | (#46714595)

Because it's Theo. His rants are usually pretty funny and thought provoking.

This one, I have to admit, is pretty tame/lame. Good security design thoughts as always, but not much of a rant.

5 second read? (1)

Anonymous Coward | about 5 months ago | (#46714259)

How the hell can you read 298 words in 5 seconds?

Re:5 second read? (5, Funny)

ObsessiveMathsFreak (773371) | about 5 months ago | (#46714579)

How the hell can you read 298 words in 5 seconds?

The trick is not to read TFA in the first place. Also, you must be new here.

Re:5 second read? (1)

gweihir (88907) | about 5 months ago | (#46715349)

You must be new to this "Internet" thing: It is easy! Just do not try to understand anything!

Summary for the lazy ones (5, Informative)

Anonymous Coward | about 5 months ago | (#46714281)

Years ago the BSD guys added safeguards to malloc and mmap, but they were disabled for all platforms in OpenSSL only because they caused performance problems on some platforms. He finishes by saying that OpenSSL is not developed by a responsible team.

Still don't get it (0)

Anonymous Coward | about 5 months ago | (#46714525)

Where can I find Theo De Raadt's "better roll"? I can imagine him being "on a roll", but on a better roll? That must really be something... better.

More commentary from OpenBSD's Ted Unangst (5, Informative)

badger.foo (447981) | about 5 months ago | (#46714297)

OpenBSD developer Ted Unangst (mentioned in the article) has gone into the code a bit more in two articles, both very well worth reading:

heartbleed vs malloc.conf [tedunangst.com]

and

analysis of openssl freelist reuse [tedunangst.com] . Short articles with a lot of good information.

Summary. (4, Insightful)

jythie (914043) | about 5 months ago | (#46714367)

So as far as I can tell, his rant is essentially that people should not use custom allocators and should instead rely on the general-purpose one built into libc, because system-wide hardening can be added there.

I can see the argument for most cases, that is kinda the point of a general-purpose allocator, but encryption (especially if you are doing lots of it) really strikes me as a case where you can benefit from explicit control over the behavior. I have worked on a number of applications where custom allocators had significant (user-facing, not just benchmark) impacts on performance. Ironically, it also meant we were able to do better checking than the general exploit detection kits, since we could bake more specific knowledge into the validator.

Re:Summary. (5, Insightful)

mr_mischief (456295) | about 5 months ago | (#46714575)

That's all true and correct. When you do that, though, you need to do at least as good a job as what you're circumventing. In this case OpenSSL didn't.

Re:Summary. (2, Insightful)

jythie (914043) | about 5 months ago | (#46714631)

True, they did not, but I would put that at the level of a mistake rather than being unreasonable.

Re:Summary. (5, Insightful)

Anonymous Coward | about 5 months ago | (#46715099)

But if your explicit reason for overriding the system allocator is to do something as-good-or-better, and you fail to achieve that (let alone validate that you've achieved it), that's a rather spectacular failure. Mistakes are common, sure, but this is sort of like saying the seatbelts that came with your car suck (they very well might), so you replace them with something "better" that ends up being little more than a piece of string you daintily lay across your lap. Kind of "unreasonable", no...?

Re:Summary. (0, Troll)

bluefoxlucid (723572) | about 5 months ago | (#46714597)

I can see his point; however he's still wrong. Mostly because he's Theo, though I'll credit he seems to have quietly accepted some of the things he was wrong about in the past.

So then a bug shows up which leaks the content of memory mishandled by that layer. If the memory had been properly returned via free, it would likely have been handed to munmap, and triggered a daemon crash instead of leaking your keys.

And then the five servers in the world running OpenBSD would be safe.

The moment the bug was discovered, we were all fucked. The bug would have gone out readily. This particular abuse obviously wasn't tested, or someone would have gone, "Wow, look at that! It gave back a chunk of data LOL! Let's fix that before release!" So even if they had satisfied Theo, OpenSSL still wouldn't have been tested, wouldn't have crashed, and would have been released vulnerable.

That release gets on every SSL-enabled server, including into products like SonicWall VPN and firewalls. These are vulnerable--I've tested them here, the exploit actually works. FortiNet's products--FortiMail, FortiGate--these are vulnerable. CentOS 6 is vulnerable. Debian is vulnerable. Ubuntu since forever is vulnerable. Fedora is vulnerable. SuSE is vulnerable. A bunch of shit on Windows is vulnerable.

Then we realize there's a bug.

Let's talk about someone else who's wrong: the OpenSSL team. The OpenSSL developers are pissed off because of the full disclosure practice that put this bug out there before a patch was released; they think responsible disclosure would be better. So you disclose responsibly, and the patches come out. Some hackers see a new OpenSSL release and look into it. They see malloc() fixes in networking code, or you put up a big "THIS IS A SECURITY PATCH" notice.

Start your engines, motherfuckers.

Distros start rebuilding immediately because there's an automatic trigger, and because they know already: there's a huge network nobody knows about made up of distro maintainers and upstream programmers all sharing security and bugfix information. Ubuntu's devs were ready for this shit before it was released, and probably would have had packages built before the source went out. Debian, RedHat, SuSE.

Twelve hours later, sysadmins start noticing.

By then, someone has reverse-engineered the patches. The moment packages start propagating, the source files are up and SRPMs are out. By the time you can see and react, someone has seen "SECURITY FIX" and read the diff; the best ones can build an exploit in half an hour for something like this. You might be hacked before you notice the patch.

So responsible disclosure would be better? The problem is this isn't a hack you'd see: you don't know if you're hacked. You wake up and your security is now handled by Schrodinger's Cat: your SSL keys are secure or not secure, and you don't know until you shake down the blackhats who have your keys, and maybe they bury the information when you do, so you still don't know. Not only that, but maybe blackhats secretly had the hack before a security researcher noticed it; that's rather common.

The moment this thing went out, it was too late. Your security is broken. And most people aren't on OpenBSD, so they'd get hacked even with OpenSSL using the system malloc(). They'd get hacked even with OpenBSD's protections, because OpenSSL is running almost always on Linux and OpenBSD's protections aren't on Linux. They'd get hacked anyway. That means everyone.

So no, Theo, you're not helping. You didn't discover the keystone that would save us all. Maybe god damn EVERYONE should implement these protections everywhere; glibc has some, but long reads won't crash glibc unless you hit an unmapped page.

Congrats Theo, your reputation continues.

Re:Summary. (5, Insightful)

akpoff (683177) | about 5 months ago | (#46714755)

Theo's point isn't that OpenBSD users would have been safe. It's that had OpenSSL crashed on OpenBSD (or any OS with similar mitigation in place) it would have surfaced the bug much sooner...perhaps before a worldwide release. Once found it would have been fixed and merged upstream to benefit all users.

This is really a specific case of the larger point behind avoiding monoculture, whether OS or hardware. OpenBSD continues to support older architectures in part because it forces them to work through issues that benefit other platforms, especially the one most of us use daily: x86.

Re:Summary. (1, Troll)

bluefoxlucid (723572) | about 5 months ago | (#46714947)

The problem is that it would only have crashed on OpenBSD if someone had tested this exploit on OpenBSD, meaning someone would have had to be looking for the exploit, meaning someone would have found a bunch of data coming back anyway and gone, "oh lol wtf?".

If someone crashed your server trying to exploit it, you would probably not notice; since there aren't many OpenBSD servers, probably nobody would have noticed these attacks happening and gone, "Whoa! A wild 0-day exploit!" And even if they had, there are all these non-OpenBSD servers getting hacked, and nobody can say whether they're hacked or not, so we just get into this exact situation sooner. We don't come away with smaller collateral damage; EVERY SSL CERTIFICATE EVER ISSUED IS NOW INVALID.

Nothing Theo suggested changes the situation. Implementing malloc() protection everywhere might; but if you can show any ability to beat that protection a percentage of the time, then we're also in the same situation. We're talking about reads, so canaries aren't it. If you're crashing out on reads, then every malloc(1) that crashes if you read 2 requires 4096 bytes of real RAM to store 1 byte of data--we get into costs.

Re:Summary. (1)

JDG1980 (2438906) | about 5 months ago | (#46715069)

If you're crashing out on reads, then every malloc(1) that crashes if you read 2 requires 4096 bytes of real RAM to store 1 byte of data--we get into costs.

4096 bytes of RAM as an unacceptable cost? Seriously? Besides, how often are you repeatedly allocating really tiny buffers like this? If you have to do that, then maybe there's a more fundamental problem with the way your code flow is designed.

Re:Summary. (1)

KingOfBLASH (620432) | about 5 months ago | (#46715173)

I thought Theo's comments were more geared to the point that OpenBSD's malloc() protections were implemented as a sort of seat belt. In a crash, you might still get killed even wearing your seat belt, but you have a better chance of living. Same thing with malloc(). Sure, maybe it wouldn't have helped; maybe even with good regression tests out there someone would still have missed it. But we'll never know whether someone would have caught it, and in something that is definitely going to be a target for the black-hat crowd, you should have some sort of security-mindedness.

Theo can be a bit of a wonk sometimes, but on this issue he's spot on.

Re:Summary. (1)

Anonymous Coward | about 5 months ago | (#46715007)

Furthermore, as Ted explains on his blog (already linked to in the comments), in this particular case OpenSSL would have aborted the connection on any system with a "standard" malloc - if only it were possible to reliably use that malloc, that is.

Re:Summary. (1)

BasilBrush (643681) | about 5 months ago | (#46715061)

Distros start rebuilding immediately because there's an automatic trigger, and because they know already: there's a huge network nobody knows about made up of distro maintainers and upstream programmers all sharing security and bugfix information.

"A huge network nobody knows about". The open source concept is fundamentally fucked, isn't it. Security issues happen with Windows or OS X, and it's clear - you're vulnerable, until you get a certain version of the OS, then it's patched.

Re:Summary. (1)

Kremmy (793693) | about 5 months ago | (#46715243)

The point is that they found OpenSSL to break when compiled without the malloc wrapper. OpenSSL relies on incorrect behavior in a malloc wrapper. Relies on it. Doesn't work without it. The result is the heartbleed bug. The result is that OpenSSL must be audited from the ground up and fixed so it doesn't require a broken memory allocation routine. Theo is still right.

audit (0)

Anonymous Coward | about 5 months ago | (#46714379)

Maybe it's time to re-audit all the OpenSSL code.

Why OpenSSL is so popular? (5, Interesting)

sinij (911942) | about 5 months ago | (#46714383)

Why is OpenSSL so popular? It has a FIPS-certified module, and that becomes important when selling your product to the government.

So what could be done to prevent something like this from happening in the future? People will keep writing bad code; that's unavoidable. But what automated tests could be run to avoid the worst of it? Someone with direct development experience, please educate the rest of us.

Re:Why OpenSSL is so popular? (-1, Troll)

Anonymous Coward | about 5 months ago | (#46714473)

Open SORES period = popular since /. bullshitters "p.r.'d it" (lied) to uneducated masses only to have it start blowing up in their faces now lately (another one to witness as to that is ANDROID, a Linux, being shredded daily with exploits - despite YEARS of /. b.s. to the effect of "Linux = Secure, Windows != Secure" crap that other dolts believed, stupidly). Now, the real truth's out, and you see the 'spinmasters' for it making outright excuses for their screwups and lies.

Re:Why OpenSSL is so popular? (1)

sinij (911942) | about 5 months ago | (#46714543)

OpenSSL is still better than the alternative - home-brewing your own crypto.

Re:Why OpenSSL is so popular? (0)

Anonymous Coward | about 5 months ago | (#46714741)

I hear this all the time, not to say it is not right, but are there examples of small companies that got burned for home-brewing their own crypto?

Re:Why OpenSSL is so popular? (1)

sinij (911942) | about 5 months ago | (#46715125)

Tons. My favorite example is an encrypting ransomware that messed up its key length and as a result could be brute-forced.

http://blog.cassidiancybersecu... [cassidianc...curity.com]

Re:Why OpenSSL is so popular? (0)

Anonymous Coward | about 5 months ago | (#46715223)

Even if you can make a secure stack, it would be a significant effort to implement something like TLS, and it requires a detailed understanding of the cryptographic primitives it uses and of how to write secure code.

"Open SORES" still beats the snot of MS (1)

walterbyrd (182728) | about 5 months ago | (#46714601)

Linux is far more secure than Windows. Also more reliable, and it boots faster. It doesn't spend ten minutes updating when I turn the PC on or off.

Maybe you should take your childish trolling somewhere else?

Re:"Open SORES" still beats the snot of MS (0)

Anonymous Coward | about 5 months ago | (#46715233)

Linux is far more secure than Windows. Also more reliable, and it boots faster. It doesn't spend ten minutes updating when I turn the PC on or off.

Maybe you should take your childish trolling somewhere else?

Really? More reliable? Boots faster? My Windows 8.1 boots in less than 2 seconds. What about yours?

Re:Why OpenSSL is so popular? (4, Insightful)

swillden (191260) | about 5 months ago | (#46714633)

People will keep writing bad code, this is unavoidable, but what automated tests could be run to make sure to avoid the worst of it?

Automated testing for security problems doesn't really work. Oh, you can do fuzzing, but that's hit and miss, and general unit testing can catch a few things, but not much. Mostly, security code just requires very careful implementation and code review. Eyeballs -- smart, experienced eyeballs.

OpenSSL has terrified me for years. The code is very hard to read and understand, which is exactly the opposite of what's desired for easy review and validation of its security properties. It needs to be cleaned up and made as simple, straightforward and accessible as possible, or replaced with something else that is simple, straightforward and accessible. My theory on why it is the way it is -- and it's far from unusual in the crypto library world -- is that cryptographers tend not to be good software engineers, and software engineers tend not to have the cryptography skills needed.

I spend some (not very much, lately) of my time working on an open source crypto library called Keyczar that tries to address one part of this problem, by providing well-engineered crypto APIs that are easy to use and hard to misuse. That effort focuses on applying good engineering to the boundary between library and application code, which is another source of rampant problems, but Keyczar uses existing crypto libs to provide the actual implementations of the primitives (the C++ implementation uses openssl, actually). I've long wished we could find crypto libs that were well-engineered, both internally and in their APIs.

Re:Why OpenSSL is so popular? (3, Insightful)

Daniel_Staal (609844) | about 5 months ago | (#46715079)

In this case though, general unit testing should have caught the bug: there's a compile-time option which, if used, caused the affected versions of OpenSSL to crash. (Because it disables the internal freelist, which OpenSSL was relying on in one location...) So, good unit testing would have helped.

Basically, unit testing should be able to tell you if you've implemented the algorithm competently. It doesn't say if the algorithm is any good, just that your version of it works to the spec.

Re:Why OpenSSL is so popular? (2)

Bill_the_Engineer (772575) | about 5 months ago | (#46714649)

Why OpenSSL is so popular? It has FIPS-certified module, and this becomes important for selling your product to the government.

FIPS certification is for a specific compiled version of OpenSSL and is not a blanket certification. The only way you are FIPS compliant is if you document that your product uses the exact same compiled version of OpenSSL or you submit your version of OpenSSL to be certified.

Re:Why OpenSSL is so popular? (1)

jythie (914043) | about 5 months ago | (#46714667)

Well, it is popular because it is a generally well regarded and vetted package that supports a fairly rich set of cryptography tasks out of the box.

As for what could be done in the future? Well, automated tests really only cover cases you think about, and stress tests may or may not actually notice something. To a degree, there will always be things that slip through, and most of the time things are fixed and patched. In this case something unusually bad slipped through.

Re:Why OpenSSL is so popular? (4, Insightful)

MarcoAtWork (28889) | about 5 months ago | (#46714835)

it is a generally well regarded and vetted package that supports a fairly rich set of cryptography tasks out of the box.

I would see that as a drawback for using it in webservers: if I am writing something internet-facing, I want to use the smallest and simplest possible library that does the job. Maybe it's time to fork OpenSSL into openssl-core / openssl-extras, with openssl-core carrying only the most minimal functionality for securing connections? I would honestly also support only a few platforms for -core, to simplify code analysis even further (the more ifdefs, the more possible issues).

Re:Why OpenSSL is so popular? (4, Insightful)

frank_adrian314159 (469671) | about 5 months ago | (#46714821)

First, make sure that code that must be secure is transparent. That means little (or no) optimizations, standard calls to OS functions, and clearly structured. It's clear that the OpenSSL developers made their code more opaque than was prudent and the many eyes of open source land could not see through the murk. Yes, clearer code would mean that it ran more slowly and some folks would need to run a few more servers, but the security problem might have been uncovered sooner (or not have happened) if someone hadn't thought that performance was a reason to make the code more complex.

Second, formal independent review would have helped. Most code (especially in volunteer-based open source projects) is only vetted by people directly on the development team. Any piece of software as ubiquitous and critical to the operation of today's internet as OpenSSL cannot have verification and validation mainly by its own developers. For software like this, where security is critical, you should have external review. Start an independent project that vets these things, folks.

Third, understand the limits of testing vs. design. More unit tests would not have caught this. Simple designs lead to simple and correct implementations. Complex designs (or no designs) lead to seas of unit tests that simply tell you the ways the code happens not to be broken at the moment. Code like that in OpenSSL should ideally be simple enough to be formally proved correct.

I think we've known why these sorts of things happen ever since I entered the field thirty years ago. We have ways to prevent them, but they usually take time, money, or lowered performance. That they are still happening because of performance zealotry, bad process, and a "teh web-speed is everything" mentality is a black mark on our profession.

Hindsight is 20/20 (4, Insightful)

slyborg (524607) | about 5 months ago | (#46714489)

So, it's always great fun bashing "obvious" bugs, especially when they have such an impact, but let it be noted that thousands of implementers used OpenSSL to build systems, taking the package at face value despite these now-"obvious" deficiencies in its development process. If they were that concerned about security, they would have done what Google did and audited the code. There are of course many practical reasons why people can't do that, but regardless, the blame arrow here points both ways.

Re:Hindsight is 20/20 (1)

sinij (911942) | about 5 months ago | (#46714657)

Google can afford to audit the code, and has reasonable expectation of meaningful results. The rest of us? Not so much.
 
Are there any decent automated code auditing tools that could be used by smaller shops?

Re:Hindsight is 20/20 (4, Insightful)

DMUTPeregrine (612791) | about 5 months ago | (#46714705)

OpenSSL's code is a mess. Go, read it.

Now that you're back from your stay in the sanitarium, would you like to consider that rewriting it might be a better choice than auditing? Yes?

Let's just make sure Nyarlathotep isn't on the dev team this time...

Re:Hindsight is 20/20 (1)

imikem (767509) | about 5 months ago | (#46715039)

Great power, great responsibility. Adding features to a piece of software this critical to so much infrastructure needs to be taken ever so seriously. Multiple levels of code audits, for starters. Testing/fuzzing. I'm not qualified to do those things, but I still get to clean up the mess left by the whole fiasco.

So what is an alternative to OpenSSL? (0)

Anonymous Coward | about 5 months ago | (#46714509)

Forgive my ignorance, but what could someone use as an alternative?

Re:So what is an alternative to OpenSSL? (4, Interesting)

mr_mischief (456295) | about 5 months ago | (#46714589)

GnuTLS, which recently people were being told to avoid in favor of OpenSSL. You see, there was this bug...

Re:So what is an alternative to OpenSSL? (1)

Ignacio (1465) | about 5 months ago | (#46714895)

GnuTLS, NSS, Botan.

Re:So what is an alternative to OpenSSL? (1)

Aethedor (973725) | about 5 months ago | (#46715053)

Definitely PolarSSL [polarssl.org] .

How about ... (0)

Anonymous Coward | about 5 months ago | (#46714541)

How about he replies to the questions that the Slashdot community asked him some time ago?

He seems to go out of his way to piss people off ...

Re:How about ... (2)

EmagGeek (574360) | about 5 months ago | (#46714607)

He's just trying to be a bigger dick than Linus.

Re:How about ... (1)

Marginal Coward (3557951) | about 5 months ago | (#46714687)

Size matters?

Bug Looks Deliberate (5, Interesting)

Anonymous Coward | about 5 months ago | (#46714587)

That code is almost a text book example of material that is submitted to the Underhanded C contest...

http://en.wikipedia.org/wiki/Underhanded_C_Contest

Rev Numbers (1)

ISoldat53 (977164) | about 5 months ago | (#46714591)

I'm not a developer and don't know the convention for version numbers but shouldn't they make the rev number of the repaired OpenSSL something more distinctive than adding "g" to the third digit of the new code? Maybe change it to version 2.0.0 or something more obvious.

that's pretty standard (1)

Chirs (87576) | about 5 months ago | (#46714711)

Major numbers are often reserved for things that break backwards compatibility, minor numbers often denote new features, and the patch revision denotes patches.

Personally I would have bumped the patch rev (the third number).

Who is Theo De Raadt? (-1)

Anonymous Coward | about 5 months ago | (#46714619)

And why do we care about his opinion?

As always, it's one of these names that most people in the tech community have never heard of.

Re:Who is Theo De Raadt? (2)

dingen (958134) | about 5 months ago | (#46714715)

Because who in the world has ever heard of OpenSSH, right?

Re:Who is Theo De Raadt? (4, Insightful)

TCM (130219) | about 5 months ago | (#46714757)

If you've never heard of him, you're not part of any important "tech community". Period.

Re:Who is Theo De Raadt? (3, Informative)

KingOfBLASH (620432) | about 5 months ago | (#46715205)

Theo De Raadt is the king of tinfoil hats, and behind OpenBSD -- a version of BSD designed to be as secure as possible.

You fai7 1t!? (-1)

Anonymous Coward | about 5 months ago | (#46714675)

Goals. It's when VISIONS GOING failure, its caorpse of BSD/OS. A

Really? (2, Interesting)

oneandoneis2 (777721) | about 5 months ago | (#46714725)

"it is how a few thousand of us feel about the whole thing"

Then maybe you thousands should stop complaining and start contributing to the project, which is so under-resourced that problems like this are pretty much inevitable.

Wow (2)

Kazoo the Clown (644526) | about 5 months ago | (#46714765)

That's pretty scathing. I'd hate to be THOSE guys...

Unfortunately, this analysis seems to be spot-on (5, Insightful)

gweihir (88907) | about 5 months ago | (#46714799)

In addition, the mitigation countermeasures also prevent memory debuggers like Valgrind from finding the problem (Valgrind finds use-before-init for malloc'ed blocks, but not if there is a wrapper in between that re-uses blocks), and may also neutralize code-security scanners like Fortify.

I have to admit that while my original intuition was "screwup", this looks more and more like some parts of the OpenSSL team have been compromised and did things that make this kind of bug far more likely. Like their own insecure memory allocation. Like not requiring time-of-use boundary checks or having any secure coding guidelines in the first place. Like documenting everything badly so potential reviewers get turned away. Like not having working review for patches, or a working fuzz-testing set-up (which would have found this bug easily).

After all, the NSA does not have to sabotage FOSS crypto software. They just have to make sure the quality is low enough. The bugs they can exploit will follow. And the current mess is just a plain classic. Do enough things wrong and eventually stuff breaks spectacularly.

His rant could apply to almost any large project (3, Insightful)

PhrostyMcByte (589271) | about 5 months ago | (#46714863)

A lot of large performance-sensitive projects implement custom allocators in the form of arenas and freelists. Lots of platforms have a fast malloc implementation these days, but none of them will be as fast as this for the simple reason that the program knows more about its memory usage patterns than any general-purpose allocator ever could.

Not to say I can't understand Theo's point of view -- if he wants maximum security, then a program which bypasses one of his layers in the name of performance might not be the best for him.

On the flip side, the standards have no notion of such security layers and I feel it is perfectly reasonable for a team to not throw away performance in the interests of some platform-specific behavior. This was a bug, pure and simple. There's nothing wrong with using custom allocators. To say that "OpenSSL is not developed by a responsible team" is simply nonsense.

Re:His rant could apply to almost any large projec (3, Insightful)

JDG1980 (2438906) | about 5 months ago | (#46715009)

A lot of large performance-sensitive projects implement custom allocators in the form of arenas and freelists. Lots of platforms have a fast malloc implementation these days, but none of them will be as fast as this for the simple reason that the program knows more about its memory usage patterns than any general-purpose allocator ever could.

This is security software. You don't sacrifice the library's core functionality to make it run a bit faster on the old Celeron 300 running Windows 98.

Re:His rant could apply to almost any large projec (1)

PhrostyMcByte (589271) | about 5 months ago | (#46715109)

This is security software. You don't sacrifice the library's core functionality to make it run a bit faster on the old Celeron 300 running Windows 98.

malloc's core functionality is to allocate memory. Any security additions are platform-specific and irrelevant.

De Raadt is wrong (2, Interesting)

stephc (3611857) | about 5 months ago | (#46714875)

This is not a problem with OpenSSL, the C language, or the malloc implementation; it's a problem because everyone relies on the same black box they do not understand, because using it is "standard" and common practice. The only long-term defense against this kind of vulnerability is software (and hardware?) diversity. Software built on custom SSL implementations may have even worse vulnerabilities, but nobody will discover them, and even if they do, it won't affect everyone on the planet. When I read Theo De Raadt, I fear his "solution" may only worsen the problem. We can't have all our secrets protected by the exact same door, no matter how strong the door is, once it's broken...

Re:De Raadt is wrong (2)

JDG1980 (2438906) | about 5 months ago | (#46714991)

This is not a problem with OpenSSL, or the C Language or the Malloc implementation, this is a problem because everyone is relying on the same black box they do not understand.

That's a cop-out. Any kind of advanced economy needs division of labor. This is no less true of the IT industry than anywhere else. The people building the "black box" need to know what they're doing and it needs to work. Period.

Re:De Raadt is wrong (1)

stephc (3611857) | about 5 months ago | (#46715319)

The people building the "black box" need to know what they're doing and it needs to work. Period.

But human nature prevents it; we have known for quite a long time that software is never perfect and that security is never absolute. Diversity is the solution mother nature uses. I've written quite a lot of backend/server code, and I tend to use non-standard code to avoid vulnerabilities. Interoperability and common standards are a very good thing, but we don't all have to use the same implementation. Also, never trust something you don't understand.

Re:De Raadt is wrong (1)

tomhath (637240) | about 5 months ago | (#46715291)

I doubt "diversity" in this case is a good idea. Instead of one bug in one package you would end up playing bop-a-mole with many bugs in a few packages.

Whinging & Complaining (2)

Big Hairy Ian (1155547) | about 5 months ago | (#46714899)

is not helping!! What would be useful is a list of popular services affected by this issue. The BBC is at least making a start here: http://www.bbc.co.uk/news/tech... [bbc.co.uk]

Theo who? (1)

Aethedor (973725) | about 5 months ago | (#46715111)

Wasn't that the guy with the lamest vendor response [pwnies.com] in 2007? A little less harshness in your comments would be appropriate, Mr. Theo.

Feeding Allo(g)ators (3, Interesting)

Anonymous Coward | about 5 months ago | (#46715351)

Allocators in this case make no significant difference with regard to the severity of the problem.

What is or is not in the process free list makes no difference when you can arbitrarily request any block of memory you please; it only slightly affects the chance of success when it becomes necessary to shoot in the dark. Let's not forget that most OS-provided optimized allocators keep freed memory in their heaps for some time as well, and may still not throw anything when it's referenced.

Looking at the code for this bug, I am amazed any of this garbage was accepted in the first place. There is no effort at all to minimize the chance of error: redundant + 3's and 1 + 2's are sprinkled everywhere, complete with unchecked allocation for good measure.

buffer = OPENSSL_malloc(1 + 2 + payload + padding);
 
r = dtls1_write_bytes(s, TLS1_RT_HEARTBEAT, buffer, 3 + payload + padding);

Suppose I should be glad 1 + 2 = 3 today and they have not used signed integers when dealing with lengths.

unsigned int payload;
unsigned int padding = 16; /* Use minimum padding */

... oh dear god ...

int dtls1_write_bytes(SSL *s, int type, const void *buf, int len)

Well at least they learned their lesson and have stopped sprinkling redundant and error prone type + length + padding garbage everywhere... see..

+ buffer = OPENSSL_malloc(write_length);
 
- buffer, 3 + payload + padding,
+ buffer, write_length,

and here ..

+ if (1 + 2 + 16 > s->s3->rrec.length)
+     return 0; /* silently discard */

+ if (1 + 2 + payload + 16 > s->s3->rrec.length)
+     return 0; /* silently discard per RFC 6520 sec. 4 */

+ if (1 + 2 + 16 > s->s3->rrec.length)
+     return 0; /* silently discard */

+ if (1 + 2 + payload + 16 > s->s3->rrec.length)
+     return 0; /* silently discard per RFC 6520 sec. 4 */

... oh well .. Looks like plenty of low hanging fruit to be had for anyone with a little spare time.
