
SSL/TLS Vulnerability Widely Unpatched

Soulskill posted more than 3 years ago | from the never-put-off-until-tomorrow-what-you-can-forget-entirely dept.

Encryption 103

kaiengert writes "In November 2009 a Man-In-the-Middle vulnerability for SSL/TLS/https was made public (CVE-2009-3555), and shortly afterwards demonstrated to be exploitable. In February 2010 researchers published RFC 5746, which described how servers and clients can be made immune. Software that implements the TLS protocol enhancements became available shortly afterwards. Most modern web browsers are patched, but the solution requires that both browser developers and website operators take action. Unfortunately, 16 months later, many major websites, including several that deal with real-world transactions of goods and money, still haven't upgraded their systems. Even worse, for a large portion of those sites it can be shown that their operators failed to apply the essential configuration hotfix. Here is an illustrative list of patched and unpatched sites, along with more background information. The patched sites demonstrate that patching is indeed possible."


WHAAAA ?? (0)

Anonymous Coward | more than 3 years ago | (#36504924)

Unpatched vuln?? How can this be in this day and age??!!

Wells Fargo Bank is ONE! (2)

al0ha (1262684) | more than 3 years ago | (#36506746)

Last week my father sent me the debug output of a NoScript dump where ABE detected potential XSS while he was connecting to Wells Fargo Online Banking - the XSS was bogus, but among the other messages was this disheartening line, printed over and over again:

www.wellsfargo.com : server does not support RFC 5746, see CVE-2009-3555

Almost as lame as Citi's CC numbers in the URL string.

Bank of America won't address this (1)

slashdotard (835129) | more than 3 years ago | (#36541198)

When asked about this, Bank of America was unresponsive.

As usual.

Bank of America has had a long history of negligence with regard to online security, ever since they opened up web access to account holders.

I recall that firefox detects this (2)

roguegramma (982660) | more than 3 years ago | (#36504932)

I recall that firefox detects this, so using firefox + firebug and checking the console might tell you if a site is vulnerable.

Re:I recall that firefox detects this (1)

nzac (1822298) | more than 3 years ago | (#36505440)

According to the firebug console my bank is unpatched.

[bank].co.nz : server does not support RFC 5746, see CVE-2009-3555

Method:
Install Firebug and open it to the console window.
Enable and check all outputs except JavaScript warnings (not sure which is the right one) and reload the page.

Re:I recall that firefox detects this (0)

Anonymous Coward | more than 3 years ago | (#36509014)

and thats why firefox>x

Re:I recall that firefox detects this (0)

Anonymous Coward | more than 3 years ago | (#36510294)

What is the point if Firefox does not present this information to a user? This seems to me like a big mistake on Firefox's part. When visiting one of the affected sites, Firefox states "Your connection to this web site is encrypted to prevent eavesdropping", and hidden away in the error console it states "storage.adobe.com : server does not support RFC 5746, see CVE-2009-3555". If this vulnerability allows eavesdropping via a man-in-the-middle attack, then surely Firefox should NOT be stating that the connection is secure.

Re:I recall that firefox detects this (1)

kaiengert (209147) | more than 3 years ago | (#36510588)

As you can see from the list, Firefox would have to warn quite often. The worry is, if you warn too often, your users will ignore it.

However, you are encouraged to change a preference. Use about:config and change security.ssl.treat_unsafe_negotiation_as_broken to true. This way, when you visit an unpatched site, you'll no longer get good security indicators. If you think this is a good idea, you could vote for this at https://bugzilla.mozilla.org/show_bug.cgi?id=665859 (please use the vote feature, and limit comments to interesting thoughts).

If you want to disable all connections to unpatched sites, you can set security.ssl.require_safe_negotiation to true, but unfortunately, as can be seen from the list, you'd lock yourself out of a lot of sites.

Unfortunately, as explained, Firefox cannot easily distinguish between "unpatched and vulnerable" vs. "unpatched but partially hotfixed" vs. "unpatched and completely hotfixed". It's simply not possible when using the old TLS protocol version. Probing the sites would result in a slower experience, and probing cannot produce definitive results anyway.

I believe the right approach would be to present an error page whenever you connect to an unpatched server, but, if absolutely desired, allow the user to add an override, similar to what Firefox currently does with sites that use untrusted certificates. The user interface for such a feature has not yet been implemented in Firefox. The discussion is happening in https://bugzilla.mozilla.org/show_bug.cgi?id=535649 - However, as of today, you can manually get a similar experience by using advanced preferences, combining security.ssl.require_safe_negotiation = true and manually adding override sites to security.ssl.renego_unrestricted_hosts

More detailed explanations can be found at https://wiki.mozilla.org/Security:Renegotiation

Self test? (1)

asdf7890 (1518587) | more than 3 years ago | (#36504950)

Anyone have a link to a simple test script that I could use to check the sites of our suppliers (and to verify our own servers' configurations beforehand)? None of the linked articles mention one that is readily available.

Re:Self test? (0)

Anonymous Coward | more than 3 years ago | (#36505034)

https://www.ssllabs.com/ssldb/analyze.html?d=slashdot.org

This is lookin

Re:Self test? (5, Informative)

Mysteray (713473) | more than 3 years ago | (#36505040)

I like Qualys' SSL Labs [ssllabs.com]

Re:Self test? (0)

Anonymous Coward | more than 3 years ago | (#36505120)

Oh great, my bank, bbt.com, fails wonderfully. .......... :-|

Re:Self test? (1)

Mysteray (713473) | more than 3 years ago | (#36505330)

Email them and ask why they haven't applied the fix for CVE-2009-3555!

Note that "not supporting secure renegotiation" doesn't necessarily mean that the site itself is insecure, it means that the browser is unable to determine if it is or not. The degree to which this is a meaningful distinction is a really interesting discussion.

But it does suggest that they have a really clueless vendor or they haven't applied security patches in a long time.

Re:Self test? (0)

Anonymous Coward | more than 3 years ago | (#36510836)

This is the result when I tested FirstDirect using the site https://www.ssllabs.com/

------------------
SSL Report: www2.firstdirect.com (193.108.77.88)
Assessed on: Tue Jun 21 11:21:20 UTC 2011

Summary
Overall Rating A (85)
Certificate 100
Protocol Support 85
Key Exchange 80
Cipher Strength 90

This server is vulnerable to MITM attacks because it supports renegotiation (more info here).
-----------------

I phoned FirstDirect and spoke to their technical team. They said they would investigate and call back. They have since called back and informed me:

[approximate quote from memory]
"yes they are vulnerable to a man-in-the-middle attack, but they are happy with the security that the Internet banking site provides, but I don't have anything to worry about as I am fully covered under their Internet banking guarantee".

She also pointed out that other banks are also vulnerable to this attack.

Re:Self test? (0)

Anonymous Coward | more than 3 years ago | (#36505320)

And yet, it only supports DHE with certificates that are too small to be used anymore.

Sun's Java crypto library is REALLY creaky.

Re:Self test? (1)

Clueless Moron (548336) | more than 3 years ago | (#36507340)

And if you have a self-signed certificate, you fail with an F no matter how good everything else is.

So once again it's a site implying that unencrypted plain http is somehow better than a TLS connection that happens to be using an unsigned certificate.

And firefox is still raising these big scary alerts about unsigned certs encouraging people to use unencrypted http instead of https. Just goddamned brilliant.

It's the doorlock equivalent of raising a stink that since you don't have a triple deadbolt, just don't lock the door at all.

Re:Self test? (1)

asdf7890 (1518587) | more than 3 years ago | (#36510474)

And if you have a self-signed certificate, you fail with an F no matter how good everything else is.

Because with a self-signed certificate you are vulnerable to MiTM attacks.

So once again it's a site implying that unencrypted plain http is somehow better than a TLS connection that happens to be using an unsigned certificate.

A self-signed certificate gives a false sense of security, so plain http is better. Here is the breakdown:
* Proper signed certificate with verifiable trust chain: protects against active MiTM attacks and passive eavesdropping so the user has all the protection that HTTPS can offer
* Self-signed certificate: will protect from casual eavesdropping but does not protect against MiTM attacks because a MiTM attacker can fake the cert easily
* Plain HTTP: does not claim to protect at all in that way
If your browser allowed a certificate without a verifiable trust chain to pass silently, the user would see no difference, and so would not know that they are missing the protections that HTTPS with a proper trust chain provides. This would not be a GoodThing(tm)
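To make that breakdown concrete, here is a small sketch using Python's ssl module, purely as an illustration (the host name is made up). The first context behaves like a browser that insists on a verifiable trust chain; the second is roughly what silently accepting self-signed certs would amount to:

    import socket, ssl

    strict = ssl.create_default_context()       # verifies the chain and the hostname

    permissive = ssl.create_default_context()   # encrypts, but trusts whoever answers
    permissive.check_hostname = False
    permissive.verify_mode = ssl.CERT_NONE

    def head(host, ctx):
        raw = socket.create_connection((host, 443))
        with ctx.wrap_socket(raw, server_hostname=host) as s:
            s.sendall(b"HEAD / HTTP/1.1\r\nHost: " + host.encode() + b"\r\nConnection: close\r\n\r\n")
            return s.recv(1024)

    # head("self-signed.example", strict) fails the handshake with a certificate
    # verification error; head("self-signed.example", permissive) happily returns
    # data even if a man in the middle supplied the certificate.

With the strict context you get everything HTTPS can offer; with the permissive one you get an encrypted pipe but no idea who is on the other end of it.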

And firefox is still raising these big scary alerts about unsigned certs encouraging people to use unencrypted http instead of https. Just goddamned brilliant.

No. The browser is encouraging sites to use properly signed certificates instead of self-signed ones, instead of trying to pretend to the user that a self-signed cert protects them in the same way a cert with verifiable trust chain does. Sites that refuse to do so are encouraging people to use http instead, not the browser. If you want to use https on your sites, then use it properly and get a proper certificate, they can be obtained at no financial cost (or, if you must have absolutely 100% browser coverage including people who have not installed certain Windows updates under XP+IE, at minimal financial cost).

It's the doorlock equivalent of raising a stink that since you don't have a triple deadbolt, just don't lock the door at all.

No. Allowing the use of a self-signed certificate instead of the hassle of getting one with a verifiable trust chain is the door-lock equivalent of letting the person who installs your doors make a cheap, simple, easily picked lock look like a triple deadbolt, so the user thinks their stuff is protected by a triple deadbolt lock when it is actually protected by something much more flimsy.

If you want to use HTTPS, use it properly (which means having a certificate signed by a CA trusted by the users' browser) or your users will be warned that you are not using it properly. If you really must self-sign in a live environment for some reason, create your own CA key+cert and have your users install that as a trusted CA in their browsers (though this won't wash for a public site (good luck getting everyone to trust you to potentially sign certs for any name!) it is often done in corporate environments).

Re:Self test? (1)

muckracer (1204794) | more than 3 years ago | (#36510628)

> A self-signed certificate gives a false sense of security, so plain http
> is better

[...]

Sorry bro...1995 called and wants its arguments back.

On the surface you sound correct and reasonable. And yet, it's still total nonsense.
Not to be too personal... most of the nonsense is really in the SSL model as used (like trusting people you have no reason to trust) and, by extension, in browsers implementing that messed-up model.

To make it short:
a CA-signed certificate does not protect you from MITM-attacks. Why? Because every TLA (and other folks) worth its salt will have perfectly acceptable CA-signed certs, which they can use to inject themselves into the "trust"-chain, i.e. your session.
Whereas a self-signed certificate does not automatically make you vulnerable to an MITM. It depends entirely on whether the fingerprint of the presented cert is distributed through some other, out-of-band channel. In fact, it can even be more secure if you use it right.

The rest is, well, just as silly. For example, the much-touted "sense-of-security" users have is a complete pipe-dream of developers. IT DOES NOT EXIST!!
We tried "training" users to watch for the lock that tells them that all is well. Now we don't even show them a lock anymore! How's that for consistency? And with the multiple warnings they get every day, your user will not wonder about anything anymore. They will click on whatever it takes to get to the page! Even the wrong one.
In Sum: SSL, as you presented it so well, is a complete failure and needs to be scrapped for some different model.

Re:Self test? (1)

asdf7890 (1518587) | more than 3 years ago | (#36511208)

Not to be too personal....most of the nonsense is really in the SSL model as used (like trusting people you have no reason to trust) and browsers by extension implementing that messed up model.

So your argument is essentially that, since browsers trust by default a number of people you have no particular reason to trust (the current holders of generally accepted CA certs), we should just let go and trust everybody in the whole world? The trust model might be a bit broken, but the answer is not to break it completely and just give up on it without having a suitable replacement.

Whereas a self-signed certificate does not automatically make you vulnerable to an MITM. It depends entirely, whether the fingerprint of the presented cert is distributed through some other, out-of-band, channel. In fact, it can even be more secure if you use it right.

You seem to be proposing the model used by SSH. This falls down on the key exchange issue though. While the browser can check whether the fingerprint of the certificate has changed since the last time I used the site, and warn me that the certificate may be fake because the fingerprint does not match the recorded one, how does that protect me the first time I visit the site? There is no recorded fingerprint for it to compare against unless I've obtained one from some other secure source, so we need to set up some trusted secure source for the initial fingerprints, and we need a way to verify that it can be trusted too.

They will click on whatever it takes to get to the page! Even the wrong one.

Sod them then. If they will ignore every warning, fail to consider any concern we raise, and so on, then there is nothing we can do to help them. Let them be their own worst enemy.

But I don't ignore warnings, I don't say yes to any old thing just to get access to some bit of porn or celebrity gossip, and neither do other sensible people (even non-technical people). Why should we lessen our security, or be made to manually verify each certificate by some other means, in order to remove a warning that the fools ignore anyway? Give them the warning and let them ignore it if they wish, and I'll take what action I feel appropriate (ignoring the warning if, for instance, I signed the cert, or moving on elsewhere if I choose not to trust it).

In Sum: SSL, as you presented it so well, is a complete failure and needs to be scrapped for some different model.

If someone has a better model, and there are problems with the current model as implemented, then they should propose it as a new standard, or develop and demonstrate it themselves and perhaps make licensing money out of the idea. Trusted key/fingerprint exchange is a very difficult nut to crack outside of controlled systems (i.e. on public networks) though, so good luck solving that one. If someone does come up with a new method that genuinely solves the problems of the existing methods without introducing new nasties, then fame (academic fame at least, though perhaps not the "and fortune" sort) awaits them.

Suggesting "the warning messages are inconvenient, remove them" is not proposing a new model. It is removing useful features from the old one in order to remove something you find irritating about the old one.

Re:Self test? (1)

Clueless Moron (548336) | more than 3 years ago | (#36523450)

The MitM vulnerability is only there the first time you connect to a self-signed certificate site. After that you're fine.

With plain http, you are vulnerable every time you connect to the site. They don't even need MitM; they can just sniff the session. This is absolutely terrible.

Folks like me who run small sites use self-signed certs all the time because I don't want to pay the extortion fees of the CA keepers, who seem to dole out certificates without really checking anyway.

Self signed certs are not ideal, but they are definitely better than plain http. All I ask is that the browsers don't make quite such a big fuss about them. They could simply say "This connection is using weak encryption. No bank or large institution would do this. [Ok this time] [Cancel] [Ok forever]" instead of the big fuss that Firefox currently does. That message isn't entirely accurate, but it more or less explains it to a normal user. If it's just some free webmail site (like mine), that's fine.

To put it another way, for your argument to be consistent every plain http connection should start with a big "Warning! This connection is unsecured!" dialog. The way things are now, users think self-signed https connections are less secure than plain http connections which is ridiculous.

Re:Self test? (1)

asdf7890 (1518587) | more than 3 years ago | (#36533242)

The MitM vulnerability is only there the first time you connect to a self-signed certificate site. After that you're fine.

Which is why Firefox's behaviour is correct. It cannot verify the certificate against any of the CA certs it has been told to trust the first time it sees it, so it warns you that it has no idea whether the cert is valid or whether you are about to connect to EvilCorp pretending to be GoodieTwoShoes Inc. You can then inspect the certificate, confirm it is genuine using information gleaned from a known secure source, and add a permanent exception for it so you won't be bothered when connecting to that site in future.

Not accepting self-signed certificates automatically has nothing to do with protecting against an MitM affecting a site with a self-signed cert - it is about protecting sites with properly signed certs (see below).

With plain http, you are vulnerable every time you connect to the site. They don't even need MitM; they can just sniff the session. This is absolutely terrible.

Correct, but accepting self-signed certificates without warning is even worse, because it allows a MitM to pretend to be a site with a properly signed cert on first connection and the user will never know. If your browser connects to https://securebanking.ltd/ [securebanking.ltd] and gets a self-signed certificate from a MitM instead of the one the site actually has signed by a "trusted" CA, how will it know the difference between this malicious self-signed certificate and a "genuine" new one that the site has installed? You could suggest that the browser warns when a certificate changes (or when its type changes from "verifiable using trusted CA public keys" to not), but that would still leave sites with properly signed certificates vulnerable to MitM attacks on first connection (and, once a malicious self-signed cert is accepted, on every connection after that). I for one would much rather keep what trust is enabled by a properly signed certificate rather than lose it just because the checks and warnings are inconvenient for sites that want to self-sign.

Folks like me who run small sites use self-signed certs all the time because I don't want to pay the extortion fees of the CA keepers, who seem to dole out certificates without really checking anyway.

Folks like you could (well, you can't at the moment as they are down due to a security breach, but that is a different discussion!) use the free certs from startssl.com, which are accepted by almost all common browsers (the exception being a small proportion of the people that use IE6/7/8 under Windows XP - those that have not installed the root cert update patches from Windows Update, which are present in Vista+ by default). Or you could get a cert for $9.99 from a number of places (or slightly cheaper by abusing certain "free SSL cert with a new domain name" offers). Is $9.99/yr really that extortionate to protect the information being read from or submitted to your services?!

If your audience is so small that $9.99, or the minor hassle of registering with startssl and using their control panel to generate a cert, is too much in exchange for protecting them/you with a trusted signed certificate, then your audience is small enough that you can ask them all to install your CA cert into their trusted list, and then anything you sign with it is fine.

With regard to "don't check anything anyway": they at the very least verify that the person requesting the certificate has access to one of certain admin accounts associated with the domain in question (for higher assurance certs other checks are made, though as we are discussing the difference between cheap/free properly signed certs and self-signed ones, higher assurance certs and the checks they require are probably not relevant).

Self signed certs are not ideal, but they are definitely better than plain http.

Yes. But being better than plain http in some cases does not mean that they are good enough to warrant breaking the extra security offered by a certificate signed by a trusted CA just to make using them more convenient.

All I ask is that the browsers don't make quite such a big fuss about them. They could simply say "This connection is using weak encryption. No bank or large institution would do this. [Ok this time] [Cancel] [Ok forever]" instead of the big fuss that Firefox currently does. That message isn't entirely accurate, but it more or less explains it to a normal user. If it's just some free webmail site (like mine), that's fine.

Unfortunately users are thick for the most part and would just blindly click OK to such a mild warning without even thinking, and if they clicked OK to the mild warning and it turned out it wasn't a valid cert and they had just sent their info to a nefarious group, they would blame the browser (or their intended destination site) for any repercussions rather than themselves. A stark in-your-face warning is the only way to get through to many people in cases where there may be a threat, and because the browser has no idea whether a given signed-by-an-untrusted-CA cert is a threat or not, it has to give the stark warning in both cases for the sake of safety.

To put it another way, for your argument to be consistent every plain http connection should start with a big "Warning! This connection is unsecured!" dialogue. The way things are now, users think self-signed https connections are less secure than plain http connections which is ridiculous.

Self-signed https is no safer than plain http unless the user has verified the certificate by some other means at least once, perhaps by having your CA cert in their trusted list (if they trust you enough, as doing that would potentially allow you to MitM Google or their bank) or by verifying the certificate's fingerprint against information you have transmitted to them by some other secure method. That is what the stark warning is about. The browser is saying "I have no reason to automatically trust this cert so it could be very dodgy, please verify it yourself or back away". Just clicking "OK" in response to a mild warning does not constitute verifying the validity of the certificate.

Making it easier to use self-signed certs generally is not the answer as it reduces the security of signed-by-trusted-CAs certs.

Re:Self test? (1)

Clueless Moron (548336) | more than 3 years ago | (#36537234)

Well I suppose I should look on the bright side.

Since browsers make such a fuss about self-signed certs, all webserver installations default to plain http. They could generate a self-signed cert on install and serve https by default (redirecting http to https), but this is unworkable thanks to browsers treating self-signed worse than unencrypted.

Thanks to this, I can sniff my LAN and haul in gobs of login/password combinations (like from slashdot.org, which doesn't support https), since the vast majority of websites use plain http because it takes an effort to use https.

I don't have to get frustrated by the encrypted self-signed connections that I can't even MitM, because invariably they've already used them previously outside my LAN and so any MitM attempt I try will throw up huge screaming warnings in the browser.

So ok, have it your way. Things are actually pretty awesome this way. It should take effort to have encryption, because everybody sets up fancy MitM systems but absolutely nobody sniffs LANs with wireshark.

Re:Self test? (1)

asdf7890 (1518587) | more than 3 years ago | (#36552808)

Since browsers make such a fuss about self-signed certs, all webserver installations default to plain http. They could generate a self-signed cert on install and serve https by default (redirecting http to https), but this is unworkable thanks to browsers treating self-signed worse than unencrypted.

Web servers do not default to HTTPS because self-signed certs are (rightly) not convenient to use in the wild. Until about five or six years ago (before people realised how easily properly certificated sites could be MitMed that way if you manage a DNS poisoning attack first) browsers didn't make the big fuss, and HTTPS-with-self-signed-certs still wasn't done by default then. They default to HTTP because HTTPS is not needed for most content. Back in the day when I were nowt but knee 'igh t' grass'opper the extra processing was expensive CPU-wise, and now, even though CPUs are at the point where the extra computation server-side is neither here nor there (a modern CPU can saturate any line you have it throw content down; if you can afford a line faster than your current CPUs can cope with, you can afford better CPUs too), I still would not recommend HTTPS for all content. It adds latency (Google have done some good work reducing this, but that hasn't been implemented elsewhere much that I know of), which while insignificant with a fast round-trip time is quite noticeable if you are on a mobile or satellite link; it removes the possibility of transport-level compression (which can be a real boon on that mobile link in an area where the best signal you can get is basic GPRS); it removes a lot of caching potential (which increases the load on servers and their bandwidth); and so forth.

Also, your automatic self-signed certificates would have no revocation method (as their nature makes this impossible to set up in a way that isn't easily DoSed), so if they are set to not expire for 10 years then, if your certificate is somehow compromised, the black-hats have a valid certificate (that your users have told their browsers to trust) that they can potentially use for some time to come. If the self-signed certificates have a decent expiry time (and it would need to be lower than the year that is common for trusted-CA-signed certs, to account for the can't-be-revoked problem) then your users are going to get the honking "certificate changed! you could be under attack!" message every time your certificate needs to roll over. (Though one caveat there is that I don't think revocation is properly implemented in many clients as it currently stands anyway...)

Thanks to this, I can sniff my LAN and haul in gobs of login/password combination (like from slashdot.org, which doesn't support https), since the vast majority of websites use plain http since it takes an effort to use https.

If you are going to sniff traffic on your LAN then you might well try to MitM self-signed SSL sites. If I don't trust your network for sending particular content over plain HTTP then I don't trust your network for sending the same content via a self-signed site. If I don't care about that traffic going via self-signed HTTPS (unless I've personally verified the certificate by other means) then I'm probably happy for it to go over plain HTTP too.

I don't have to get frustrated by the encrypted self-signed connections that I can't even MitM, because invariably they've already used them previously outside my LAN and so any MitM attempt I try will throw up huge screaming warnings in the browser.

Only if they are using their own machine on your LAN, or are using the same machine they've always used (not switching machine or browser without transferring certificate stores), or not using the site for the first time. While there are other reasons to distrust machines that aren't your own (physical key loggers and such) that doesn't mean we should ignore the self-signed-cert problem because there are other ways to capture the data.

So ok, have it your way.

Thanks. We will.

Things are actually pretty awesome this way. It should take effort to have encryption, because everybody sets up fancy MitM systems but absolutely nobody sniffs LANs with wireshark.

As more and more information of greater importance (or at least greater scale) is accessed over the Internet, the black hats are getting more and more sophisticated. Just because MitM attacks on self-signed HTTPS have been rare until now does not mean they will be forever. If the easy pickings from plain http being used when it shouldn't went away because everything used self-signed HTTPS, you can bet your bottom dollar that the nefarious types would start targeting the self-signed with a vengeance. There have been high-profile DNS poisoning attacks in recent years; they only went for http traffic because that low-hanging fruit was so abundant (and for the most part they were just injecting paid ads and malware rather than trying to sniff authentication credentials or bank details directly), but that would still be a usable injection route for such an attack if your DNS is vulnerable due to some newly found bug.

Oh, and if an attacker gets into your wired network (or your ISP's) to the point where they can run wireshark on a router (because on a switched network you can't just use promiscuous mode to look at traffic to/from other nodes as you can on a hub-based network like wireless, so you need to be into a machine that the traffic normally passes through) you are in a position to try to MitM bad SSL sites. It might be a little harder (than just running wireshark) to do first time around, but once done the exploit will be packaged up and sold then eventually leak out more generally so every script kiddie can give it a go.

Self-signed HTTPS is not the answer. It would at best make things a little harder for the black-hats. The answer is two-fold:
1. User education: things like not giving important data to any old site, not using the same password for everything they do, not clicking OK/accept on anything just because they want the cute cat photo or video of a famous boob, and so forth.
2. Site owners taking responsibility: if you want to hold data that could be valuable enough to be sniffed, pony up for a cheap cert (or a free one, though as startssl are currently down and there are no other CAs offering free certs against a commonly trusted root, that won't be possible for a short while). Not having a properly signed certificate, unless you have arranged some way for your users to securely verify the certificate and have some revocation method in place in case the cert itself is compromised, is worse than storing passwords in plain text.

Re:Self test? (1)

Clueless Moron (548336) | more than 3 years ago | (#36560798)

Gah, what a lot of verbiage. Please let's keep this brief

Here is a typical and realistic scenario instead: You connect to my site, and being the paranoid type you prefer https. You get a self-signed cert. Would you grumpily accept it or would you go to http? I'd imagine the former.

Next, you bring your laptop to a Starbucks, where for all you know some bastard has stuck a hub and plug computer on the router when it was all installed and the proprietor has no idea. You want to use my site again. Would you use http or https?

Consider that with http you are guaranteed to be sniffed without even knowing it. With https you cannot be, and if they got fancy and tried a MitM the browser would raise a stink.

This scenario faces me all the time. So I'm quite happy with the self signed cert. If I've missed something big, please do tell

Re:Self test? (1)

asdf7890 (1518587) | more than 3 years ago | (#36572342)

Here is a typical and realistic scenario instead: You connect to my site, and being the paranoid type you prefer https. You get a self-signed cert. Would you grumpily accept it or would you go to http? I'd imagine the former.

Neither. If I didn't care who might snoop what I'm about to read/post I might go with either http or let the self-signed cert in (though just this once, not as a permanent exception), though obviously not for the webmail example you quoted above as that would involve a username+password and other content I might care about. If I did care about the info and if (and only if) I was expecting a self-signed certificate and had a fingerprint from a secure source to check against, or some other way to verify the cert, then I might let the cert stand. I might even add a permanent exception. If I did care about the info and didn't have a way to verify the cert I would be on my way. If I had paid anything for your service I would consider it unfit for purpose and be quite irritated.

Next, you bring your laptop to a Starbucks, where for all you know some bastard has stuck a hub and plug computer on the router when it was all installed and the proprietor has no idea. You want to use my site again. Would you use http or https?

In the unlikely event that I had previously added a permanent (until expiry) exception for the current certificate https would not be an issue. If I didn't care about what I'm about to post (including any auth credentials and session keys) or read then either https/http would do. If I did care about the info/credentials in any way and I didn't have an exception set (or the site didn't have a properly signed cert) then I would simply not use the service on that occasion. And if your service is inconvenient to me for this reason I would make a point of finding an alternative.

Consider that with http you are guaranteed to be sniffed without even knowing it.

Actually not, as I would not let anything I cared about (authentication credentials, mail, even just session keys (sites that take logins via https then switch back to http really are showing they don't have the first clue) and so forth) go over http, and no one should be encouraged to do otherwise. It shouldn't be a question of "secure isn't working right, I'll use insecure"; it should be "secure isn't working right, oh well, I'll have to do something else".

Actually not for another reason too: I never use public wireless without all my traffic going through a VPN (though that is not relevant to this discussion as most people-on-the-street would not have, nor care to have, such an arrangement).

This scenario faces me all the time. So I'm quite happy with the self signed cert. If I've missed something big, please do tell

I've done tell. More than once. You've asked for brevity so I'll not repeat myself again.

I'm intrigued as to how you face this situation regularly.

Unless you are talking about certificates that you have signed yourself or properly verified against a fingerprint obtained by other, secure, means rather than just clicking OK then fine. But otherwise you should not be accepting self-signed certificates for any https communication that you care about and you certainly shouldn't encourage other people to do so because even if it is OK once it might not be later and after you've shown some people how to accept one self-signed certificate many will just accept any they get from anywhere in future. To top it off, they'll blame you if anything untoward does happen to them after they accepted a self-signed cert from what appeared to be their bank - because you told them accepting self-signed certificates was fine (I'm currently having memories of trying to explain to a friend why that keylogging search toolbar should not have been allowed in and not getting through to him that "Peter saying it was OK when we installed the AVG one" was not relevant to every site or app offering to install a toolbar).

Also, if you are running a service for which you think transport-level encryption is needed/recommended, why are you also offering an in-the-plain access method anyway?

If you are getting self-signed certificates from services you pay for or provide content for or, well, care about in the slightest, then you should encourage (by saying things like "sort it or I'll go elsewhere") those services to get serious about security if they are going to bother at all. An SSL cert is not expensive. An SSL cert is not difficult to obtain or install (if someone finds it difficult, then I'm not comfortable trusting any service running on their kit anyway). Self-signed certificates were never intended to be used on live sites, and should not be used on live sites unless all the users who you expect to accept the certificate know that they should properly verify the fingerprint.

Re:Self test? (1)

Clueless Moron (548336) | more than 3 years ago | (#36575872)

I will repeat myself:

Self signed certs are not ideal, but they are definitely better than plain http. All I ask is that the browsers don't make quite such a big fuss about them. They could simply say "This connection is using weak encryption. No bank or large institution would do this. [Ok this time] [Cancel] [Ok forever]" instead of the big fuss that Firefox currently does. That message isn't entirely accurate, but it more or less explains it to a normal user. If it's just some free webmail site (like mine), that's fine.

The above is all I want. To summarize:

Signed TLS is terrific.

Self-signed TLS is less so.

Plain http is terrible.

My entire complaint is that browsers are currently not reflecting this. They are reversing the last two. If you maintain that plain http is equal to or better than self-signed TLS then we have nothing more to discuss.

Re:Self test? (1)

Billlagr (931034) | more than 3 years ago | (#36509050)

Interesting... My bank is marked down because it supports both patched and unpatched renegotiation. Really, there's no need to support the unpatched one.

Re:Self test? (4, Informative)

caljorden (166413) | more than 3 years ago | (#36505370)

I spent a few minutes looking for the same thing, and found that Firefox includes a check. If you visit an HTTPS site that is not secure, you will get a message in the Error Console under Messages saying something like this:

site.example.com : server does not support RFC 5746, see CVE-2009-3555

For more information, see https://wiki.mozilla.org/Security:Renegotiation [mozilla.org]
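If you would rather script the check than watch the error console, here is a minimal sketch in Python that just shells out to the openssl binary (assumed to be on PATH; a recent-enough s_client reports the RFC 5746 status in its handshake summary). The host names are only placeholders:

    import subprocess

    def supports_rfc5746(host, port=443):
        # s_client prints "Secure Renegotiation IS supported" when the server
        # offers the RFC 5746 extension, and "... IS NOT supported" otherwise.
        proc = subprocess.run(
            ["openssl", "s_client", "-connect", "%s:%d" % (host, port)],
            input=b"", capture_output=True, timeout=30,
        )
        return b"Secure Renegotiation IS supported" in proc.stdout

    for site in ("www.example.com", "www.example.net"):
        print(site, "patched" if supports_rfc5746(site) else "unpatched or unreachable")

Note this only tells you whether the server advertises RFC 5746; it says nothing about whether an unpatched server has at least had the renegotiation-disabling hotfix applied.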

Re:Self test? (0)

Anonymous Coward | more than 3 years ago | (#36506844)

Also, http://netsekure.org/2009/11/tls-renegotiation-test/

Not as surprising as it should be (4, Insightful)

Tarlus (1000874) | more than 3 years ago | (#36505000)

Unfortunately, 16 months later, many major websites, including several ones that deal with real world transactions of goods and money, still haven't upgraded their systems. Even worse, for a big portion of those sites it can be shown that their operators failed to apply the essential configuration hotfix.

Lately we've also been finding out that many major websites are storing passwords as plain text and are untested against SQL injection. So it's unsurprising that they're also unpatched.

Web servers need to be actively watched, maintained and scanned for vulnerabilities. Just because it's a LAMP server doesn't mean it's rock-solid. The fire-and-forget philosophy does not apply.

Re:Not as surprising as it should be (2)

Hyppy (74366) | more than 3 years ago | (#36505060)

Lately we've also been finding out that many major websites are storing passwords as plain text and are untested against SQL injection. So it's unsurprising that they're also unpatched.

Web servers need to be actively watched, maintained and scanned for vulnerabilities. Just because it's a LAMP server doesn't mean it's rock-solid. The fire-and-forget philosophy does not apply.

The problem is generally far beyond the necessary LAMP or IIS patching: The vulnerabilities you describe are flaws in the site's design and code. You can't patch a stupid divaloper.

Re:Not as surprising as it should be (4, Funny)

Anonymous Coward | more than 3 years ago | (#36505196)

You can't patch a stupid divaloper.

Diva-loper:
1. (n) A portmanteau [wikipedia.org] of diva and interloper. It describes a software developer (or programmer) who believes themselves to be excellent at their craft, while an independent review of their developed code will demonstrate that the person has no business touching a computer.
2. (n) A singer (diva) who gets married in Vegas (elopes).

Re:Not as surprising as it should be (0)

Anonymous Coward | more than 3 years ago | (#36505314)

lol

Re:Not as surprising as it should be (3, Insightful)

Anonymous Coward | more than 3 years ago | (#36505512)

Well, that depends. In some orgs the developers show up, build the system, and then it is up to operations to keep it going. Or it's contracted out. Or some 3rd party 'handles it'. Some are afraid to change anything. Others have stacks of legal hurdles to jump thru. Others have SLAs they have to keep up with...

A developer who saw it would say 'yeah, just fix and reboot'. But the IT org would say 'uh, we have about 2 mins of downtime per year; it will happen in six months.'

Re:Not as surprising as it should be (1)

dgatwood (11270) | more than 3 years ago | (#36506876)

The competent IT guy would say, "Okay. Let's take one machine temporarily off of the load balancer's list. Patch that one up. We'll reintroduce it into the farm when you're done. If there are no problems with it, we'll update the remaining machines a few at a time."

Don't get me wrong, that doesn't always work. For example, it's a lot harder if you're trying to introduce changes that affect the actual client content (e.g. JavaScript fixes). For everything else, there's MasterCard... or something....

Re:Not as surprising as it should be (1)

Sproggit (18426) | more than 3 years ago | (#36510778)

Actually, you're referring to an IT guy with a competent Manager / Director, who would have ensured that a decent load balancing / ADC solution was in place...
Now that I think about it, if they used a decent load-balancer / ADC, they would probably be doing SSL termination on the device, so the higher-up would have to be even more competent, and have ensured the devices were purchased / installed in a fail-over pair, with connection mirroring and persistence mirroring enabled, meaning that the standby device can get patched, a seamless fail-over done, everything double-checked, the newly standby box patched, and then fail back or let it run, as per fail-over preference policy...

The Sproggg

Re:Not as surprising as it should be (3, Insightful)

morcego (260031) | more than 3 years ago | (#36505596)

Besides the obvious "stupid divaloper" joke, which I will refrain from making, I agree the problem is much bigger.

It is what I call the "Windows Mentality". "So simple anyone can do it" is another way of stating it.

Companies (Microsoft is a leader on this) sell their software as something extremely simple, that anyone can install, run and maintain. And, for someone who doesn't understand it (a good number of managers, CEOs and directors), it actually looks that simple. Well, guess again. It is not. I'm sorry, but your 17-year-old intern (hourly rate = 1 candle bar) can't install and maintain a good server. You need to have someone who actually knows what he is doing, and has the experience and the knowledge to do it well. Oh? Too expensive, is it? Really? I suppose you take your Porsche to be serviced by that tattooed guy at the gas station too?

Nope, sorry. There are no simple servers. Windows, Linux (LAMP or otherwise). They all require skilled admins, skilled coders and skilled designers. And those cost money. They require regular and constant maintenance. In other words: money.

That is the real problem. Most companies are just cheap.

Re:Not as surprising as it should be (2)

Mysteray (713473) | more than 3 years ago | (#36505836)

For the record, Microsoft pushed out (via Windows Update) a patch fully implementing the fix for this well before many other vendors (including some popular Linux distros) did, even though their server (IIS) wasn't nearly as vulnerable in its default configuration as Apache+OpenSSL.

Re:Not as surprising as it should be (1)

Hyppy (74366) | more than 3 years ago | (#36505928)

You have obviously not come across the special breed of divalopers that we like to call Updatus Avoidus. Above and beyond the lovable characteristics of your run-of-the-mill divaloper, the Updatus Avoidus can be identified by its shrill cries that often sound like "Don't patch! *squaaaak* My code will break! *squaaaaak*"

Re:Not as surprising as it should be (0)

Anonymous Coward | more than 3 years ago | (#36507004)

Oh Gawd I actually had colleagues like that! I tried to secretly install an innocent-looking Windows update on the worst offender's box once and it fucked up his screen resolution. HOW IS THAT POSSIBLE WITHOUT MESSING WITH ABSOLUTELY EVERYTHING IN THE SYSTEM.

Re:Not as surprising as it should be (0)

Anonymous Coward | more than 3 years ago | (#36507542)

Considering what little backlash there is to the companies that have poorly-maintained servers, compared to sheer HATRED of the ones that actually exploit the servers (see: any discussion on Lulzsec), how is it surprising that companies wouldn't give a shit about the security of their servers? They can usually get away with an "oops, lol" and reassurance that THIS TIME the problem will be fixed.

God, capitalism is such a joke.

Re:Not as surprising as it should be (1)

arglebargle_xiv (2212710) | more than 3 years ago | (#36509276)

your 17-year-old intern (hourly rate = 1 candle bar)

I think that's your problem; we pay our divalopers 3 candles an hour and we never have any problems (or lack of illumination, for that matter).

nom nom (1)

ThatsNotPudding (1045640) | more than 3 years ago | (#36510726)

stupid divalopers only eat candle bars.

Re:Not as surprising as it should be (2)

jd (1658) | more than 3 years ago | (#36506282)

Sufficient duct tape should patch the developers just fine.

Re:Not as surprising as it should be (0)

Anonymous Coward | more than 3 years ago | (#36505266)

There isn't a gold standard for security that all organizations can agree on or implement, nor would they. To any business, regulatory compliance is regarded as a cost. It just so happens that computers are unique in that you can't see past the interface and there's no way to audit for compliance. Heck, even if we could, there'd be no practical way to audit that many lines of code globally. End result: toothless regulation that no one adheres to, and cost-saving at the expense of the customer, which we can later write off as an externality. Hence we'll be seeing plaintext passwords, stored CCs and SSNs for a long time to come, I think...

Interesting social characteristic: when faced with the option of correcting bad behavior or simply hiding it, we always seem to choose the latter.

Re:Not as surprising as it should be (1)

jd (1658) | more than 3 years ago | (#36506302)

Shouldn't be hard. All that's needed is a Klokwork@Home project.

Fire and forget... (1)

Kamiza Ikioi (893310) | more than 3 years ago | (#36505574)

I hear that's also how many companies deal with the developers of unpatched code... fire them, and forget about the code they wrote. I hear NASA even has that problem, often not even having the code or design of outdated systems. I wonder what the ratio of unpatched-but-fixable to unpatched-and-unknown is.

Re:Not as surprising as it should be (1)

eigenstates (1364441) | more than 3 years ago | (#36505864)

Doesn't this all eerily fit in with the slashdot story about the hating of IT and all the Anon 'hacking' activity lately? Or should I just keep my mouth shut?

Re:Not as surprising as it should be (1)

Mysteray (713473) | more than 3 years ago | (#36505948)

Yes, the overall security research community has greatly benefited from some of these large password database disclosures. We've learned a lot about password handling practices both on the back-end (unsalted MD5, or bcrypt?) and among users (password crackability). In fact, there has been enough overlap in the user bases of the breached sites that we can start to look at things like how common password re-use is across multiple sites.
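Purely as an illustration of that back-end difference (not tied to any particular breached site, and assuming the third-party bcrypt package is installed): an unsalted MD5 digest is identical for every user with the same password and is trivially looked up in precomputed tables, while bcrypt bakes in a per-password salt and a tunable work factor.

    import hashlib
    import bcrypt  # third-party: pip install bcrypt

    password = b"hunter2"
    unsalted = hashlib.md5(password).hexdigest()        # same output everywhere it's used
    salted = bcrypt.hashpw(password, bcrypt.gensalt())  # different output on every call
    assert bcrypt.checkpw(password, salted)             # verification still works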

Re:Not as surprising as it should be (1)

eigenstates (1364441) | more than 3 years ago | (#36506420)

My point is that if you walked into Mr. Pointyhead's office and started rattling off things like MD5, plain text or ROT-13, eyes would just roll back and he would grumble about how much he hates IT and that it should just be stuffed into the Cloud, because the Cloud just works and is safe and makes toast and answers help desk support calls and installs the latest patches and and and... All while saving "8 billion dollars" of salary for those horrible IT people who just sit there and complain all the time and put up resistance to his brilliant company-saving 1-week implementation schedule listed on the micro-transaction slide in his PowerPoint presentation.

Or to say it another way, the patch for the web server and code that we read about here only appears to him as downtime and dollars spent on surly cave dwellers - an expensive line item for the unwashed that he will be forced to comprehend.

And this is the same guy who will have moved on to some other department when the DB containing all the credit cards and passwords ends up on a tweet and the fingers get pointed at the surly cave dwellers who have been telling Pointyhead to prioritize the patch over his ill-thought-through bullet points.

Sorry. These stories really touch a nerve with me.

Re:Not as surprising as it should be (1)

turbidostato (878842) | more than 3 years ago | (#36507104)

"And this is the same guy who will have moved on to some other department when the DB containing all the credit cards and passwords ends up on a tweet and the fingers get pointed at the surly cave dwllers who have been telling Pointyhead to prioritize the patch over his ill thought through bullet points."

Not even that. It might even happen that the PHB is still there, but then it was the turrists and those bad-smelling freaks from IT and, after all, an unavoidable stroke of bad luck, not his fault. On the other hand, shaving 0.25% off profits two quarters in a row because of that security nonsense from the bad-smelling guys in IT, *that* is a *rrrreal* loss that *will* get him fired.

Now, think about it again. If you were the PHB in question, what would you choose? The option that will surely get you fired, or the one that may never happen and that you'll probably be able to deflect if it ends up happening?

Re:Not as surprising as it should be (0)

Anonymous Coward | more than 3 years ago | (#36506836)

I'm sure Florian Mueller will be along shortly....

Re:Not as surprising as it should be (2)

Gaygirlie (1657131) | more than 3 years ago | (#36506114)

I'm somehow not at all surprised to see Adobe, Microsoft, Apple and HP servers marked as BAD on that list. What is surprising, however, is the low number of sites marked as GOOD. Don't admins follow IT security news, are they only given 2 minutes a year to restart the server and/or services, or are they just incompetent?

Re:Not as surprising as it should be (1)

turbidostato (878842) | more than 3 years ago | (#36507142)

"or are they just incompetent?"

Usually admins, and the bigger the company, the truer this is, don't admin but operate. You'll have to look higher in the food chain to find the culprit.

Re:Not as surprising as it should be (1)

L-four (2071120) | more than 3 years ago | (#36525574)

Also, a lot of the time you're unable to "restart the server" because it's always in use, and if you were to restart it, even with notice, you would get like 50 phone calls saying the server has gone down. So it's easier and less time-consuming to just let the servers keep their 600+ day uptime.

Re:Not as surprising as it should be (0)

Anonymous Coward | more than 3 years ago | (#36506176)

I personally think the world of security is still stuck in "fire and forget" mode. Until we, as a community, accept that fire and forget does not provide security, we will be at the mercy of the attack.

A warning against banner-grabbing vuln scans (0)

Anonymous Coward | more than 3 years ago | (#36505154)

Just because my version of Apache & mod_ssl does not have the most up-to-date version number does not mean that I do not have a patch in place. Red Hat and variant systems that employ security patch backporting suffer from false positives. All. The. Time.

Since the list is currently slashdotted, I am unable to determine if these sites are suffering from an ACTUAL failure to patch or a potential false positive.

The lesson here is to be prudent in both your patching and your reporting.

Re:A warning against banner-grabbing vuln scans (1)

berashith (222128) | more than 3 years ago | (#36505294)

and you can also just turn the banners off to avoid the false positives. This does not mean that you are not vulnerable.

Re:A warning against banner-grabbing vuln scans (3, Interesting)

dgatwood (11270) | more than 3 years ago | (#36507002)

According to the page in question, the test methodology is:

1. Connect with RFC 5746. Upon success, mark as "Good".

2. If the connection fails, the site is not patched. Send a request "HEAD / HTTP/1.1" without using RFC 5746, and save the response for later comparison.

3. Run the actual test. This basically does the following:

  • Connect.
  • Send part of the request.
  • Renegotiate.
  • Send the rest of the request
  • Check to see if the status code matches the status code from step 2.

If a site has been updated with a partial hotfix, it should either disconnect or send an HTTP status code in the 400s. These sites are flagged as "Uncertain" because the server might initiate a renegotiation. If the site sends data after the second handshake and the response code is the same as the response code from step #2, it is marked as BAD.
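For anyone who wants to reproduce step 3 by hand, here is a rough sketch using the third-party pyOpenSSL package, which (unlike Python's standard ssl module) exposes client-initiated renegotiation. The host name is a placeholder, error handling is minimal, and depending on your local OpenSSL build the client itself may refuse to perform the legacy renegotiation:

    import socket
    from OpenSSL import SSL  # third-party pyOpenSSL

    def probe(host, port=443):
        ctx = SSL.Context(SSL.TLSv1_METHOD)
        conn = SSL.Connection(ctx, socket.create_connection((host, port)))
        conn.set_connect_state()
        conn.do_handshake()
        # Send part of the request...
        conn.send(b"HEAD / HTTP/1.1\r\n")
        # ...renegotiate in the middle of it...
        conn.renegotiate()
        conn.do_handshake()
        # ...then send the rest and see whether the server still answers it as one
        # request (BAD) or tears the connection down / errors out (hotfixed).
        conn.send(b"Host: " + host.encode() + b"\r\nConnection: close\r\n\r\n")
        try:
            return conn.recv(4096)
        except SSL.Error:
            return None

A full reimplementation would also compare the status code against a renegotiation-free baseline request, as described in step 2 above.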

Discuss.

Could this be why StartSSL is down? (1)

zero0ne (1309517) | more than 3 years ago | (#36505200)

Is this why StartSSL is down as seen here? [startssl.com]

I am wondering if this is why one of my sites is now showing the "untrusted site" screen in firefox?

Error code:
blah.com uses an invalid security certificate.
The certificate is not trusted because no issuer chain was provided.
(Error code: sec_error_unknown_issuer)

Re:Could this be why StartSSL is down? (1)

zero0ne (1309517) | more than 3 years ago | (#36505348)

Note: in Chrome I get the green https, so I am confident that it is installed correctly.
(and it was fine in firefox a few weeks ago)

Re:Could this be why StartSSL is down? (1)

PetiePooo (606423) | more than 3 years ago | (#36505364)

Not likely, and no.

StartSSL says they're down due to a security breach, not an inability to patch their servers. It's possible the attackers used the vulnerability mentioned here to breach the site, but that's a stretch. It could have been one of many other vulnerabilities.

And the untrusted site error you're receiving is due to a certificate problem at the site you're visiting. Blah.com's certificate is possibly self-signed or the issuer's certificate was revoked. Or their certificate is simply created wrong. In any case, the site certificate is not maintained and tested properly.

What this thread is discussing is an underlying protocol vulnerability. You're not generally going to be able to correlate things like that to specific website errors.

Re:Could this be why StartSSL is down? (1)

heypete (60671) | more than 3 years ago | (#36506254)

Is your server configured properly to send both the server certificate *and* the intermediate certificate?

Some browsers are more tolerant of such misconfigurations, and may be able to acquire the appropriate intermediate through a separate channel (e.g. Chrome and IE on Windows can often get certs from Microsoft Update in the background), while others are less tolerant.
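
If you want to check what your server actually hands out, one approach is to count the certificates in the chain as delivered, e.g. via `openssl s_client -showcerts`. A rough sketch follows (the hostname is a placeholder); a count of 1 for a CA-issued certificate usually means the intermediate is missing and stricter clients will balk:

    # Rough sketch: how many certificates does the server actually send?
    # `openssl s_client -showcerts` prints every certificate in the chain as
    # delivered by the server, each delimited by BEGIN/END CERTIFICATE lines.
    import subprocess

    def served_chain_length(host, port=443):
        proc = subprocess.run(
            ["openssl", "s_client", "-connect", f"{host}:{port}", "-showcerts"],
            input="", capture_output=True, text=True, timeout=15,
        )
        return proc.stdout.count("BEGIN CERTIFICATE")

    n = served_chain_length("www.example.com")
    print(n, "certificate(s) sent", "- intermediate likely missing" if n < 2 else "")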

Re:Could this be why StartSSL is down? (1)

Nursie (632944) | more than 3 years ago | (#36508930)

They've been hacked. Not through this (probably).

It's another example of the broken PKI that we have on the web. There aren't many real bugs in the SSL/TLS protocol family any more (clearly the one in TFA should be patched in more places) but the infrastructure of "trusted" authorities used by the web is b0rked, IMHO.

Too many authorities, too many unknowns.

Unexploitable vuln? (1)

Tubal-Cain (1289912) | more than 3 years ago | (#36505242)

In November 2009 a Man-In-the-Middle vulnerability for SSL/TLS/https was made public (CVE-2009-3555), and shortly afterwards demonstrated to be exploitable.

Isn't a vulnerability, by definition, exploitable?

Re:Unexploitable vuln? (1)

roothog (635998) | more than 3 years ago | (#36505374)

In November 2009 a Man-In-the-Middle vulnerability for SSL/TLS/https was made public (CVE-2009-3555), and shortly afterwards demonstrated to be exploitable.

Isn't a vulnerability, by definition, exploitable?

"Demonstrated to be exploitable" means "actually wrote the exploit, and it worked". It's a step beyond simply thinking about it hypothetically.

Re:Unexploitable vuln? (1)

Mysteray (713473) | more than 3 years ago | (#36505672)

I'd published packet captures of the exploit in action as part of the initial disclosure. Someone else had working exploit code posted to [Full-Disclosure] within hours.

Re:Unexploitable vuln? (3, Interesting)

Calos (2281322) | more than 3 years ago | (#36505428)

Interesting question. I guess you could argue that a theoretical shortcoming isn't a vulnerability if there's no practical exploit.

But that ignores the temporal part of it. It's only not a vulnerability because it's not practically exploitable right now. Things change, technology changes, and new avenues for attacking the shortcoming open up.

It's like the recently proven exploit we saw a few days ago against a quantum message transfer. The method had been theorized but never demonstrated. Now that it has been shown, it can be taken more seriously.

Re:Unexploitable vuln? (1)

OverlordQ (264228) | more than 3 years ago | (#36506066)

It's only not a vulnerability because it's not practically exploitable right now

Yea it is, a guy already did a PoC with Twitter.

Re:Unexploitable vuln? (1)

Calos (2281322) | more than 3 years ago | (#36506958)

I was speaking hypothetically. This, as you say, is proven.

Re:Unexploitable vuln? (1)

Mysteray (713473) | more than 3 years ago | (#36505438)

The blind plaintext injection capability that an exploit gives to the attacker was uncommon at the time and the initial reaction among experts was that it looked a lot like a CSRF attack. Most important sites had built in some protections against that.

It wasn't until a few days later when it was demonstrated against a social networking site (Twitter) that the problem was declared "real" (by Slashdot).

So it's a complex exploit and it did take a few days for a consensus to emerge about the actual severity.
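
For readers wondering what "blind plaintext injection" buys the attacker in practice: the man-in-the-middle opens his own TLS session to the server, sends the beginning of a request, and then splices the victim's connection in via renegotiation, so the victim's request (cookies and all) gets appended to the attacker's prefix. The sketch below is purely illustrative; the path and header values are made up, though the well-known Twitter proof of concept reportedly worked along these lines by turning the victim's own request into the body of a status-update API call.

    # Illustrative only: the kind of plaintext prefix a MITM could send to the
    # server before splicing in the victim's legitimate request during an
    # insecure renegotiation. Path, host and sizes are invented for the example.
    attacker_prefix = (
        b"POST /update_status HTTP/1.1\r\n"
        b"Host: vulnerable.example\r\n"
        b"Content-Type: application/x-www-form-urlencoded\r\n"
        b"Content-Length: 1000\r\n"
        b"\r\n"
        b"status="        # deliberately no CRLF: whatever the victim sends next
    )                     # (including Cookie headers) becomes the request body
    # victim_request = b"GET /home HTTP/1.1\r\nHost: vulnerable.example\r\nCookie: session=...\r\n\r\n"
    # The server ultimately processes: attacker_prefix + victim_request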

Re:Unexploitable vuln? (2)

compro01 (777531) | more than 3 years ago | (#36505448)

Depends on your definition of "vulnerability". For example, there's a vulnerability in AES's key schedule that weakens the 256- and 192-bit versions down to roughly 99.5 bits of security. However, this is not an exploitable vulnerability, as the stars will all have gone cold and dark before the attack finishes.

Re:Unexploitable vuln? (1)

Mysteray (713473) | more than 3 years ago | (#36505762)

It may be that 2^100 computation will never be practical for any plausible attacker, but it's not the truly cosmic level of work you make it out to be.

Re:Unexploitable vuln? (1)

asdf7890 (1518587) | more than 3 years ago | (#36505474)

Before an exploit is demonstrated, a vulnerability is only theoretically exploitable.

Re:Unexploitable vuln? (1)

Mysteray (713473) | more than 3 years ago | (#36505784)

Perhaps, but who's going to pay for the development of the first exploit? The attacker or the defender?

Re:Unexploitable vuln? (1)

Spad (470073) | more than 3 years ago | (#36505484)

Exploitable in theory is not the same as exploitable against a "live" target. It's still a vulnerability either way.

Mirror (1)

ChienAndalu (1293930) | more than 3 years ago | (#36505300)

http://pastebin.com/uRsvDd82 [pastebin.com]

I still have the HTML. If anyone has an idea where to host it, let me know.

Windows Server MS10-049 & KB980436 (4, Informative)

Anonymous Coward | more than 3 years ago | (#36505406)

After reading this article I ran a quick audit on all of our server farms and noticed that KB980436 was dutifully installed in September 2010. However, upon closer scrutiny I noticed that this security patch from Microsoft doesn't prevent the vulnerability by default; it merely keeps the server in "Compatibility Mode". Windows sysadmins need to take care to read MS10-049 and add the appropriate registry keys to enforce "Strict Mode" if they want to keep their servers from being vulnerable to this exploit. FYI, downloading and installing KB980436 is not enough.
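
For what it's worth, here is a hypothetical sketch of the strict-mode change using Python's winreg module. The SCHANNEL key path is standard, but the value names below are written from memory and should be verified against the KB980436 article before touching a production server; run elevated, and expect to need a reboot.

    # Hypothetical sketch: enforce "Strict Mode" renegotiation on a patched
    # Windows server (MS10-049 / KB980436) via SCHANNEL registry values.
    # The value names are an assumption -- confirm them against KB980436.
    import winreg

    SCHANNEL = r"SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL"

    def enforce_strict_renegotiation():
        with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, SCHANNEL, 0,
                                winreg.KEY_SET_VALUE) as key:
            # 0 = refuse renegotiation with peers that lack RFC 5746 support
            winreg.SetValueEx(key, "AllowInsecureRenegoServers", 0, winreg.REG_DWORD, 0)
            winreg.SetValueEx(key, "AllowInsecureRenegoClients", 0, winreg.REG_DWORD, 0)

    if __name__ == "__main__":
        enforce_strict_renegotiation()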

Re:Windows Server MS10-049 & KB980436 (0)

Anonymous Coward | more than 3 years ago | (#36506932)

Yeah, if you want to deny service to all of your unpatched customers, go ahead and enable strict mode. By default you are protected while still remaining interoperable, and if the clients are patched, there is no exploiting the vulnerability.

Re:Windows Server MS10-049 & KB980436 (1)

Xacid (560407) | more than 3 years ago | (#36508762)

That's what FAQs are for. ;)

Is there a better explanation of the fix? (1)

brennanw (5761) | more than 3 years ago | (#36505498)

I tried reading that document and glazed over. Is there a site that gives you some practical procedures for making sure your site is secure? Because based on what I've read I only vaguely understand the problem and don't know how to determine if my site has it. I'd prefer not to find out the hard way...

Re:Is there a better explanation of the fix? (2)

Mysteray (713473) | more than 3 years ago | (#36505734)

I mentioned Qualys' nice SSL Labs test utility [ssllabs.com] in another comment.

The fix is to ask your vendor for a patch for CVE-2009-3555 that implements RFC 5746 [ietf.org], the Transport Layer Security (TLS) Renegotiation Indication Extension. Responsible vendors will have implemented support for RFC 5746 by now, so you may already be patched.

excellent list (0)

Anonymous Coward | more than 3 years ago | (#36505794)

for a hacker!

"lifejournal.com" (0)

Anonymous Coward | more than 3 years ago | (#36507552)

GOOD www.lifejournal.com

Whew. I'm relieved that lifejournal.com is safe... now if only this site would tell us anything about livejournal.com! :P

Re:"lifejournal.com" (1)

kaiengert (209147) | more than 3 years ago | (#36510610)

Thanks for pointing this out. Added.

UNCERTAIN!? (1)

CTU (1844100) | more than 3 years ago | (#36508204)

I checked a few sites from the list in the article. The first one I checked was Amazon, and it was "UNCERTAIN". eBay was listed as bad, which was almost as upsetting.

I don't get these companies. How can they not be up to date with security? Do they want a repeat of what happened to Sony to befall them?

It's a shame we humans can't seem to learn; we're doomed to repeat even the mistakes of the recent past.

Simple explanation (1)

breser (16790) | more than 3 years ago | (#36508692)

There is a simple explanation why major sites are not supporting RFC 5746. A lot of these sites are probably sitting behind F5 hardware. The SSL is probably implemented just on the F5. F5 hasn't implemented the RFC in any version of their software yet.

http://support.f5.com/kb/en-us/solutions/public/10000/700/sol10737.html

Smaller sites, of course, are probably just a single HTTP server running Apache. They or their hosting provider update their OpenSSL and Apache httpd versions, so the smaller sites get fixed. Major sites do not.

It'd be interesting to determine how many of these sites on his list are behind F5 hardware. I'm guessing that other load balancing vendors have similar problems, but F5 is the 800 lb gorilla.

The Opera article: http://my.opera.com/securitygroup/blog/2011/05/19/renego-popular-unpatched-and-vulnerable-sites [opera.com] seems to make a mention of this by saying that a major vendor will release an RFC implementation in June. But they don't say who this is and I'm not sure if they're talking about F5 or not.

Re:Simple explanation (1)

DarkFyre (23233) | more than 3 years ago | (#36509098)

Exactly so. Citrix NetScalers have the same issue. Those people claiming this is due to incompetent, stupid or lazy coders or admins have merely never seen the business end of a website big enough to need hardware load balancers with SSL offload.

Re:Simple explanation (1)

breser (16790) | more than 3 years ago | (#36509340)

For some reason your username sounds familiar. Wonder why?

Server software patches... (1)

LongearedBat (1665481) | more than 3 years ago | (#36509338)

I see that both Apple and Microsoft fail the test.

If their own websites fail, does that mean that we cannot expect patches to their server software to fix this?

(My specialties do not include web servers.)

Maybe they just don't need the fix! (1)

Nagilum23 (656991) | more than 3 years ago | (#36509986)

You are only vulnerable if, for example, you combine client certificate authentication and unprotected but SSL-secured content in the same web server process. Only then does the server need to renegotiate; otherwise you don't need to enable renegotiation at all (it's disabled by default in Apache). So I'm pretty sure very few sites actually need it, and they are perfectly fine/secure with what they have.

Re:Maybe they just don't need the fix! (1)

kaiengert (209147) | more than 3 years ago | (#36510718)

Maybe! But is "maybe" good enough when dealing with security?

All it takes is an IT person who is unaware that renegotiation was disabled for a reason, and now reenables it because of a new business need: the site is vulnerable again.

I don't like the idea that customers have to trust website operators to run with a hotfix and to never accidentally reenable the vulnerability. It's better to know for sure that a site is fixed. Only patched sites can give us this certainty.

The fewer unpatched servers there are, the sooner browsers can reasonably warn about unpatched servers and reject connections to them by default.

Re:Maybe they just don't need the fix! (1)

Nagilum23 (656991) | more than 3 years ago | (#36511426)

It's not "maybe": either you need it or you don't! And for the vast majority, it's the latter.

It's disabled by default, so it's not a matter of re-enabling it. Besides, there is no patch for stupidity.

I could turn that around and say that those who blindly install the latest stuff probably don't know what they really need or do.
New code often means new bugs.

But in this case it only makes sense to warn if the server asks for a renegotiation and only accepts an insecure one. Browsers can and should warn about that right now (and maybe they already do).
Of course, most SSL connections/servers don't require a renegotiation, so for most unpatched servers this is no problem (security-wise, as things stand).

Re:Maybe they just don't need the fix! (1)

kaiengert (209147) | more than 3 years ago | (#36511906)

If I understand correctly, your proposal is that when a browser connects to an unpatched server, the browser shouldn't worry until the server requests a renegotiation. You propose that it's fine to wait until this happens and warn at that point.

Unfortunately your plan doesn't work. The issue is that a browser cannot detect whether a MITM renegotiated with the server!

Even if a server is configured to allow renegotiation, the ordinary browser visitor might never trigger it, and therefore the browser might never see it happen.

A hacker can either (a) use a client/hacker-initiated renegotiation, if allowed by the server, or (b) probe the server by trial and error to find a URL that triggers a server-initiated renegotiation. (Even if (a) is disabled, you could still be successful with (b), because server software configuration options may allow the two to be enabled or disabled independently.)

Now, if the hacker is able to trigger a renegotiation either way, the renegotiation will be limited (!) to the route between MITM and server!

The browser will not notice! The browser will see only the single initial negotiation!

If the browser cannot see the renegotiation, then it cannot warn that it's happening. That's the dilemma!

This is why we must aim to get servers upgraded, so that browsers can start to warn about this risk when dealing with unpatched servers.

For the simpler scenario where no MITM is present: yes, Firefox 4 and later versions will detect if an unpatched server requests a renegotiation, and will abort the connection by default. But I hope I explained it well enough to make it clear that that's not sufficient.

Re:Maybe they just don't need the fix! (1)

kaiengert (209147) | more than 3 years ago | (#36512074)

Quoting myself:

This is why we must aim to get servers upgraded, so that browsers can start to warn about this risk when dealing with unpatched servers.

Clarification: The reason why browsers don't warn yet is that they would have to warn very often; currently there are too few patched servers. According to numbers provided by Yngve Pettersen [opera.com], in May 2001 about 45% of all servers were still unpatched. A frequent warning would likely have the undesired effect of training users to ignore it.

Re:Maybe they just don't need the fix! (1)

kaiengert (209147) | more than 3 years ago | (#36512084)

Oops. Make that May 2011. Sorry for the typo.

Re:Maybe they just don't need the fix! (1)

Nagilum23 (656991) | more than 3 years ago | (#36512636)

Ah, yes, you're right about that! In that case it does indeed make sense to push for servers to be updated even if they are not directly impacted.
Thanks for laying it out again!

Wow!! (1)

hesaigo999ca (786966) | more than 3 years ago | (#36510970)

I see that Microsoft is on the list. Of all people, you would think the ones pushing out the update could at least use the update themselves...!

HSBC getting fixed thanks to me (0)

Anonymous Coward | more than 3 years ago | (#36548042)

HSBC's site entry went from Bad to Uncertain thanks to an email I sent after reading this article. All it took was a well-worded email to security and their CEO. :P

I got a call back from my local branch (mentioning the CEO was involved, lol!) and they indicated they'd notify me when it was fixed.
