
Hackers May Have Nabbed Over 200 SSL Certificates

samzenpus posted more than 2 years ago | from the from-bad-to-worse dept.


CWmike writes "Hackers may have obtained more than 200 digital certificates from a Dutch company after breaking into its network, including ones for Mozilla, Yahoo and the Tor project — a considerably higher number than DigiNotar acknowledged earlier this week when it said 'several dozen' certificates had been acquired by attackers. Among the certificates acquired in the mid-July hack of DigiNotar, Van de Looy's source said, were ones valid for mozilla.com, yahoo.com and torproject.org, the home of a system that lets people connect to the Web anonymously. Mozilla confirmed that a certificate for its add-on site had been obtained by the DigiNotar attackers. 'DigiNotar informed us that they issued fraudulent certs for addons.mozilla.org in July, and revoked them within a few days of issue,' Johnathan Nightingale, director of Firefox development, said Wednesday. Van de Looy's number is similar to the tally of certificates that Google has blacklisted in Chrome."


141 comments

Boring (5, Informative)

Mensa Babe (675349) | more than 2 years ago | (#37270138)

All of the news about SSL security flaws is starting to get boring. We had a related scandal just yesterday [slashdot.org]. The problem with SSL (or TLS, actually) is that it uses X.509 with all of its problems, like the mixed scope of certification authorities. It's like using global variables in your program - it is never a good idea. I can only agree with Bruce Schneier, Dan Kaminsky and virtually all of the competent security experts that we have to completely abandon the inherently flawed security model of X.509 certificates and finally embrace DNSSEC as specified by the IETF. It is both stupid and irresponsible to have a trust system used to verify domain names in 2011 that is completely DNS-agnostic - one in fact designed in the 1980s, when people were still manually sending /etc/hosts files around! There could be a lot of better solutions than the good old X.509, but in reality the only reasonable direction we can choose today is to use the Domain Name System Security Extensions. Use 8.8.8.8 and 8.8.4.4 exclusively as your recursive resolvers. Configure your servers and clients. Define and use the RRSIG, DNSKEY, DS, NSEC, NSEC3 and NSEC3PARAM records in all of your zones. Use and verify them on every resolution. Educate people to do the same. This problem will not solve itself. We have to start acting.
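
For anyone who wants to try the "use and verify them on every resolution" part, here is a minimal sketch using the dnspython library (my choice of library, not the poster's; assumes dnspython 2.x and a placeholder domain). It queries one of the resolvers suggested above with the DNSSEC-OK bit set and checks whether the answer comes back authenticated:

    # Sketch: query a validating resolver with the DNSSEC-OK bit set,
    # then check the AD (authenticated data) flag and print any RRSIGs.
    import dns.flags
    import dns.message
    import dns.query

    query = dns.message.make_query("example.com", "A", want_dnssec=True)
    response = dns.query.udp(query, "8.8.8.8", timeout=5)

    print("AD (authenticated data) flag set:", bool(response.flags & dns.flags.AD))
    for rrset in response.answer:
        print(rrset)  # the A records plus the RRSIG records covering them

Note that the AD flag only says the resolver validated the chain; a client that does not trust the path to 8.8.8.8 would have to validate the RRSIGs itself.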

It's not "boring". It's an important lesson. (-1)

Anonymous Coward | more than 2 years ago | (#37270208)

For years now, we've heard one fool after another scream about how passwords are bad, but how certificates and public-key cryptography are the solutions to all of our digital security woes.

During these same years, there have been some of us who have pointed out that this is total bullshit. We were repeatedly told that we were wrong, but during these past few weeks we've been completely vindicated.

Re:It's not "boring". It's an important lesson. (1)

logjon (1411219) | more than 2 years ago | (#37270226)

There's nothing wrong with public key cryptography. The issue is with the way it's handled, specifically the CAs.

Re:It's not "boring". It's an important lesson. (0)

Anonymous Coward | more than 2 years ago | (#37270382)

Clearly there is something wrong with public key cryptography, otherwise we wouldn't need the band-aid "solutions" that CAs and the chain of trust concept are.

Re:It's not "boring". It's an important lesson. (-1)

logjon (1411219) | more than 2 years ago | (#37270574)

Yes, and clearly there's a flaw with the internet, otherwise we wouldn't need the SSL/TLS encryption in the first place. Moron.

Re:It's not "boring". It's an important lesson. (0)

Anonymous Coward | more than 2 years ago | (#37270644)

Why are you resorting to name calling? Why resort to ad hominem if you're correct? Oh, that's right, you aren't. You're absolutely wrong, and the GP is the one who is correct. Nice try, though.

I'm sorry that your religion revolving around CAs and PKI has been shattered, but there's no need for you to take that out on other people. The fear and uncertainty you feel at the moment will pass, but only when you admit how wrong you are.

et tu; get your terms right (2)

Onymous Coward (97719) | more than 2 years ago | (#37271740)

"He's resorting to name calling, which means he's wrong" is itself an ad hominem.

Public key crypto is not the same thing as the current browser HTTPS CA trust model. Make the distinction and you'll be better able to understand him.

Re:It's not "boring". It's an important lesson. (3, Insightful)

sjames (1099) | more than 2 years ago | (#37270752)

If you keep a spare house key on your front porch in a metal box marked "spare house key", you'll be robbed sooner or later. That is not a flaw of lock-and-key security.

The public key system is working fine. What is not working so well is the trust model. The current system is fatally flawed in that security depends on none of the many, many CAs failing. It doesn't matter if you choose a high-quality CA to sign a cert for your site; your users can still be fooled by a backwater CA you've never heard of before and wouldn't trust to guard a dime.

Re:Boring (4, Interesting)

Gerald (9696) | more than 2 years ago | (#37270242)

"If you think it's nice that you can remove the DigiNotar CA, imagine a world where you couldn't, and they knew you couldn't. That's DNSSEC." -- Moxie Marlinspike [twitter.com]

Re:Boring (1)

0123456 (636235) | more than 2 years ago | (#37270270)

"If you think it's nice that you can remove the DigiNotar CA, imagine a world where you couldn't, and they knew you couldn't. That's DNSSEC."

Is it just me, or does this make no sense to anyone else either?

Re:Boring (4, Informative)

the_enigma_1983 (742079) | more than 2 years ago | (#37270532)

In response to the DigiNotar incidents, some people are removing the root CA for DigiNotar from their computers. This way your computer will not trust _anything_ signed by DigiNotar.

With DNSSEC, if the people in charge of your DNS have an incident (hackers, malpractice or otherwise) which changes the "certificate" (for lack of a better word) for your website, you are stuck. There is no "root" certificate that you can remove.

There are always tradeoffs (1, Insightful)

Junta (36770) | more than 2 years ago | (#37270870)

It's true that with DNSSEC, a persistent compromise of a party entitled to 'bless' public keys as yours is potentially a huge logistical nightmare (though that would take an unprecedented display of ongoing incompetence, leaving you permanently open to hijack).

However, on the flip side, the relatively long lifetime of certificates incurs the nastiness of CRLs and some amount of faith that CRLs make it to the right place at the right time. If a DNSSEC authority is compromised and fixes it in an hour, all signs of the compromise evaporate in less than a day. If a CA is compromised, you could face years of potential threats if a hapless client doesn't get a CRL update. That's the biggest problem with X.509 at internet scale: long-term risk to existing credentials, because of the relatively outdated goal of avoiding frequent communication with an authority.
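
To see how long-lived certificates are in practice, here is a small sketch (using Python's stdlib ssl module plus a recent pyca/cryptography package, both my own choices; the hostname is a placeholder) that fetches a site's certificate and prints its validity window:

    # Sketch: fetch a server certificate and print its validity window.
    # Long windows are exactly what makes timely CRL delivery so critical.
    import ssl
    from cryptography import x509

    pem = ssl.get_server_certificate(("www.example.com", 443))
    cert = x509.load_pem_x509_certificate(pem.encode())

    print("Not valid before:", cert.not_valid_before)
    print("Not valid after: ", cert.not_valid_after)
    print("Lifetime in days:", (cert.not_valid_after - cert.not_valid_before).days)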

Re:There are always tradeoffs (1)

Junta (36770) | more than 2 years ago | (#37271016)

I have to retract a bit: OCSP actually does a fair amount to address the issue of revocation; it's just that it isn't universal. Someone will have to explain how DNSSEC would be fundamentally any better than X.509 with ubiquitous OCSP.
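
For the curious, this is roughly what an OCSP check involves; a sketch using the pyca/cryptography OCSP API (my choice of library; cert.pem and issuer.pem are placeholder files for a certificate and its issuing CA cert, and a successful responder reply is assumed):

    # Sketch: build an OCSP request for a cert and POST it to the responder
    # listed in the cert's Authority Information Access extension.
    import urllib.request
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.x509 import ocsp
    from cryptography.x509.oid import AuthorityInformationAccessOID

    cert = x509.load_pem_x509_certificate(open("cert.pem", "rb").read())
    issuer = x509.load_pem_x509_certificate(open("issuer.pem", "rb").read())

    req = ocsp.OCSPRequestBuilder().add_certificate(cert, issuer, hashes.SHA1()).build()

    aia = cert.extensions.get_extension_for_class(x509.AuthorityInformationAccess)
    url = next(d.access_location.value for d in aia.value
               if d.access_method == AuthorityInformationAccessOID.OCSP)

    http_req = urllib.request.Request(
        url, data=req.public_bytes(serialization.Encoding.DER),
        headers={"Content-Type": "application/ocsp-request"})
    resp = ocsp.load_der_ocsp_response(urllib.request.urlopen(http_req).read())
    print("OCSP says:", resp.certificate_status)

Note how many steps a client has to get right here; treating any failure along the way as "probably fine" is exactly the soft-fail behavior criticized further down the thread.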

Re:There are always tradeoffs (2)

Zeinfeld (263942) | more than 2 years ago | (#37271570)

DNSSEC has its place, even for key distribution. But it does not provide a basis for trust, because merely holding a DNS domain does not mean you are trustworthy.

The big win for DNSSEC is to distribute security policy in a scalable fashion. See my CAA and ESRV Internet drafts.

Imagine that you are visiting slashdot: wouldn't it be better to use SSL than to send traffic en clair if the site supports it? Wouldn't it be better to have encryption with a duff cert than no encryption at all? [*]

DNSSEC allows a site to put a flag in its DNS to say 'always use SSL when visiting slashdot over http'. Now the browser knows that if it is going to slashdot and the connection is not encrypted, there is a man in the middle. Same for Twitter, Google etc.

DNSSEC can also be used to ensure that the only certs trusted for a domain are ones authorized by the domain holder. This provides an independent trust path alongside CA-issued X.509. Used in combination, the two can improve security.

[*] The catch is that showing the user the padlock icon for a duff cert is going to make them less secure. That is why I would like to see the browsers remove the padlock icon completely for DV certs. The only reason the padlock is required is to allow the user to check that SSL is in use. Since the user can't and won't do that reliably, it is a poor control anyway. But it is in any case a control that should be enforced by the browser, not the user, and DNSSEC security policy allows that to happen.

On key distribution, well, sure: for typical Web services and for promiscuous security, DNSSEC-validated keys are just fine. It is not going to be a money saver. It does not justify a padlock icon (neither does a DV cert). But it is perfectly adequate for most applications.

Unfortunately it is likely that making use of DNSSEC for key distribution is going to be delayed for at least a year due to IETF politics. I blame the people behind the DANE proposal. They have been less than forthcoming about their real agenda from the start and have shown absolutely no willingness to accept any input from other parts of the IETF. The IETF is a consensus-based organization, but the test is IETF consensus, not working group consensus. If a clique wants to change the rules for handling PKIX certs, they have to get an IETF consensus that this should be done.

DANE could have easily been designed in a way that allowed security policy and key distribution to be completely separate. Unfortunately the ruling clique insists these be joined. The result is a spec that is in my opinion undeployable, because the transition strategy for a scheme providing positive trust (key distribution) is by necessity very different from that required for a scheme that provides negative trust (key revocation, security policy, etc.).
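
For readers who haven't seen the CAA idea mentioned above: it is a DNS record naming the CAs allowed to issue for a domain. A sketch of looking one up (dnspython assumed; the domain is a placeholder):

    # Sketch: fetch a domain's CAA records, the DNS-published policy
    # saying which CAs may issue certificates for it.
    import dns.resolver

    try:
        for rdata in dns.resolver.resolve("example.com", "CAA"):
            print(rdata)  # e.g. 0 issue "someca.example"
    except dns.resolver.NoAnswer:
        print("No CAA records published for this domain.")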

Re:There are always tradeoffs (1)

ArsenneLupin (766289) | more than 2 years ago | (#37272960)

But it does not provide a basis for trust, because merely holding a DNS domain does not mean you are trustworthy.

It does not provide a cure for cancer either...

See my CAA and ESRV Internet drafts.

Maybe you should try submitting a paper to the Lancet too. With a little bit of luck you might catch a peer-reviewer off guard, amazed to notice that cancer starts with CA.

No, seriously: the "trust" in "trusted third party" has nothing to do with the trust that you put in the second party (i.e. the server or business with which you are communicating). It has everything to do with the trust you put in the third party (the certification authority) to do its job correctly (only giving certificates to properly identified entities, and appropriately securing its infrastructure so that hackers and spies can't just "help themselves"). The threat that SSL certificates are supposed to protect against is wiretapping, not rogue businesses. I'm sure all of those shady banks that failed in the 2008/2009 crisis had valid SSL certificates, and rightly so!

Unfortunately marketers have latched on to the catchy word "trust", completely muddying any understanding of whom you're supposed to trust. A certificate is not a badge of honor any more than a passport is. Both are just pieces of identification.

Re:There are always tradeoffs (1)

makomk (752139) | more than 2 years ago | (#37273228)

Anyone who can launch a man-in-the-middle attack can block OCSP verification requests, and for non-EV certificates they can do so in a way that causes all browsers to accept the certificate as valid with no kind of warning whatsoever.

Re:There are always tradeoffs (1)

Junta (36770) | more than 2 years ago | (#37273712)

But from what I've read, I'd consider that a failing of how OCSP is *implemented*, not how it is architected. First pass was returning 'tryLater', which looked innocuous enough, and lo and behold it was treated as innocuous (I would think that should count as a validation error if implemented properly). Second time around, it was shown that most browsers would even treat a 500 error as 'close enough'. In all these cases, the problem is not that OCSP is incapable; it's that the browsers erred on the side of convenience and made 'no news is good news' the policy. I'd anticipate DNSSEC errors to be treated similarly, because it's the mindset of the client developers and not the core technology that is the issue.

Re:Boring (1)

Karl Cocknozzle (514413) | more than 2 years ago | (#37270984)

No idea what he's talking about... a cursory Google search [google.com] reveals that provision has been made to revoke certificates, so presumably he's making some larger point about something else. ...Damned if I know what that is, though. But I do follow the Convergence project and am testing out the browser plug-in... If Moxie reads Slashdot and sees this: Would you care to expound on the quoted Tweet?

Re:Boring (3, Insightful)

Zeinfeld (263942) | more than 2 years ago | (#37271470)

Oh, I know what he is trying to do, but he has no clue what the threat model is.

The threat model in this case is a well funded state actor that might well be facing a full on revolution within the next 12 months. It does not matter how convergence might perform, there is not going to be time to deploy it before we need to reinforce the CA system. [Yes I work for a CA]

I think it most likely we will be seeing the Arab Spring spreading to Syria with the fall of Gaddafi. We are certainly going to be seeing a major ratcheting up of repressive measures in Syria and Iran. Iran knows that if Syria falls, their regime will be the next to come under pressure. In many ways the Iranian regime is less stable than some that have already fallen. There are multiple power centers in the system. One of the ways the system can collapse is the Polish model: the people of Poland didn't have a revolution, they just voted the Communist party out of existence. If the Iranian regime ever allows a fair vote, the same will happen there.

Anyone think that we will have DNSSEC deployed on a widespread scale in the next 12 months? I don't, and I am one of the biggest supporters of DNSSEC in the industry. DNSSEC is going to be the biggest new commercial opportunity for CAs since EV. Running DNSSEC is not trivial, and running it badly has bad consequences; the cost of outsourced DNSSEC management is going to be much less than a DNS training course ($1000/day plus travel) but rather more than a DV SSL certificate ($6 for the cheapest).

The other issue I see with Convergence is that it falls into the category of 'security schemes that work if we can trust everyone in a peer-to-peer network'.

Wikipedia manages a fair degree of accuracy, but does anyone think they really get up to 99% accurate? Until this year the CA system had had three major breaches, all of which were trapped and closed really quickly, plus about the same number of probes by security researchers kicking the tires. Until the DigiNotar incident, anyone who had revocation checking in place was 100% safe as far as we are aware; not a bad record, really.

There is a population of about 1 million certs out there; even 200 bad ones would mean 99.98% accuracy.

Running a CA is really boring work. Not something I would actually do personally. To check someone's business credentials etc takes some time and effort. It is definitely the sort of thing that you want a completer-finisher type to be doing. Definitely not someone like me and for 95% of slashdot readers, probably not someone like you either.

The weak points in the SSL system are not the validation of certs by CAs. They are (in order): (1) the fact that SSL is optional; (2) the fact that the user is left to check for use of SSL; (3) the fact that low-assurance certificates with a minimal degree of validation still result in the padlock display.

The weak point being exploited by Iran is the braindead fact that the Web requires users to provide their passwords to the Web site every time they log in. I proposed a mechanism in 1993 that does not require a CA at all and avoids that. Had RSA been unencumbered I would have adopted an approach similar to EKE that was stronger than DIGEST but again did not require a cert.

Certs are designed to allow users to decide who they can share their credit card numbers with. That is a LOW degree of risk because the transaction is insured. Certs are not intended to tell people it is safe to share their password with a site because it is NEVER safe to do that.

makes sense to me (1)

Onymous Coward (97719) | more than 2 years ago | (#37271600)

With HTTPS, the people you trust are the few hundred CAs your browser is configured to trust. It's way too many, and your vulnerability with them is a logical OR -- any CA fails and you are vulnerable. It's a fucked up system. However, at least you can remove DigiNotar from your browser's trusted list.

With DNSSEC, you trust the root. They are your "trust anchor". And you get no choice about it.

Each system is fucked up.

This relates to the concept of "trust agility" that Marlinspike discussed. He wrote it up in a blog entry. I highly recommend reading it and understanding it. You can get to the blog by the first link in this Slashdot article a couple weeks ago [slashdot.org].

maybe this will help you make sense of it (3, Interesting)

Onymous Coward (97719) | more than 2 years ago | (#37271648)

SSL And The Future Of Authenticity, Moxie Marlinspike [thoughtcrime.org]:

Worse, far from providing increased trust agility, DNSSEC-based systems actually provide reduced trust agility. As unrealistic as it might be, I or a browser vendor do at least have the option of removing VeriSign from the trusted CA database, even if it would break authenticity with some large percentage of sites. With DNSSEC, there is no action that I or a browser vendor could take which would change the fact that VeriSign controls the .com TLD.

If we sign up to trust these people, we're expecting them to willfully behave forever, without any incentives at all to keep them from misbehaving. The closer you look at this process, the more reminiscent it becomes. Sites create certificates, those certificates are signed by some marginal third party, and then clients have to accept those signatures without ever having the option to choose or revise who we trust. Sound familiar?

The browser CA model is screwed up. DNSSEC is screwed up. What's the answer?

I think Marlinspike was smart to start with defining the problem. And now, with Convergence, he's also trying to address it. Check it out. (And check out Perspectives. Perspectives is the project he based Convergence on.)

Re:Boring (2)

dgatwood (11270) | more than 2 years ago | (#37271142)

"If you think it's nice that you can remove the DigiNotar CA, imagine a world where you couldn't, and they knew you couldn't. That's DNSSEC." -- Moxie Marlinspike

That's a fundamental mischaracterization of DNSSEC. You can't realistically remove individual DNS registrars now, but they all feed into registries, and you generally either trust those registries or you don't. If you don't, then you don't go to those TLDs. More to the point, this argument incorrectly tries to model the security of all websites at the same time, whereas the user only cares about the security of a single website—the one he or she is trying to access.

With DNSSEC, you as the domain owner are in control. No one can take control over your domain in one place without fully taking control over your domain worldwide. You therefore choose your registrar based on having good security. As long as the registrars do not screw up and accidentally turn over control of a domain to someone else, you are safe. However, you are provably no less safe than you are now even if they do screw up in that way; most domain certificate issuance is now validated based solely on whether you have control over the domain name. Thus, if you take over the account for the domain name, you can get a cert from any major CA. Therefore, this attack vector is unaffected by DNSSEC.

What DNSSEC provides is a reduction in the attack surface. With DNSSEC, the trust that people place in a domain is solely trust in the owner of that domain and in the services (registrars) that the owner of that domain trusts. By contrast, with the current system, anyone can hijack a domain on a local network by forging DNS replies. If they can trick any CA into issuing a certificate, they can then masquerade as that domain. Thus, because the certificates are not tied to the domain name system, by trusting a domain under the current system, you are trusting not only the domain owner and the providers that the domain owner trusts, but every other CA out there. So instead of someone having to trick a single registrar to compromise a domain, they could trick any of dozens of CAs. It only takes one.

Worse, with many (possibly all) browsers, the trust model is completely broken. If you trust a certificate, you trust a certificate. If that certificate has signing authority, you now trust every certificate that it signs. This means that all a website needs to do is trick you into accepting a self-signed certificate for some innocuous site, and from that point on, it can use that cert to sign forged certs for any other site. So in order to trust any single website, you not only have to trust every CA that your browser supports, but also every self-signed cert that you have ever accepted.

With DNSSEC, you need only trust the server, its registrar, and the relevant root registries. And even in the worst case, if a registrar started signing fake DNS entries, you are still better off than before, because DNS records carry TTLs of minutes and signatures that expire in days or weeks instead of years like SSL certs, and odds are such a problem would eventually be noticed, the registrar would fix the security hole that allowed this, and shortly afterward any bogus records would cease to validate.

Thus, moving to DNSSEC dramatically narrows the amount of trust you are giving out when you access a domain name. This is inarguably a good thing according to any reasonable security analysis.
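
To make the short-lived-signature point concrete, here is a sketch (dnspython assumed, placeholder domain) that prints how long the RRSIGs covering a record remain valid; typical windows are days to weeks, versus years for an SSL cert:

    # Sketch: inspect the validity window of the RRSIGs covering a record.
    import time
    import dns.message
    import dns.query
    import dns.rdatatype

    query = dns.message.make_query("example.com", "A", want_dnssec=True)
    response = dns.query.udp(query, "8.8.8.8", timeout=5)

    for rrset in response.answer:
        if rrset.rdtype == dns.rdatatype.RRSIG:
            for sig in rrset:  # inception/expiration are POSIX timestamps
                days = (sig.expiration - sig.inception) / 86400.0
                print("RRSIG valid %.1f days, expires %s" % (
                    days, time.strftime("%Y-%m-%d", time.gmtime(sig.expiration))))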

Re:Boring (3, Interesting)

Zeinfeld (263942) | more than 2 years ago | (#37271642)

Unfortunately the registrar system is rather less trustworthy than you imagine. We have not to date encountered an outright criminal CA. We do however know of several ICANN registrars that are run by criminal gangs.

The back end security model of the DNS system is not at all good. While in theory a domain can be 'locked' there is no document that explains how locking is achieved at the various registry back ends. A domain that is not locked or one that is fraudulently unlocked is easily compromised.

The part of the CA system that has been the target of recent attacks is the reseller networks and smaller CAs. These are exactly the same sort of company that runs a registrar. In fact many registrars are turning to CAs to run their DNSSEC infrastructure since the smaller ones do not have the technical ability to do it in house. In fact a typical registrar is a pure marketing organization with all technical functions outsourced.

There are today about 20 active CAs and another 100 or so affiliates with separate brands. In contrast there are over a thousand ICANN registrars.

Sure, there are some advantages to incorporating DNSSEC into the security model. But to improve security it should be an additional check, not a replacement. Today DNSSEC is an untried infrastructure; it is grafted onto a legacy infrastructure that is very old and complex, where security is an afterthought.

The current breach is not even an SSL validation failure. The attacker obtained the certificate by bypassing the SSL validation system entirely and applying for an S/MIME certificate that did not have an EKU (which it should). That makes it a technical exploit rather than a validation issue. DNSSEC is a new code base, and a very complicated one. Anyone who tells you that it is not going to have similar technical issues is a snake-oil salesman.

Re:Boring (1)

dgatwood (11270) | more than 2 years ago | (#37272764)

We do however know of several ICANN registrars that are run by criminal gangs.

Ultimately, it doesn't matter. Bugs notwithstanding, DNSSEC is still provably no less secure than CA-based certs because if you can compromise DNSSEC, you can also change the contact info on a domain and get any CA to give you a cert for the domain. Therefore, even if every CA were above board, you still cannot trust the CAs (even the best CAs) to protect you from someone compromising the domain itself.

Therefore, your domain, by definition, cannot be more secure than your domain name registrar, no matter what the CAs do. No amount of special extended validation certs or any such silliness will help that because the user statistically won't notice when the cert stops being an EV cert (and half the users won't notice even if the cert becomes self-signed...). So either you trust that your registrar won't let go of your domain to some shady registrar or you don't, and if you don't, then it's game over, CA or not. Therefore, they provide no additional security; they're like an appendix or some other vestigial organ in that (almost by definition) they can only make security worse, not better.

Re:Boring (1)

unencode200x (914144) | more than 2 years ago | (#37272178)

If they can trick any CA into issuing a certificate, they can then masquerade as that domain

It's worth watching Moxie's talk on defeating SSL. He demonstrates just how easy it is to get a certificate for any domain you want. He also shows just how broken it is and how (in most cases) revocation is a joke. It's a little outdated now, but still relevant and well worth watching imho: DEFCON 17: More Tricks For Defeating SSL [youtube.com]

During his BlackHat 2011 talk BlackHat USA 2011: SSL And The Future Of Authenticity [youtube.com] he discusses how SSL was born (it's funny and sad) and proposes using Convergence (which can work along with existing CAs) to help shore up security.

What makes me happy is that as a community we all seem to be much more aware of these issues, hopefully we'll be able to move forward on making the Internet more secure and trustworthy for everyone.

Re:Boring (1)

Morty (32057) | more than 2 years ago | (#37271766)

Both the current CA model and Moxie Marlinspike's proposed notary system already implicitly trust DNS registration data. When someone requests example.com, how does the CA (or notary) know that the requestor owns it? In a few rare cases, the CA (or notary) knows the requestor personally, but that's rare, and doesn't scale to the Internet. In the normal case, the CA (or notary) has no information other than DNS. The CA (or notary) will either check that the requestor's contact data matches the DNS whois data (implicitly trusting the current DNS/whois data) or will instruct the requestor to post a file to their site (implicitly trusting the current DNS records.)

In either case, DNS is trusted implicitly.

Subtlety: note that this is not saying that DNS data is trustworthy. DNS data is definitely not trustworthy. Rather, it's saying that any entity looking to validate a DNS domain needs to rely on DNS data, so there cannot be any entity more trustworthy than DNS.

As an example, suppose a site's DNS registrar was Joe the used car salesman. You don't trust Joe. Your buddy happens to be both the Pope and a Moxie-style notary, so you figure you'll get the Pope to check out the site's SSL. What is the Pope going to do? The Pope doesn't know the site personally. The only information available about the site is in the DNS registry database. So the Pope is going to check the site's DNS database entry -- written by Joe the used car salesman -- contact the site, verify that they do indeed match what Joe wrote in whois, and issue a signature. You now have the Pope's guarantee that the site matches what Joe said it matches. The trouble is that the Pope is just saying "yes, it matches what Joe said" -- you have gotten no more of a guarantee than if you had just gotten Joe to tell you that to begin with.

That's why DNSSEC, if it actually could be deployed, would be the best system for traditional SSL certs. Traditional SSL certs are statements about DNS data. There cannot be anyone more authoritative on what the DNS contains than the registrars that generate that data.

Of course, EV certs are another story, in that they make a statement about something more than just the domain name. But that's a separate discussion.

Re:Boring (1)

Morty (32057) | more than 2 years ago | (#37271938)

. . . and I appear to have misunderstood Moxie's system. It does not implicitly trust DNS at all. It does rely on SSL certs not to change, which I find odd, given that SSL certs tend to be replaced (either shortly before expiration or after a private key compromise.)

Re:Boring (1)

TheLink (130905) | more than 2 years ago | (#37272604)

It does rely on SSL certs not to change, which I find odd, given that SSL certs tend to be replaced

Cert expiration has little to do with security. The main reason why SSL certs expire is so that CAs can make money (that many think they don't deserve to make ;) ).

IMO having to issue and reinstall certs regularly causes more security problems.

If a hacker can get hold of a webserver's SSL private keys, the hacker can likely get whatever else that webserver has or can access. Changing the SSL cert regularly won't help.

Most ssh servers never have their keys changed. If one day they change, it usually means something significant has happened.

In contrast, many websites, due to a combination of cert expiration and CDN services, end up having multiple certs. If a webservice has two or more different SSL certs and keeps changing them every year or so[1], how is a user going to know whether the certificates really belong to that webservice? Because some random CA in the browser says so? And how does the user know which CA the webservice has decided to use?

So self-signed certs can actually be safer. What are the odds that the first time you log in to your bank someone is doing a MITM attack on you? If you survive that window, then if the cert ever changes it means something has gone wrong. If you're paranoid about your "first time", you could try making connections to the bank on different days and from different places/ISPs (or even via VPN services/Tor) and then checking the fingerprints, or even asking the bank about it.

[1] FWIW I use Certificate Patrol on Firefox and that's why I know that some services have multiple CAs for their certs and rotate them. Whether this is correct or not, I have no way of telling. So in my opinion the CA system doesn't really increase security.
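
For what it's worth, the fingerprint comparison described above doesn't need a browser extension; a stdlib-only Python sketch (the hostname is a placeholder):

    # Sketch: compute a server certificate's SHA-256 fingerprint, the value
    # you would compare across days, ISPs or Tor circuits as suggested above.
    import hashlib
    import ssl

    pem = ssl.get_server_certificate(("bank.example.com", 443))
    digest = hashlib.sha256(ssl.PEM_cert_to_DER_cert(pem)).hexdigest()
    print(":".join(digest[i:i + 2] for i in range(0, len(digest), 2)))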

Re:Boring (1)

Morty (32057) | more than 2 years ago | (#37272682)

Any new SSL cert validation scheme needs to interoperate with the CA-based SSL cert validation scheme. The existing SSL cert validation scheme does have cert expiration, needed or not. Your bank is not going to switch to a self-signed perpetual cert when the overwhelming majority of its customers are relying on CA-based schemes that will claim the bank's site is unsafe. So certs are going to keep changing. For a new cert validation scheme to succeed, it must be able to accommodate this during the transition.

Meanwhile, DNSSEC-based cert validation can interoperate with the current CA system without an interoperability problem during transition. And when the transition is over, you no longer need to pay a CA.

And I would argue that certs -- or more correctly, private keys -- should periodically expire. People occasionally change jobs. Backup media get misplaced. As keys age, they are more likely to have been compromised, or to be based on unacceptable legacy algorithms or key lengths. Changing encryption keys periodically is a general best practice in the IT industry. The lack of a built-in key aging and key distribution method in ssh is, IMHO, its biggest weakness.

Re:Boring (0)

Anonymous Coward | more than 2 years ago | (#37270262)

DNSSEC solves some of the problems, but not all of them. If a major change in SSL is going to be made it should be better than just 'good enough for now'. We need something de-centralized so that it can't be censored or trivially hijacked.

Re:Boring (2, Interesting)

Anonymous Coward | more than 2 years ago | (#37270334)

Those are Google's nameservers.

As long as we're distrusting authority you might want to mention that.

Using DNS provided by an advertising firm isn't exactly the healthiest thing for your privacy: maybe not now, but certainly once those become the new 4.2.2.[1-3] and Google can monetize them.

Anyone who cares about his privacy should never rely on a Google product.

Re:Boring (1)

Anonymous Coward | more than 2 years ago | (#37270358)

All of the news about SSL security flaws is starting to get boring. We had a related scandal just yesterday [slashdot.org]. The problem with SSL (or TLS, actually) is that it uses X.509 with all of its problems, like the mixed scope of certification authorities. It's like using global variables in your program - it is never a good idea. I can only agree with Bruce Schneier, Dan Kaminsky and virtually all of the competent security experts that we have to completely abandon the inherently flawed security model of X.509 certificates and finally embrace DNSSEC as specified by the IETF. It is both stupid and irresponsible to have a trust system used to verify domain names in 2011 that is completely DNS-agnostic - one in fact designed in the 1980s, when people were still manually sending /etc/hosts files around! There could be a lot of better solutions than the good old X.509, but in reality the only reasonable direction we can choose today is to use the Domain Name System Security Extensions. Use 8.8.8.8 and 8.8.4.4 exclusively as your recursive resolvers. Configure your servers and clients. Define and use the RRSIG, DNSKEY, DS, NSEC, NSEC3 and NSEC3PARAM records in all of your zones. Use and verify them on every resolution. Educate people to do the same. This problem will not solve itself. We have to start acting.

Uh, right, because cryptographic operations are free and don't represent a DNS DoS opportunity, right? Oh wait...

Re:Boring (1)

divisionbyzero (300681) | more than 2 years ago | (#37270592)

All of the news about SSL security flaws is starting to get boring. [...] This problem will not solve itself. We have to start acting.

Uh, right, because cryptographic operations are free and don't represent a DNS DoS opportunity, right? Oh wait...

What he said.

Re:Boring (0)

Anonymous Coward | more than 2 years ago | (#37270406)

There are certain situations where things like global variables are fine. Same thing for "goto" and similar "taboo" language features.

Only an inexperienced programmer or a non-programmer would think any differently.

You're just parroting what you heard from somebody that heard from somebody that heard from some moron that this is the way you do things. Pure myth.

Convergence (1)

Artemis3 (85734) | more than 2 years ago | (#37272572)

Screw that, I moved to Convergence [convergence.io].

Re:Convergence (0)

Anonymous Coward | more than 2 years ago | (#37273092)

We need people to port this to other browsers.
I have not worked with browser plugins, but I took a quick look and it looks like it's all .js.
I'll look into what needs to be done to port this to Opera, because I like its system but not the browser it currently requires.

Re:Boring (1)

arkhan_jg (618674) | more than 2 years ago | (#37272594)

While a complete re-work of the certificate signers is a good idea - and implementing DNSSEC widely is also a good idea - we're still going to need TLS, and that means certificates. DNSSEC doesn't provide any mechanism for encrypting the data stream after you've securely established you're talking to the right server, nor should it; that's not its job.

So DNSSEC protects against DNS poisoning and some MITM attacks; but there are plenty of other attacks, such as fake gateways, passive listening to Wi-Fi traffic, ARP spoofing and so on, so you're still going to want TLS or some other form of encrypted data transfer for email, HTTP, etc. OK, IPsec implementations may be a lot more widespread with IPv6, but that's a long way off even in a best-case scenario.

I think the best we can hope for, for now, is DNSSEC + TLS. TLS secures your data stream, DNSSEC ensures you've not been DNS spoofed. 'Officially' signed certs for TLS is a belts-and-braces approach. With that, someone has to spoof the certificate or get a fake issued AND find another way of watching your traffic without DNS poisoning - which is harder to do on a bulk scale if you're not the network provider/ISP itself. Self-signed certs are too easy to spoof. Yes, it's not a perfect system, but security is a layered approach, not a one-trick pony.

Diginotard (2)

utkonos (2104836) | more than 2 years ago | (#37270142)

So, I still say that if trust is lost once, nothing that Diginotard touches can ever be trusted.

Re:Diginotard (1)

Haedrian (1676506) | more than 2 years ago | (#37270256)

Except that most people don't know anything about certificates, and don't know why they should care.

And adding/removing certificate authorities isn't an easy task you'd give to anyone.

So unless the higher-ups (site owners / browser vendors) kill this company, there's nothing much the rest of us can do.

Re:Diginotard (2)

sjames (1099) | more than 2 years ago | (#37270768)

It's quite easy to do actually, but in this case, the vendors are taking care of it. The update went out on debian-security today. IIRC, Mozilla is planning an update as well.

Re:Diginotard (1)

SmurfButcher Bob (313810) | more than 2 years ago | (#37271900)

I fear we may be missing the point. Maybe.

There are indicators that the number is a lot more than just 200 certs - some speculate that there were log wipes involved, which means we can expect a very, very large number.

If that's true, it's wonderful that some browsers are blocking a bogus *.google.com cert. It'll be useless, however, if the attackers generated 50,000 OTHER *.google.com certs, along with multiple certs for world+dog.com.

As to the impact of this CA's incompetence, it's pretty evil when you consider that these people will bury you up to your arms and throw rocks at your face.
1. You use firefox, and have addons.
2. I hijack addons.mozilla.org.
3. You fire up firefox, which dutifully checks for updates.

That CA needs to not exist.

Re:Diginotard (1)

sjames (1099) | more than 2 years ago | (#37272128)

You misunderstand: the updates aren't merely blacklisting a few known bad certs; they are invalidating any cert that ever has been or ever will be signed by this CA. Effectively it makes them not exist.

Any legitimate holder of a cert signed by them will need to go get a new one from someone else.

That's it, fuck CAs (4, Insightful)

GameboyRMH (1153867) | more than 2 years ago | (#37270230)

CAs are done, stick a fork in 'em. Just generate your own certs. A CA cert only increases your chance of getting MITM'ed (since you don't have sole control over distribution), and without a big store of certs in one place, they'll be harder to steal.

Fuck CAs, install Convergence / Perspectives, call it a day.

Re:That's it, fuck CAs (3, Informative)

Karl Cocknozzle (514413) | more than 2 years ago | (#37270702)

Couldn't agree more. Links for the lazy: Convergence [convergence.io] and Perspectives [perspectives-project.org].

Enjoy.

Re:That's it, fuck CAs (1)

GameboyRMH (1153867) | more than 2 years ago | (#37270892)

BTW, after giving Convergence a try, I still prefer Perspectives. Convergence's anonymization feature is nice but it uses a mechanism that installs a local CA, causing CertPatrol to go nuts, and it doesn't offer anywhere near the level of customization of Perspectives.

Re:That's it, fuck CAs (1)

seyyah (986027) | more than 2 years ago | (#37271896)

It's not Convergence. It's "Convergence Beta". And I'm not interested in beta software protecting my security.

Wait, you're saying that they use "Beta" to market their product because it sounds cool? Yeah, not interested in that either.

Re:That's it, fuck CAs (0)

Anonymous Coward | more than 2 years ago | (#37272246)

I really wish they could support additional browsers, like Chrome, Opera, IE, or Safari.

Re:That's it, fuck CAs (1)

DamnStupidElf (649844) | more than 2 years ago | (#37272488)

No one can "steal" your existing certificate unless they also steal your web server's private key. A CA can issue a fraudulent certificate for your site, but anyone can generate a self-signed certificate for your site as well. How does a CA make MITM attacks more likely? How many users visit your web site for the first time on an untrusted wireless network or in a country where the government may want to feed them a fake certificate anyway? Propagation and widespread trust of self-signed certs is what would actually cause a rise in the number of MITM attacks. This story is about known bad certificates that everyone can avoid by removing a single root CA from their browser. I haven't heard of any reported MITM attacks resulting from the bad certificates, although I wouldn't be surprised if some occurred. In a world of self-signed certificates there isn't even a way to begin to detect MITM attacks (much less stop them) unless you watch every connection between every client and web server and keep track of every possible certificate ever generated and its use history. Did your favorite web site just change its self-signed certificate because they lost the private key due to hardware failure, because it expired, or some other legitimate reason? Or is this a MITM attack?

Re:That's it, fuck CAs (1)

roman_mir (125474) | more than 2 years ago | (#37272830)

But when you do say it [slashdot.org], they [slashdot.org] come out of the woodwork and promise that you'll be torn to shreds.

We need a way [slashdot.org] to have certificate fingerprints distributed in lists, multiple copies of those with redundancy, because you can't trust a CA [slashdot.org]. How do we know that other CAs are not having the same problems? How do we know CAs are not in on this stuff?

How do we know anything if we allow secrets rather than openness in these matters?

Wait a second... (1)

LittlePud (1356157) | more than 2 years ago | (#37270324)

...wouldn't the certs be useless without the associated private keys?

Re:Wait a second... (5, Informative)

bill_mcgonigle (4333) | more than 2 years ago | (#37270700)

...wouldn't the certs be useless without the associated private keys?

No, the government of Iran generated a key and a CSR for *.google.com, had Diginotard sign them (not sure if this was social or technical hack) and then deployed them inline for a MitM attack on the residents of the area their organization controls.

They have the key and the cert. They didn't get Google's key or cert, they have their own.

I wonder how many dissidents have died because of this sloppy CA and the reliance on the CA system.
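
To spell out the mechanics: generating the key and CSR is the easy part that anyone can do; it is only the CA's signature that turns it into a weapon. A sketch with the pyca/cryptography package (my choice of library, not anything from the thread):

    # Sketch: what "generated a key and a CSR for *.google.com" means.
    # Anyone can produce this pair; the damage comes from a CA signing it.
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    csr = (x509.CertificateSigningRequestBuilder()
           .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME,
                                                       "*.google.com")]))
           .sign(key, hashes.SHA256()))
    print(csr.public_bytes(serialization.Encoding.PEM).decode())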

Why isn't Iran revoked? (1)

Anonymous Coward | more than 2 years ago | (#37271920)

Seriously. A state has shown itself to intentionally and willfully hack, and yet they are allowed to stay connected? Why haven't they been cut off?

Re:Wait a second... (0)

Anonymous Coward | more than 2 years ago | (#37273162)

> No, the government of Iran generated a key and a CSR for
> *.google.com, had Diginotard sign them (not sure if this was social
> or technical hack) and then deployed them inline for a MitM attack
> on the residents of the area their organization controls.

No, the governments of *.* generated keys and CSRs for *.TLD, had $CA sign them (not sure if this was social or technical hack) and then deployed them inline for a MitM attack on the residents of the area their organization controls.

TFIFY!

And how much software checks for revoked certs? (1)

Anonymous Coward | more than 2 years ago | (#37270330)

Seriously, I wonder what percentage of software actually checks the CRLs. It's extra steps that are annoying to code, and I bet a lot of programmers just skipped it.

So even though these certs have been or will be revoked, that doesn't mean you're safe. If the programmers of the software you're using were lazy and didn't code the extra steps to get the CRLs (or maybe the CRL itself is inaccessible for some reason), then you're screwed.

This is one of those things that programmers would have never considered until it actually became a real issue.
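
For reference, this is roughly what the "extra steps" amount to; a sketch using the pyca/cryptography package (my choice; cert.pem is a placeholder, a URI-style distribution point is assumed, and a real client would also verify the CRL's own signature):

    # Sketch: read the CRL distribution point out of a certificate,
    # download the CRL, and look for the cert's serial number on it.
    import urllib.request
    from cryptography import x509

    cert = x509.load_pem_x509_certificate(open("cert.pem", "rb").read())
    cdp = cert.extensions.get_extension_for_class(x509.CRLDistributionPoints)
    url = cdp.value[0].full_name[0].value  # first distribution point URL

    crl = x509.load_der_x509_crl(urllib.request.urlopen(url).read())
    hit = crl.get_revoked_certificate_by_serial_number(cert.serial_number)
    print("REVOKED" if hit is not None else "not on this CRL")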

Re:And how much software checks for revoked certs? (1)

Zeinfeld (263942) | more than 2 years ago | (#37271676)

Most check CRLs and OCSP.

The problem is what they do when they can't reach that data. All the browsers out there now simply fail silently and go to the site anyway.

For some reason this is seen as a problem with CAs and not with the broken browsers. But from the browser providers' perspective, 99% of their customers are really interested in getting to sites reliably and without fuss, and less than 1% are dissidents whose lives might be threatened.

This is not the fault of the guy who writes the code. They only own one small piece of the browser and do not get to make the 'commercial' decisions.

Expecting this to be any different with a DNSSEC scheme is to engage in mystical thinking of a naive variety.

X.509 is fundamentally broken (2)

subreality (157447) | more than 2 years ago | (#37270336)

How long until we collectively admit that centralized SSL certs are actually causing more problems than they solve?

The SSH model works great: connect to a site once; verify the fingerprint once if you consider a MITM to be a reasonable concern; cache the key and know that forever after you're connecting to the same site as you did the first time. That narrows the attack vector to active MITM attacks where Mallory can intercept your first connection (if they want to actually get your data) and every connection thereafter (if they don't want to be noticed). It makes widespread surveillance impossible (they'd be noticed) and targeted attacks very unlikely to succeed.

You can even add a CA to that model: have the first-time dialog be "[ nobody | ] certifies that is . Does that sound OK to you? (looks good) (hell no)". In other words, just make self-signed certs less scary, and CA-signed certs more scary... Which would accurately reflect the actual level of security you're getting: both are probably OK, and one is a little more certified but certainly not golden. Only pop up the BIG SCARY WARNING when the cert changes, even if it's signed by the CA.
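
A sketch of that SSH-style trust-on-first-use model applied to HTTPS (stdlib-only Python; the JSON pin store and hostname are my own placeholders):

    # Sketch: trust-on-first-use pinning. First visit stores the cert's
    # fingerprint; later visits compare against the cached value.
    import hashlib
    import json
    import os
    import ssl

    PIN_FILE = "pins.json"

    def check_pin(host, port=443):
        pem = ssl.get_server_certificate((host, port))
        fp = hashlib.sha256(ssl.PEM_cert_to_DER_cert(pem)).hexdigest()
        pins = json.load(open(PIN_FILE)) if os.path.exists(PIN_FILE) else {}
        if host not in pins:
            pins[host] = fp  # first visit: trust and cache
            json.dump(pins, open(PIN_FILE, "w"))
            return "first use: pinned"
        return "match" if pins[host] == fp else "MISMATCH: possible MITM!"

    print(check_pin("www.example.com"))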

Re:X.509 is fundamentally broken (1)

J0nne (924579) | more than 2 years ago | (#37270672)

Except in the case of countries like Iran and China, which can easily mount a permanent MITM attack on webmail providers if they want to, covering the first and any subsequent connections. I'm not saying the current system is perfect or even good, but your alternative is worse in many respects.

Re:X.509 is fundamentally broken (1)

subreality (157447) | more than 2 years ago | (#37271226)

1) You have an optional CA. Sites like Gmail will get a cert. That (usually) covers the initial connection.

2) Pop a huge warning if the cert changes, even if the CA signs the new one. This is the really important part.

3) Even if all of the network is subverted AND all of the CAs are subverted, the MITM is still detected when people VPN to another country, or dial out, or travel, or the fingerprints are manually verified.... you can't guarantee the availability of encryption, but you can always detect widespread (country-wide) MITM attacks.

Re:X.509 is fundamentally broken (1)

ArsenneLupin (766289) | more than 2 years ago | (#37273016)

2) Pop a huge warning if the cert changes, even if the CA signs the new one. This is the really important part.

There are Firefox extensions which do just that: Certificate Patrol [mozilla.org]. If a certificate changes without reason (i.e. while still being far from expiration), a warning pops up.

However, the problem with this approach is again the stupidity of the webmail operators and ignorance of how certificates work.

Some large webmail providers (Yahoo, Google, ...) with load-balanced banks of servers sometimes have half of their servers on one certificate and the other half on another (possibly even signed by another CA...), resulting in lots of false "Certificate Patrol" alarms as you unknowingly switch between the two, diluting their value...

Gosh, how hard is it to switch over all servers at once? Does it really have to take a week?

Re:X.509 is fundamentally broken (1)

Tomato42 (2416694) | more than 2 years ago | (#37273166)

Both CertPatrol and Convergence have to fix this problem on their side. If you have load balancing between a few datacenters (as Google and a few other companies do; just look at Amazon Web Services), you don't want to use a single certificate for all of them. It's a really bad idea from a security perspective.

A much better situation would be if Google published a list of SHA-1 and SHA-256 fingerprints of all web server certificates they use in a single place, on a server which uses an EV certificate from a single CA that changes only on expiration (or compromise).

Re:X.509 is fundamentally broken (1)

ArsenneLupin (766289) | more than 2 years ago | (#37273036)

The only way this would go unnoticed is if they had the MITM already in place before Hotmail or Gmail existed...

... because otherwise the early adopters would suddenly see the certificate change at the moment the surveillance was introduced.

And because it is impossible to probe remotely whether a browser already has the certificate cached or not, these countries can't even selectively switch on the MITM for the "new" users.

Re:X.509 is fundamentally broken (1)

Junta (36770) | more than 2 years ago | (#37270976)

How long until we collectively admit that centralized SSL certs are actually causing more problems than they solve?

A bit harsh, but the model has some issues due to obsolete objectives.

The SSH model works great

Only if you habitually visit the same place does it provide any significant reduction in risk, so if you see a product you want on an as-yet unvisited storefront, you have zero protection against MITM. Maybe they can't keep it up for days, but a single visit is sufficient to mess you up. If the server's key is compromised? You are pretty well screwed, as not fixing the problem *looks* more secure than fixing it (e.g. the big debacle when Debian's OpenSSL botched key generation and every box in the damn world had to regenerate its SSH host keys). By itself, this is just self-signed certificates. It fixes none of the problems and creates more.

In other words, just make self-signed certs less scary, and CA-signed certs more scary... Which would accurately reflect the actual level of security you're getting:

But that's just not the case, self-signed certs *shouldn't* be any less scary than at least some semblance of a CA with a diligent client pulling CRLs.

Only pop up the BIG SCARY WARNING when the cert changes, even if it's signed by the CA.

See, this discourages organizations from ever changing keys even if they think there is a *chance* they were compromised. If they realized a key was world-readable for a couple of days, then in models that allow trusted key change they will change it, as it isn't a terrible burden on the user and better safe than sorry. In this model, they'd conclude it is unlikely that anyone got the key and assume the risk, to avoid subjecting users to a scary message, inconveniencing them, and reducing their confidence in the site's ability to keep its credentials safe.

The problem with X.509 is not third-party attestation with pre-trusted keys. The problem is the goal of operating completely detached from the authorities, resulting in obscenely long key validity with a clumsy revocation model. DNSSEC at least addresses this by doing away with the assumption that authorities must not be bothered: a public key is signed only for as long as the DNS record would live, and the client is forced to talk to the authority every day (at a small incremental cost over existing DNS traffic). SSHFP records can already go in DNSSEC. If you replaced certificates with a pointer to a CA server that had to sign them every time, that would work too, but surprisingly DNS already solved the problems associated with this sort of data distribution at internet scale, so piggy-backing makes a lot of sense.
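
Since SSHFP came up: those records are already easy to consume; a dnspython sketch (placeholder hostname):

    # Sketch: fetch SSHFP records, the SSH host-key fingerprints that can
    # be published (and signed) in DNS as mentioned above.
    import dns.resolver

    for rdata in dns.resolver.resolve("host.example.com", "SSHFP"):
        print(rdata.algorithm, rdata.fp_type, rdata.fingerprint.hex())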

Re:X.509 is fundamentally broken (1)

Junta (36770) | more than 2 years ago | (#37271030)

I have to add that OCSP really does a lot to address the X.509 issues...

Re:X.509 is fundamentally broken (1)

Tomato42 (2416694) | more than 2 years ago | (#37273200)

OCSP does this for a very good architectural reason: it's a whitelist mechanism. You need to get a response saying "yes, we did issue this cert, it's valid", not the blacklist mechanism of a CRL saying "we did revoke those certificates".

OCSP forces the CA to know and remember all certificates it has issued; this way, even if the private key was used to create a rogue sub-CA, it won't be valid, as OCSP won't give the "yes, it's valid" response. The DigiNotar case shows this is a real problem: they don't know which or how many certificates have been created. With the OCSP validation model, those certificates are useless.

Now if only Firefox, Chrome, Safari and IE actually required an OCSP response to mark a cert as valid... By default only Opera marks the connection as insecure when the OCSP responder is unreachable.

Re:X.509 is fundamentally broken (1)

Junta (36770) | more than 2 years ago | (#37273746)

OCSP forces the CA to know and remember all certificates it has issued; this way, even if the private key was used to create a rogue sub-CA, it won't be valid, as OCSP won't give the "yes, it's valid" response. The DigiNotar case shows this is a real problem: they don't know which or how many certificates have been created. With the OCSP validation model, those certificates are useless.

So I'm a little naive on the underlying tech, but shouldn't OCSP, being a whitelist, require only that valid certs be remembered, while revoked/invalid certs could be forgotten? Or is it that DigiNotar didn't retain database info even for valid certs, which would be another damning indication of their general ineptitude as a CA?

Re:X.509 is fundamentally broken (1)

subreality (157447) | more than 2 years ago | (#37271558)

Only if you habitually visit the same place does it provide any significant reduction in risk, so if you see a product you want on an as-yet unvisited storefront, you have zero protection against MITM.

Your home ISP isn't going to MITM you. They want to keep you as a customer. The coffee shop you visit isn't going to. They don't want to get prosecuted for credit card fraud. Same thing with a hotel network.

I'd expect it from random TOR exit nodes, but why would you use an anonymity network to shop with a credit card?

Passive eavesdropping is a real concern, but what's an example of a network where people would engage in active MITM attacks *hoping* that someone will try to send secret information on their very first visit to a new site?

But that's just not the case: self-signed certs *shouldn't* be any less scary than even a bare-bones CA paired with a diligent client pulling CRLs.

I agree: self-signed should be slightly more scary than CA-signed. I just think that both need to move toward the center: self-signed should say "This is your first time here, and we have no way to verify who this is other than if you know this fingerprint: <fingerprint>" vs. CA-signed "This is your first time here, and <CA> says <fingerprint> is good". Those are much more realistic, useful messages in a model that allows wider adoption of SSL everywhere than the current one, where a worthless CA grants certs that give absolutely no warning vs. self-signed certs that give the current over-scary message.


See, this discourages organizations from ever changing keys even if they think there is a *chance* they were compromised.

That's a very good point. How about this: just use the CAs for revocation. The CA can't revoke a cert until you sign the revocation with your signing key (so the CA isn't a central point of attack for revoking the cert of a site someone wants to target), and the revocation includes the fingerprint of the new cert. If the new cert matches the fingerprint, cache it and move on; if it doesn't match, big scary message.

That would limit the exploit to situations where the attacker gets a copy of the secret key from you AND convinces the CA that the revocation is legitimate. If the CA requires some out-of-band confirmation, that's a pretty tough bar to clear.
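To be clear about what the client would check, here's a sketch of that logic. The scheme doesn't exist anywhere, so the message layout, the "next-cert-sha256:" field, and the RSA assumption are all invented for illustration (Python, using the `cryptography` library):

    import hashlib

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding

    def accept_new_cert(old_site_pubkey, revocation_msg, signature, new_cert_der):
        # 1. The revocation must verify under the *old* site key, so the CA
        #    alone can't revoke a cert out from under a site it wants to attack.
        try:
            old_site_pubkey.verify(signature, revocation_msg,
                                   padding.PKCS1v15(), hashes.SHA256())
        except InvalidSignature:
            return False  # forged (or CA-only) revocation: big scary message

        # 2. The signed revocation names its successor; the fingerprint it
        #    carries must match the cert the server presents now.
        expected = revocation_msg.decode().split("next-cert-sha256:")[1].strip()
        return hashlib.sha256(new_cert_der).hexdigest() == expected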

All that said, I agree that the current DNSSEC model is going to push things forward a lot.

Re:X.509 is fundamentally broken (1)

Tomato42 (2416694) | more than 2 years ago | (#37273326)

The coffee shop may not MITM you, but the same can't be said of its customers. And you can't trust hotels in countries such as Burma or Lesotho (hell, you can't trust most hotels in Egypt!).

Doing ARP cache poisoning (even on a wired network, not just WiFi) is painfully easy, and open-access networks can't deploy any countermeasures against it, since you don't know the MAC addresses of the devices that will connect.

Re:X.509 is fundamentally broken (1)

subreality (157447) | more than 2 years ago | (#37271612)

/. ate my angle brackets. Here's what I meant:

"[ nobody | <CA>] certifies that <fingerprint> is <domain>. Does that sound OK to you? (looks good) (hell no)"

Re:X.509 is fundamentally broken (1)

Tomato42 (2416694) | more than 2 years ago | (#37273366)

Add a big scary red warning whenever plain HTTP is used, one that can't be set to "always allow": "You are submitting data over an unsecured connection. You have no way of knowing whether the website you see is really served by <domain> or whether it has been modified in transit. Are you SURE you want to continue? NEVER continue if you entered any personal info such as names, birthdays, passwords or credit card info." Do that, and we're set.

Re:X.509 is fundamentally broken (1)

ArsenneLupin (766289) | more than 2 years ago | (#37272988)

The SSH model works great: connect to a site once; verify the fingerprint once if you consider a MITM to be a reasonable concern; cache the key and know that forever after you're connecting to the same site as you did the first time.

You can (theoretically) do that with SSL too. Connect to the site and you get a certificate warning. Instead of blindly accepting the certificate, read the SHA1 fingerprint (displayed in the dialog box asking for acceptance) and call the helpdesk of the business you're interacting with to verify that it is the correct one. After accepting the certificate once, your browser has it in its cache and knows forever (or rather, until expiration) that you're connecting to the same site as you did the first time.

Why "theoretically"? Even assuming the helpdesk wouldn't be overloaded by such requests, there's the obvious problem that the understanding about how certificates work is so poor among the general population that the helpdesk is likely to just go: "fingerprint? what's that? But don't worry: you trusted us enough to open a bank account with us, so you can trust our bank account too".

If people knew how this stuff worked, you'd get a small card with the certificate fingerprint on it from your bank when you open an account, so you could verify it the first time you connect to your bank's site.
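The whole first-visit ritual fits in a few lines, by the way. Here's a sketch using nothing but the Python standard library (the in-memory dict stands in for a persistent known-hosts file):

    import hashlib
    import ssl

    known_hosts = {}  # hostname -> SHA1 fingerprint, like ~/.ssh/known_hosts

    def tofu_check(host, port=443):
        pem = ssl.get_server_certificate((host, port))
        der = ssl.PEM_cert_to_DER_cert(pem)
        fp = hashlib.sha1(der).hexdigest()  # what you'd read to the helpdesk

        if host not in known_hosts:
            print("First visit to %s; verify out of band: %s" % (host, fp))
            known_hosts[host] = fp  # accept once, remember from now on...
            return True
        return known_hosts[host] == fp  # ...and warn loudly if it ever changes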

Re:X.509 is fundamentally broken (1)

muckracer (1204794) | more than 2 years ago | (#37273192)

> The SSH model works great: connect to a site once; verify the
> fingerprint once if you consider a MITM to be a reasonable
> concern; cache the key and know that forever after you're
> connecting to the same site as you did the first time.

It works great for sites with one up to a few certs. But there are distributed (Akamai-style) sites out there that will present you a different cert with almost every page refresh! PITA... Normally this is hidden, since your browser will "trust" all of them anyway, but with CertPatrol etc. installed you get an idea of just how messed up things are in the background.

Interesting thought (3, Insightful)

93 Escort Wagon (326346) | more than 2 years ago | (#37270384)

Let's say you were hoping to insinuate yourself unnoticed into traffic destined for a particular site - for the sake of argument, let's use the Tor project. What would be the best way to do this without someone suspecting you had a specific target in mind? Stealing a couple hundred certs all at once, only one of which is related to your project, comes immediately to mind.

It's not like similar approaches haven't been taken before, even in the non-digital world. I seem to recall that was one explanation John Muhammad gave for the DC Sniper attacks - he really wanted to kill his ex-wife, and hoped killing a bunch of other people would keep suspicion from him.

I don't get this (2)

HBI (604924) | more than 2 years ago | (#37270462)

Chain of events as follows:

1) Fraudulent issue of certificate
2) Revocation of certificate
3) Clients find out...how?

As an example, I downloaded the cert Google offered up on encrypted.google.com. It had no OCSP responder specified, but it did have a CRL specified. Now, is Firefox checking the CRL embedded in the cert or not? I think it is, but the only way to confirm would be to actually try to hit a site with a revoked cert. FF by default is configured to use OCSP only if the cert has the information embedded in it, which this Google cert didn't. That doesn't give me the warm fuzzies about other certs, either. I checked a few others. The Verisign sites, including RapidSSL, have an OCSP URI embedded. So that's better.
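Checking what revocation pointers a cert actually embeds is easy to script. A sketch assuming Python's `cryptography` library and a cert saved locally as cert.pem:

    from cryptography import x509
    from cryptography.x509.oid import AuthorityInformationAccessOID, ExtensionOID

    cert = x509.load_pem_x509_certificate(open("cert.pem", "rb").read())

    try:
        aia = cert.extensions.get_extension_for_oid(
            ExtensionOID.AUTHORITY_INFORMATION_ACCESS).value
        ocsp_urls = [d.access_location.value for d in aia
                     if d.access_method == AuthorityInformationAccessOID.OCSP]
    except x509.ExtensionNotFound:
        ocsp_urls = []

    try:
        cdp = cert.extensions.get_extension_for_oid(
            ExtensionOID.CRL_DISTRIBUTION_POINTS).value
        crl_urls = [n.value for dp in cdp for n in (dp.full_name or [])]
    except x509.ExtensionNotFound:
        crl_urls = []

    # A cert with CRL pointers but no OCSP URI is exactly the case above.
    print("OCSP responders:", ocsp_urls or "none embedded")
    print("CRL distribution points:", crl_urls or "none embedded")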

My point is that the whole revocation business remains slipshod and saying that you 'revoked' the certificate doesn't mean a hell of a lot in reality.

Re:I don't get this (0)

Anonymous Coward | more than 2 years ago | (#37270512)

My point is that the whole revocation business remains slipshod and saying that you 'revoked' the certificate doesn't mean a hell of a lot in reality.

You can revoke any cert you want from your browser anytime you want.

Re:I don't get this (0)

Anonymous Coward | more than 2 years ago | (#37270946)

You don't know what revoke means.

Re:I don't get this (0)

Anonymous Coward | more than 2 years ago | (#37270774)

Firefox has no support for CRLs. It is either OCSP or nothing.

Re:I don't get this (1)

parlancex (1322105) | more than 2 years ago | (#37271260)

Just delete DigiNotar from your trusted CAs. Honestly, I was just going to wait for the revocation lists like everybody else, but seeing the scope of this now, I think they've earned the right to be fired from the Internet forever.

Re:I don't get this (1)

thegarbz (1787294) | more than 2 years ago | (#37273276)

This is still a manual process, which is just great now, a month after 200 certificates were actively used in the wild, and also great for those who read Slashdot. I've removed it too, but what about the rest of the family, who don't read Slashdot?

Re:I don't get this (2)

BZ (40346) | more than 2 years ago | (#37271284)

Yes, this is why browsers are also shipping updates with the certs explicitly distrusted... and why the fact that DigiNotar did not tell browsers about the problem a month and a half ago, when it happened, is such a huge issue.

Doesn't matter (0)

Anonymous Coward | more than 2 years ago | (#37270896)

I ignore those warning boxes anyway.

Manually Remove DigiNotar as a CA! (2)

trawg (308495) | more than 2 years ago | (#37271572)

Can't see anyone having posted this, but Mozilla has instructions [mozilla.com] on how to remove DigiNotar as a trusted CA in Firefox. I'm sure other browsers have similar processes.

I also note they've just released [mozilla.com] a new Firefox (and Thunderbird) version that has removed the CA entirely - good response:

Because the extent of the mis-issuance is not clear, we are releasing new versions of Firefox for desktop (3.6.21, 6.0.1, 7, 8, and 9) and mobile (6.0.1, 7, 8, and 9), Thunderbird (3.1.13, and 6.0.1) and SeaMonkey (2.3.2) shortly that will revoke trust in the DigiNotar root and protect users from this attack. We encourage all users to keep their software up-to-date by regularly applying security updates. Users can also manually disable the DigiNotar root through the Firefox preferences.

Google and Mozilla should ban the CA's IPs (1)

Marrow (195242) | more than 2 years ago | (#37271594)

Let the next time someone at that company tries to "google" something be a very unpleasant experience.
The Google death sentence.

Re:Google and Mozilla should ban the CA's IPs (0)

Anonymous Coward | more than 2 years ago | (#37272196)

Yeah, and why not hunt down the people who run the company and execute them?
Send their ashes into the sun, burn down their house, and remove their names from all public record and make it illegal to ever speak of them again? /sarc

Idiots Gets Robbed After Leaving Front Door Open (1)

Mr. Lwanga (872401) | more than 2 years ago | (#37271868)

Blaming hackers, foreign governments and "them" after security is compromised? Absolute garbage. Then, to follow up on their malfeasance, they did not disclose the full extent of the breach. This plays like a broken record: no penalties and no responsibility.

Providers of physical security (locksmiths or alarm technicians) are bonded against screwups, so why can't cert vendors do the same?

A simple idea (1)

gnasher719 (869701) | more than 2 years ago | (#37273102)

The problem here is that any CA in my list of root certificates is able to create a valid certificate for, say, www.google.com, and that some CA can be tricked into giving someone other than Google such a certificate. That alone is not enough; the attacker also has to redirect traffic that should go to www.google.com to their own server. The whole thing is mostly dangerous because _many_ people go to www.google.com in the first place; the same attack against, say, my homepage would have very little potential to cause damage.

Here is what browsers could do: every time you visit a website and get back a certificate, record which CA issued it. Then, if www.google.com suddenly returns a certificate from a different CA, the browser can give you a warning. Now, if I use Google to look up information about platypuses in Australia, I might not care. If I use it to find information that I know my government wouldn't want me to look at, I would be careful. If I give my credit card to Amazon and the CA has changed, I would be careful.
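A toy version of that bookkeeping, assuming only the Python standard library (a real browser would persist the table and show UI instead of returning a boolean):

    import socket
    import ssl

    pinned_issuers = {}  # hostname -> issuer DN seen on earlier visits

    def issuer_unchanged(host, port=443):
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                # Flatten the issuer RDNs into a dict (one attribute per
                # RDN is assumed here for simplicity).
                issuer = dict(rdn[0] for rdn in tls.getpeercert()["issuer"])

        if host not in pinned_issuers:
            pinned_issuers[host] = issuer  # first visit: nothing to compare
            return True
        # A different CA than last time is the red flag described above.
        return pinned_issuers[host] == issuer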

The attack would therefore be largely confined to users with a brand-new computer and no browsing history yet.

Oh no! (0)

Anonymous Coward | more than 2 years ago | (#37273116)

Not like it matters... see Moxie's talk on SSL.

Why not warn the users? (1)

naranek (1727936) | more than 2 years ago | (#37273506)

Why settle for just revoking the certificates? I may be wrong about this, but if the certificates are stolen and revoked, people shouldn't bump into them any longer unless the criminals are actively using them. So instead of just saying "Hey, this cert isn't valid", why not put out a big warning that someone is doing nasty things to your connection right now?
