
No More SSL Revocation Checking For Chrome

timothy posted more than 2 years ago | from the substitute-my-own dept.


New submitter mwehle writes with this bit from Ars Technica: "Google's Chrome browser will stop relying on a decades-old method for ensuring secure sockets layer certificates are valid after one of the company's top engineers compared it to seat belts that break when they are needed most. The browser will stop querying CRL, or certificate revocation lists, and databases that rely on OCSP, or online certificate status protocol, Google researcher Adam Langley said in a blog post published on Sunday. He said the services, which browsers are supposed to query before trusting a credential for an SSL-protected address, don't make end users safer because Chrome and most other browsers establish the connection even when the services aren't able to ensure a certificate hasn't been tampered with."
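The "soft fail" behavior the summary describes (the browser proceeds when the revocation service can't be reached) can be sketched in a few lines of Python. All names here are illustrative, not Chrome's actual code:

```python
# Illustrative sketch of "soft fail" revocation checking; function and
# responder names are made up, this is not Chrome's actual logic.

def check_revocation(cert_serial, ocsp_lookup):
    """Return 'revoked', 'good', or 'unreachable'."""
    try:
        return "revoked" if ocsp_lookup(cert_serial) else "good"
    except ConnectionError:
        return "unreachable"

def soft_fail_trust(cert_serial, ocsp_lookup):
    # Soft fail: only a definitive "revoked" answer blocks the connection.
    # An unreachable responder is silently treated the same as "good",
    # which is the broken-seat-belt behavior Langley criticizes.
    return check_revocation(cert_serial, ocsp_lookup) != "revoked"

# Simulated responders for the example:
def responder_ok(serial):
    return serial in {42}  # pretend serial 42 has been revoked

def responder_down(serial):
    raise ConnectionError("OCSP responder unreachable")
```

An attacker in a MITM position can simply force the `responder_down` case, so even a revoked certificate is accepted.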


152 comments

Why? (4, Insightful)

John Hasler (414242) | more than 2 years ago | (#38955223)

...Chrome and most other browsers establish the connection even when the services aren't able to ensure a certificate hasn't been tampered with.

Why?

Re:Why? (4, Insightful)

Spad (470073) | more than 2 years ago | (#38955313)

Because otherwise (as I've discovered by switching it on in SeaMonkey) about 20% of the time the connection to the CRL/OCSP server fails for whatever reason, and so your site won't load, even though there's nothing wrong with its certificate.

Now you might argue that false positives are preferable to ignoring problems, but it does break the user experience pretty badly.

Re:Why? (1)

Anonymous Coward | more than 2 years ago | (#38955387)

Someone stealing your banking information is a pretty bad user experience, too.

Re:Why? (1)

alex67500 (1609333) | more than 2 years ago | (#38955793)

But if the certificate was stolen from the Bank, it's their fault, not yours.

Re:Why? (2)

93 Escort Wagon (326346) | more than 2 years ago | (#38956101)

But if the certificate was stolen from the Bank, it's their fault, not yours.

Fault is not particularly relevant when the parent comment was "Someone stealing your banking information is a pretty bad user experience, too." Sure, it's their fault. Sure, they'll eventually have to make it good. But, in the meantime, you're the one dealing with all the crap that's happened.

Re:Why? (4, Funny)

Anonymous Coward | more than 2 years ago | (#38957185)

And that explains why when my car is rear-ended, it's always completely undamaged.

The magic of liability protects me.

True story.

Re:Why? (3, Insightful)

icebraining (1313345) | more than 2 years ago | (#38957647)

That's not the only way to get a compromised certificate.

Remember that any CA can create a certificate for any domain. So it might be that some attacker got hold of an intermediate CA certificate and issued a certificate for the bank's domain. Now, the CA detects the breach and revokes the intermediate certificate, but since Chrome fails to check the revocation, the forged certificate still gets accepted.

You have a full MITM scenario without any fault from the bank or the bank's CA.
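The scenario above comes down to checking every link in the chain, not just the leaf. A toy illustration (serial names invented for the example):

```python
# Toy illustration: the leaf certificate itself looks fine; trust fails
# only if the client actually learns the intermediate was revoked.
# Serial names are invented for the example.

def chain_is_trusted(chain, revoked):
    """chain is leaf-first, e.g. [leaf, intermediate, root]; a single
    revoked link poisons everything issued beneath it."""
    return all(cert not in revoked for cert in chain)

chain = ["bank-lookalike-leaf", "stolen-intermediate", "trusted-root"]
revoked = {"stolen-intermediate"}
```

If the revocation set never reaches the client (because the check soft-fails), `revoked` is effectively empty and the forged chain passes.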

Re:Why? (4, Insightful)

Imagix (695350) | more than 2 years ago | (#38955497)

Now you might argue that false positives are preferable to ignoring problems, but it does break the user experience pretty badly.

And this is the problem with security. People want the security/safety.... unless it's inconvenient. And yes, there is something "wrong" with the certificate: it is unverifiable whether it is still valid, which is exactly the check you asked for.

Re:Why? (5, Insightful)

Guppy06 (410832) | more than 2 years ago | (#38955891)

The real problem with false positives isn't that they are "inconvenient" but that they breed complacency. If 99% of the alerts you get are false, what are the odds you'll actually give enough due diligence to catch the remaining 1%?

False warnings (5, Interesting)

Firethorn (177587) | more than 2 years ago | (#38957125)

I harp on this constantly. At work, we fairly routinely issue people new certificates and revoke the old ones, even when there's no belief that the certs were compromised. As a result, someone can send you an email and get new certs later that same day. This is a problem because all the digitally signed emails they sent earlier now register as revoked, and Outlook proceeds to tell you this, that the email can't be trusted, etc...

This happens frequently enough that I encounter it 2-3 times a week. The email has always been valid; they just got new certs between their sending the message and my opening the email (possibly for historical reasons).

Same deal as with the California cancer warning - stick it on EVERYTHING, and it gets ignored. If you put cancer warnings on apples, people may not pay attention to the cancer warning on that bottle of test chemical.

Re:False warnings (1, Interesting)

Imagix (695350) | more than 2 years ago | (#38957661)

So you're misusing the system, and complaining. When you revoke the old cert, you are stating that it is no longer to be trusted. And now you complain when it says "don't trust this"? I guess a car analogy: (Where I live, you are required to have proof of insurance stickers on your license plate.) You give a properly insured car to your buddy. 2 days later you go and remove the insurance stickers from the car. A week later, your friend is pissed off because the cops gave him a ticket for being uninsured. "But it was insured when I gave it to him."

Re:Why? (1)

saveferrousoxide (2566033) | more than 2 years ago | (#38956959)

It's also not just about the user experience. Keep in mind these certificates are often for businesses who want to sell stuff. If people trying to get to Bob's Computer Bits start getting cert errors, they're going to run from that site and maybe even bother to give it a terrible review, making it that much less likely that someone else will choose Bob's over NewEgg.

Re:Why? (1)

Imagix (695350) | more than 2 years ago | (#38957685)

Pointing at the wrong failure. CRLs aren't the problem here. The CA that's publishing the CRL is. If the user is validating certs, and the cert cannot be validated, then they cannot know whether they are dealing with Bob's Computer Bits or Dewey Screwem and Howe.

Re:Why? (5, Insightful)

kbg (241421) | more than 2 years ago | (#38955597)

"CRL/OCSP server fails for whatever reason".

No, it fails because the server administrators for the CRL are incompetent morons. A CRL server is a mission critical server that should stay up 24-7.

If Chrome and other browsers would simply display an error page with text explaining the problem and pointing to the offending server, I am sure the problems would be fixed very quickly.

Re:Why? (2)

Anonymous Coward | more than 2 years ago | (#38955703)

Guessing you have no idea what it takes to keep a server running 24/7. There are thousands of things that can go wrong and bring down a server, from simple errors or bugs to denial-of-service attacks.

Re:Why? (5, Insightful)

Anonymous Coward | more than 2 years ago | (#38955791)

If a CA cannot keep their uptime, they shouldn't be in the business. Part of the fairly high cost of certificate purchases is the fact that the CA is going to run multiple, geographically distributed data centers with adequate server coverage. That, or hire a provider that is ready/willing/able to do this.

It is just like banks -- if a bank's server failed causing a loss of transaction info for a period of time, nobody would care how hard it is to have 99.999% uptime -- the bank failed in its duties regardless of the reason (hardware failure, Internet issues, security issues, etc.) This is just the same with CAs and revocation.

Re:Why? (2)

xorsyst (1279232) | more than 2 years ago | (#38955859)

Oh come on, it doesn't have to be a single server. Plenty of web businesses are able to manage 24-7, it's not outside the wit of man.

Re:Why? (0)

Anonymous Coward | more than 2 years ago | (#38956511)

Guessing you have no idea what it takes to keep a server running 24/7. There are thousands of things that can go wrong and bring down a server from simple errors or bugs to Denial-of-service attacks.

No, *you* have no idea what it takes to keep a server running 24/7.

It's not hard - unless you think a CLI instead of a Windows GUI is hard....

Re:Why? (2)

rioki (1328185) | more than 2 years ago | (#38956807)

If twitter can run their servers at something like 99.999% then a CA can too...

Re:Why? (3, Insightful)

icebraining (1313345) | more than 2 years ago | (#38957683)

Twitter as an example of reliability? Are you joking? You do know where the expression "fail whale" came from, right?

Re:Why? (1)

Joce640k (829181) | more than 2 years ago | (#38956843)

Given how much they charge for certificates they ought to be able to set up a decent server + backup server.

Re:Why? (2)

rainer_d (115765) | more than 2 years ago | (#38957357)

A CRL is basically a flat file. It should not be difficult to make it available 24x7 - at least for someone who charges outrageous amounts of money in exchange for basically digitally signing a couple of bytes.

Re:Why? (0)

Anonymous Coward | more than 2 years ago | (#38957541)

Guessing you have no idea what it takes to keep a server running 24/7.

And apparently neither do you. Do you understand what "mission critical" means? Do you think Google aims for 100% uptime or 80% uptime? What about the root name servers? Please stop posting, it's really embarrassing.

Re:Why? (2)

SETIGuy (33768) | more than 2 years ago | (#38957149)

A CRL server is a mission critical server that should stay up 24-7.

In order for that to happen you'll need a significant monetary incentive based on uptime. Without that you're going to get a server that's up most of the time.

Re:Why? (1)

neonKow (1239288) | more than 2 years ago | (#38957913)

How about, "if your CRL isn't available, our browser rejects your cert and people can't get to sites that you sign"? Who is going to pay for certs at the same CA next year if your CA goes offline so often that it interferes with their customer base?

The monetary incentive is already there. Google is removing that incentive.

Re:Why? (1)

sgunhouse (1050564) | more than 2 years ago | (#38957391)

Opera does. Of course, it will allow you to continue to the page anyway should you choose to, but at least it tells you (on by default) that the integrity of the cert could not be verified.

Re:Why? (1)

javelinco (652113) | more than 2 years ago | (#38957431)

Perhaps they mean that the connection to the CRL/OCSP server fails for some reason? Fact is, there are a LOT of reasons why getting a response might fail that have absolutely no relation to whether the server is up or not. There are Internet routing issues for the ISP of the servers (those should be able to be handled). There are routing issues for the users of the browser. There are hacks on the user's system that prevent hitting that site. There are intermittent issues with connectivity due to communication stacks on the client that have nothing to do with the browser, but might be due to driver or software issues - or even hardware issues. Bottom line - problems connecting to a given server cannot be solved by the server administrators alone.

Re:Why? (5, Insightful)

Hentes (2461350) | more than 2 years ago | (#38956141)

They could load the site and simultaneously display a small warning, thus letting the users decide whether they want to trust it or not. Loading an untrusted site is not a tragedy by itself.

Re:Why? (1)

rioki (1328185) | more than 2 years ago | (#38956841)

As a matter of fact, I would like this feature. If my bank site fails, I will not use it. If some crummy board fails, who cares...

Re:Why? (3, Insightful)

Joce640k (829181) | more than 2 years ago | (#38956881)

At the very least don't display the padlock icon as if everything is cool.

(Also, keep retrying the certificate request to see if it succeeds. Change the padlock color when it does).

vs. Google Analytics (1)

geoffrobinson (109879) | more than 2 years ago | (#38957619)

I bet Chrome won't be skipping over Google Analytics or their ad engine, which slows down my page, breaking my experience something fierce.

Re:Why? (0)

OverlordQ (264228) | more than 2 years ago | (#38955323)

Because blaming CAs is easier than blaming your own product.

Re:Why? (0)

Anonymous Coward | more than 2 years ago | (#38955925)

Because blaming CAs is easier than blaming your own product.

And posting dipshit comments is easier than actually reading the article.

Here's a taste of why you just made yourself look like a complete shitwad:
"Chrome will instead rely on its automatic update mechanism to maintain a list of certificates that have been revoked for security reasons."
or
"There is a class of higher-security certificate, called an EV certificate, where we haven't made a decision about what to do yet."
or
"Microsoft, Opera and Firefox also push software updates for serious incidents rather than rely on online revocation checks."

or maybe
"The problem with these checks, that we call online revocation checks, is that the browser can't be sure that it can reach the CA's servers. There are lots of cases where it's not possible"

But the most important point is that nowhere does anybody "blame the CA's".

Re:Why? (0)

Anonymous Coward | more than 2 years ago | (#38956219)

Michael Kristopeit, is that you?

Re:Why? (0)

Anonymous Coward | more than 2 years ago | (#38955951)

Yeah, 'cause those CAs are verrry trustworthy.

Except when they aren't:

VeriSign Admits Multiple Hacks in 2010, Keeps Details Under Wraps [pcworld.com]

Verisign, the company responsible for guiding most of the world's Internet users to the correct websites and once the largest encryption certificate issuing authority, has acknowledged that it was successfully hacked several times in 2010.

The admission was disclosed last fall in a VeriSign filing with the U.S. Securities and Exchange Commission (SEC), but did not come to light until today when Reuters reported on its investigation of new SEC guidelines on such disclosures.

"In 2010, the Company faced several successful attacks against its corporate network in which access was gained to information on a small portion of our computers and servers," said VeriSign in the quarterly report it filed with the SEC in October 2011.

VeriSign Hacked: What We Don't Know Might Hurt Us [pcworld.com]

Let’s start with what (little) we know. The disclosure did not happen as a result of VeriSign discovering the breach and taking responsible, proactive action to alert customers and address the situation. No, VeriSign buried the information in a quarterly Securities and Exchange Commission (SEC) filing as if it was just another mundane tidbit.

IT staff at VeriSign allegedly discovered the compromise in 2010, but hid the incident from upper management until sometime in 2011. VeriSign itself may not be at fault for the initial delay in disclosure, but it appears that a significant amount of time has passed since VeriSign executives learned of the breach, and yet the company still tried to sneak the information covertly in an SEC filing.

I used to have both options enabled in Firefox: to validate certs with an OCSP server and, if there's a problem, to automatically treat the certificate as invalid. I disabled both options after reading about VeriSign's despicable coverup. What's the point? Based on this episode and past behaviour, it's clear CAs would rather put every internet user at risk than take responsible steps to alert users and their own customers. They only disclose compromises to their systems when some outside source forces them to. They're completely untrustworthy.

Re:Why? (1)

Hawke (1719) | more than 2 years ago | (#38956153)

Just FYI, depending on exactly when in 2010 it was hacked, Verisign may not have been in the certificate business. Symantec purchased the business in May of 2010, and IIRC the operational transfer happened pretty quickly.

That "just" leaves the DNS system as a possible valid target. You know, the system that's probably more important than SSL.

Re:Why? (0)

Anonymous Coward | more than 2 years ago | (#38956957)

From Verisign's 10-Q report on Oct. 28

We experienced security breaches in the corporate network in 2010 which were not sufficiently reported to Management.

In 2010, the Company faced several successful attacks against its corporate network in which access was gained to information on a small portion of our computers and servers. We have investigated and do not believe these attacks breached the servers that support our Domain Name System (“DNS”) network. Information stored on the compromised corporate systems was exfiltrated. The Company’s information security group was aware of the attacks shortly after the time of their occurrence and the group implemented remedial measures designed to mitigate the attacks and to detect and thwart similar additional attacks. However, given the nature of such attacks, we cannot assure that our remedial actions will be sufficient to thwart future attacks or prevent the future loss of information. In addition, although the Company is unaware of any situation in which possibly exfiltrated information has been used, we are unable to assure that such information was not or could not be used in the future.

The occurrences of the attacks were not sufficiently reported to the Company’s management at the time they occurred for the purpose of assessing any disclosure requirements. Management was informed of the incident in September 2011 and, following the review, the Company’s management concluded that our disclosure controls and procedures are effective. However, the Company has implemented reporting line and escalation organization changes, procedures and processes to strengthen the Company’s disclosure controls and procedures in this area. See Item 4 “Controls and Procedures” in Part I of this report.

And from the Reuters article that broke the story
http://www.reuters.com/article/2012/02/02/us-hacking-verisign-idUSTRE8110Z820120202

Symantec spokeswoman Nicole Kenyon said "there is no indication that the 2010 corporate network security breach mentioned by VeriSign Inc was related to the acquired SSL product production systems."

Symantec: IOW, they didn't tell us whether the breach involved SSL when we bought the service.

The point is VeriSign wasn't forthcoming at all about the breach when it occurred, which seems to be SOP for CAs.

Re:Why? (1)

nedlohs (1335013) | more than 2 years ago | (#38955343)

Why not read the blog post?

Or if you are too lazy to click a link think about it for a second. Hint: should every site with an SSL cert from X not work because X is unreachable for whatever reason right this second?

Re:Why? (0)

Anonymous Coward | more than 2 years ago | (#38955403)

Why not read the blog post?

Or if you are too lazy to click a link think about it for a second. Hint: should every site with an SSL cert from X not work because X is unreachable for whatever reason right this second?

Are you high? Of course it shouldn't.

Re:Why? (4, Insightful)

kbg (241421) | more than 2 years ago | (#38955677)

Yes it should. CRL server for X is mission critical and should always work. There is no excuse for it not working.

Re:Why? (1)

SETIGuy (33768) | more than 2 years ago | (#38957177)

How much are you willing to pay for it to always work?

Re:Why? (2)

icebraining (1313345) | more than 2 years ago | (#38957807)

CAs get up to hundreds of dollars per certificate. Whatever they need to serve a damn static file with 100% uptime has been more than paid for.

Re:Why? (1)

SETIGuy (33768) | more than 2 years ago | (#38957893)

Apparently they don't have sufficient incentive to achieve 100% uptime. If you want 100% uptime, you need to change the way certificates are purchased.

Re:Why? (5, Insightful)

Richard_at_work (517087) | more than 2 years ago | (#38955423)

Yes. Because if you are in a MITM position to inject your own compromised cert for site Y, then you are also in the perfect position to deny access to the cert validation servers to stop the validation happening.

The solution is more resilient servers and services, not eliminating the checking.

Re:Why? (3, Insightful)

vlm (69642) | more than 2 years ago | (#38955553)

The solution is more resilient servers and services, not eliminating the checking.

Such as, say, having the Mighty GOOG distribute that "CRL in all but name". Which brings us full circle back to the original article, and what they're doing.

Re:Why? (5, Informative)

Baloroth (2370816) | more than 2 years ago | (#38955589)

Hint: should every site with an SSL cert from X not work because X is unreachable for whatever reason right this second?

Yes. Anyone conducting a MITM attack is practically by necessity in control of the user's network, and will just block access to the CRL, which means browsers will never stop MITM attacks unless you do exactly that. And yes, I know that is the point of the change. My point is: they chose the wrong fix. Sites should only be listed as trusted if the browser really knows they can be (so far as possible, of course). Being "secure" should meet a minimum standard, and failing that standard means the site should not be listed as "secure" - but most browsers list it anyway. Choosing to simply ignore part of the established SSL standard is not the solution.

Opera does precisely this. It still uses HTTPS (I think), but it doesn't list the page as being secure, since the page really has exactly the same security as any non-HTTPS site (for trust purposes).

Re:Why? (0)

Anonymous Coward | more than 2 years ago | (#38957439)

That sounds very practical and fitting with the intent of all those "padlock/green-bar/etc" icons in the browser window. If you can't validate the site is trusted, don't list it as trusted, but don't block access (unless HSTS is active).

Re:Why? (1)

Anonymous Coward | more than 2 years ago | (#38957865)

Should you still send cookies to it, though? Should it get the “https://example.org” origin or be downgraded to “http://example.org”?

Re:Why? (0)

Anonymous Coward | more than 2 years ago | (#38955809)

Yes. Every site with an SSL certificate signed by X should not work because X was unreachable. Absolutely. Security fails safe and that means deny the connection if you can't verify the certificate hasn't been revoked.

Re:Why? (1)

betterunixthanunix (980855) | more than 2 years ago | (#38955469)

Convenience. Users care more about convenience than security, so if a browser actually takes action on OCSP or CRL issues, the user will just switch to a competing browser that does not.

And nothing of value was lost... (1)

fuzzyfuzzyfungus (1223518) | more than 2 years ago | (#38955229)

Given the painful uselessness of CRLs as presently implemented (we obviously need some way of revoking the things; but the present one is agonizingly broken), I'm just not too sad about the prospect of no longer telling Verisign every time I visit one of their SSL-cert customers (the same is true of all the other certificate mongers who publish CRLs)...

What? (3, Interesting)

OverlordQ (264228) | more than 2 years ago | (#38955279)

He said the services, which browsers are supposed to query before trusting a credential for an SSL-protected address, don't make end users safer because Chrome and most other browsers establish the connection even when the services aren't able to ensure a certificate hasn't been tampered with.

So he admits Chrome is broken, so he doesn't fix it and blames the CAs... makes sense.

Chrome will instead rely on its automatic update mechanism to maintain a list of certificates that have been revoked for security reasons. Langley called on certificate authorities to provide a list of revoked certificates that Google bots can automatically fetch.

So basically he wants CRLs? I thought he didn't want CRLs?

Re:What? (4, Informative)

Kohenkatz (1166461) | more than 2 years ago | (#38955467)

What he wants is CRLs stored on the local machine instead of querying a web service.
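That push-based design can be sketched roughly like this (names are assumed for illustration): the browser's updater periodically merges an aggregated revocation list, and per-connection checks are then purely local:

```python
# Assumed-name sketch of a locally stored revocation list: the browser's
# auto-updater pushes aggregated revocations, and certificate checks
# never touch the network.

class LocalRevocationStore:
    def __init__(self):
        self.revoked = set()

    def apply_update(self, serials):
        # Called by the update mechanism on its own schedule,
        # not during page loads.
        self.revoked.update(serials)

    def is_revoked(self, serial):
        # Pure local lookup: no responder for an attacker to block,
        # but only as fresh as the last pushed update.
        return serial in self.revoked

store = LocalRevocationStore()
store.apply_update({"serial-123", "serial-456"})
```

The trade-off is freshness: a certificate revoked after the last update still passes until the next push arrives.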

Re:What? (2)

Jon Stone (1961380) | more than 2 years ago | (#38956919)

CRLs are revocation lists that CAs used to publish and that clients could periodically download.

As a concept they were replaced with OCSP (online certificate status protocol). Here the client requests the current status of a certificate each time they are presented with it. The idea was that it would be more timely and up to date and meant CAs didn't need to publish a complete list of revoked certificates.

Now it seems Chrome wants to go back to a bodged version of the old way of doing things, where Chrome periodically requests the CRL from the browser vendor, or is periodically updated with the latest CRL?

Re:What? (4, Informative)

vlm (69642) | more than 2 years ago | (#38955495)

So basically he wants CRLs? I thought he didn't want CRLs?

No, he doesn't want CRLs distributed from sites no one cares about.

CRLs fail unlocked, so to speak. So if you can't pull a CRL from a CA, the browser goes on its merry way. So if you're pulling a MITM attack using a known compromised cert, "everyone knows" you just block access to the CA. End users will never notice. 99.9999% of end users will never visit anything with a *.verisign.com domain.

However, if you block access to www.google.com or plus.google.com or gmail.com because they're distributing a meta-CRL, THEN "most" users will notice the mighty GOOG is dead.

So you start with a web of trust where no one cares if any of the threads are cut. That's not gonna work. So how about piggybacking the web of trust on top of a Very Popular Site. Being a GOOG guy (I think?) he suggests his employer, although I know of no technical reason why itunes.apple.com or microsoft.com couldn't also distribute CRLs.

Now if you want to pull a MITM attack it's not enough to null route the CAs; you can't null route the Mighty GOOG without the users noticing, so you have to do something much more sophisticated to block access to the most recent CRL.

The funny part is now all the noobs who report internet outages as "google is down" are going to have to wonder, is someone trying to pull a MITM or is it just noob-speak for an internet outage...

Re:What? (1)

NatasRevol (731260) | more than 2 years ago | (#38955825)

CRLs fail unlocked because that's what the browsers set as the default to avoid inconvenience.

Now, if the CRLs are local, will they still fail unlocked if they're missing or corrupt? i.e. next virus target.

If so, then they've done nothing to help the fundamental problem of insecure security.

Re:What? (0)

Anonymous Coward | more than 2 years ago | (#38956133)

You really think that "google is down" would be enough of a tip off to your average internet user that someone is doing a man in the middle attack on them?

Shit, you geeks need to get out and socialize with the common folk more

Re:What? (2)

vlm (69642) | more than 2 years ago | (#38956397)

Once it gets escalated up to 2nd, 3rd, 4th tier support at the ISP that $bigcustomer has no GOOG access, and the 4th tier engineer checks his BGP and its all good and is scratching his head in mystification as to why $bigcustomer can't access certain GOOG ip addrs, maybe, just maybe, he'll remember this /. thread and catch a MITM in progress. Maybe.

Re:What? (0)

Anonymous Coward | more than 2 years ago | (#38956261)

The terms are fail open and fail closed.

Re:What? (1)

girlintraining (1395911) | more than 2 years ago | (#38956357)

99.9999% of end users will never visit anything with a *.verisign.com domain.

There are over 2 billion internet users... so you're saying only about 2,000 people in the world will visit that site? Is now a bad time to note that millions of websites have a 'Verisign Approved' widget on their site with a referral URL back to the *.verisign.com domain?

Re:What? (0)

Anonymous Coward | more than 2 years ago | (#38956541)

I know of no technical reason why itunes.apple.com or microsoft.com couldn't also distribute CRLs.

Sure they can, but querying a competitor's server for all SSL domains Chrome users want to visit doesn't sound like a good business if you're Google...

Re:What? (0)

Anonymous Coward | more than 2 years ago | (#38957233)

I know of no technical reason why itunes.apple.com or microsoft.com couldn't also distribute CRLs.

Sure they can, but querying a competitor's server for all SSL domains Chrome users want to visit doesn't sound like a good business if you're Google...

It would be a good way to get accused of mounting a DDoS attack against Apple or Microsoft.

Re:What? (0)

Anonymous Coward | more than 2 years ago | (#38955569)

It's not Chrome that's broken, you idiot, it's the CAs. It's worthless doing cert revocation checking if a fifth of the internet stops working. And it's not like Chrome is going to piss off a bunch of their users just to make the CAs react.

Re:What? (1)

Baloroth (2370816) | more than 2 years ago | (#38955611)

He doesn't want CRLs. Chrome (and many other browsers) already use these kinds of blocklists, so basically he just wants to not use CRLs at all.

Re:What? (0)

Anonymous Coward | more than 2 years ago | (#38955615)

While the solution is to make the servers for CRLs more resilient, what about doing a peer-to-peer type solution so there could be a network of servers to check?

For DNS, we don't rely on one server for .com, we made a distributed system and secured it (... are securing.. some of it). Sign the revocation, and then I don't care where it comes from; if I find it, I know not to trust the certificate. It may not be perfect, but it would at least be a step in the correct direction.

Doesn't work. (0)

Anonymous Coward | more than 2 years ago | (#38955923)

If the certificate is revoked... the certificate you retrieve for validation can easily be the bad one.

DNS has this happen all the time. A host changes IP... It can take several weeks before that propagates to the users.

Revocations need to be as close to immediate as possible, which is why OCSP was created... unfortunately, they didn't take into consideration what happens when a server is subject to a DDoS due to everyone trying to retrieve the revocation lists... Nobody has enough bandwidth to support it without failures.

Re:What? (1)

thue (121682) | more than 2 years ago | (#38956029)

> So he admits Chrome is broken, so he doesn't fix it and blames the CA's . . makes sense.

If the CAs' blacklists worked reliably, then chrome wouldn't need to ignore when they were down. So it is the CAs' fault.

Great idea. (3, Insightful)

Targen (844972) | more than 2 years ago | (#38955305)

Chrome and most other browsers establish the connection even when the services aren't able to ensure a certificate hasn't been tampered with.

And the solution, obviously, is not checking at all. Slick.

Re:Great idea. (2)

mlts (1038732) | more than 2 years ago | (#38955887)

I'm guessing a top-level certificate compromise is something that is simply to be ignored, then. We already went through this with a CA that got bankrupted due to security issues. Web browsers not dealing with revoked keys will just add significantly to the time that blackhats can MITM stuff.

The solution? I would say that SLCs (short-lived certificates) might be the best thing, with a mechanism to replace browser root keys periodically. Every time the browser is updated, CAs have new root keys. This way, a compromised root key will be replaced in short order, and if the root key is sound, having intermediate CA keys with a lifetime of hours to days would be the thing to do. This way, if a key is compromised, it will expire in a very short amount of time, even if there is no ability to connect to a revocation server.

Of course, there are holes in this -- fetching keys more often for example will generate more traffic.
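The short-lived-certificate policy above can be sketched in a few lines. This is only an illustration: the certificate dates and the four-day cutoff are made-up values, and real code would take the timestamps from the parsed X.509 certificate rather than a dict.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical certificate metadata; real code would pull these timestamps
# from the parsed X.509 certificate (e.g. ssl.SSLSocket.getpeercert()).
cert = {
    "notBefore": datetime(2012, 2, 6, tzinfo=timezone.utc),
    "notAfter": datetime(2012, 2, 8, tzinfo=timezone.utc),
}

def is_acceptable_slc(cert, max_lifetime=timedelta(days=4), now=None):
    """Accept only certificates that are currently valid AND short-lived,
    so a compromised key ages out quickly even with no revocation check."""
    now = now or datetime.now(timezone.utc)
    lifetime = cert["notAfter"] - cert["notBefore"]
    in_window = cert["notBefore"] <= now <= cert["notAfter"]
    return in_window and lifetime <= max_lifetime

# A two-day cert checked inside its window passes; a one-year cert would not.
print(is_acceptable_slc(cert, now=datetime(2012, 2, 7, tzinfo=timezone.utc)))  # True
```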

Re:Great idea. (1)

Chryana (708485) | more than 2 years ago | (#38957523)

This doesn't look like a bad idea... But the thing is, Google wants to get rid of on-the-fly revocation checks, and you suggest on-the-fly retrieval of short-lived certificates, so it might run into the same issues as the current system. Remember, a revocation list only grows, so you can just download the latest updates to it, which should not be too bandwidth-intensive if you do it every time you start your browser. A list of active certificates could not be kept that way and would have to be downloaded anew every time (or rather fetched on the fly, because it would probably be far too big).

Re:Great idea. (1)

MozeeToby (1163751) | more than 2 years ago | (#38956471)

No, the solution is checking at update time and storing the list of revoked certs locally, so that you don't need to rely on the CRL server being available (which is something a man in the middle could disrupt anyway).
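A minimal sketch of that push model, with hypothetical serial numbers: the revoked set is refreshed out of band and lookups are purely local, so a blocked CRL server can't silently disable the check.

```python
# Locally cached revocation set, refreshed with browser updates rather than
# queried per connection. The serial numbers below are hypothetical.
revoked_serials = set()

def apply_crl_update(serials):
    """Merge a pushed batch of revoked serials into the local set.
    Revocation lists only grow, so updates can be incremental."""
    revoked_serials.update(serials)

def is_revoked(serial):
    # A pure local lookup: no network round trip an attacker could block.
    return serial in revoked_serials

apply_crl_update({0x0ABC, 0x0DEF})
print(is_revoked(0x0ABC))  # True
print(is_revoked(0x1234))  # False
```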

Certificates are useless (0)

Anonymous Coward | more than 2 years ago | (#38955411)

Once a Lulzsec type organization got your private keyz you are fucked.

Being Google (1)

Klync (152475) | more than 2 years ago | (#38955453)

It's so easy to turn the Internet into whatever you want it to be, when you're the largest advertiser, largest service provider, largest search engine, largest content provider, software maker, hardware-platform-vendor, and even an ISP.

Have we reached the point where Google's "too big to fail"?

Re:Being Google (4, Informative)

Spad (470073) | more than 2 years ago | (#38955517)

All they're really doing is moving the certificate revocation checks from the client to the server; Google updates its own CRL and pushes it to Chrome so that the browser doesn't have to rely on potentially unresponsive 3rd party sites for its checks.

Re:Being Google (4, Insightful)

Ferzerp (83619) | more than 2 years ago | (#38956407)

Except now Google is presenting itself as an authority on the status of certificates that it has no business doing so with to the users of chrome.

This is a bad thing.

Re:Being Google (1)

Anonymous Coward | more than 2 years ago | (#38957057)

Except now Google is presenting itself as an authority on the status of certificates that it has no business doing so with to the users of chrome.

They already do that by shipping the browser with approved CAs. Are you suggesting that browser makers should stop doing that too?

Re:Being Google (3, Insightful)

swillden (191260) | more than 2 years ago | (#38957401)

Except now Google is presenting itself as an authority on the status of certificates that it has no business doing so with to the users of chrome.

This is a bad thing.

Google is already the authority which decides which CAs will be trusted by Chrome. How does it really change anything if Google also collects the CA CRLs and pushes them to the browser? Other than making revocations much more reliable.

Re:Being Google (1)

Ferzerp (83619) | more than 2 years ago | (#38955523)

This. It should not be within the realm of Google's purview to rewrite standards on an ad hoc basis.

Re:Being Google (0)

Anonymous Coward | more than 2 years ago | (#38955643)

I hope so because I can't live unless I can get what I want from the Internet in four keystrokes or less 90% of the time.

its all about making the browser fast. (1)

Anonymous Coward | more than 2 years ago | (#38955521)

I think Google is just trying to make the browser fast. Updating the CRL "out of band" always runs the risk of using an outdated CRL. I think they are trying to explain their decision by saying that the "in-line" check is anyways not 100% secure (so why waste precious milliseconds).

A single decade...maybe one and a half. (1)

Remus Shepherd (32833) | more than 2 years ago | (#38955657)

"Google's Chrome browser will stop relying on a decades-old method for ensuring secure sockets layer certificates are valid..."

'Decades'? As in more than one?

The first web browser was made by Tim Berners-Lee in 1991. That's technically two decades ago...but were there secure sockets? Layers? Certificates?

Yeah, I'm nitpicking. But the web didn't exist publicly before 1994 -- I remember formatting HTML for Mosaic back then, as our company tried to stay on the bleeding edge. This stuff really wasn't that long ago. Correcting 'decades' is just a nitpick, but if you start using 'centuries' or 'eons' then this old man is going to have to get out of his chair and start giving history lessons.

Yeah decades. (4, Informative)

Anonymous Coward | more than 2 years ago | (#38956057)

X509 certificates go back to July 3, 1988.

That makes certificates (and their revocation) 24 years old.

So yes, decades old.

He's right (4, Informative)

HBI (604924) | more than 2 years ago | (#38955681)

CRLs and OCSP are functionally useless. For PKI to work, certificate revocation must work also. Some kind of reliable system has to be constructed. Chrome is doing what they need to do to make this happen by abandoning the useless, outdated technologies of the past.

Before someone asserts otherwise, explain DigiNotar. While you are at it, explain all the rest of the CA compromises over the last two years. Then explain why each browser essentially had to distribute a patch to fix the problem rather than relying on OCSP and CRLs. If they are functional, that wouldn't have been necessary.

Re:He's right (3, Insightful)

xorsyst (1279232) | more than 2 years ago | (#38955823)

Opera didn't have to distribute a patch, because they use OCSP and CRLs properly. And I've never heard of anyone complaining that it causes a problem.

Re:He's right (1)

HBI (604924) | more than 2 years ago | (#38955913)

It's not possible that Opera uses OCSP and CRLs "properly". The simple reason why is that many certificates have no OCSP or CRL specified.

Can't use what's not there.

You should probably examine your security posture if you think you are adequately covered by Opera's handling of certificate revocation.

Re:He's right (0)

Anonymous Coward | more than 2 years ago | (#38956985)

It's not possible that Opera uses OCSP and CRLs "properly". The simple reason why is that many certificates have no OCSP or CRL specified.

So some CAs trust their customers to never lose their certs to the bad guys? How is this Opera's fault?

Re:He's right (0)

Anonymous Coward | more than 2 years ago | (#38955929)

We have a system now. It is called DNSSEC. Allow domain owners to manage their certificates via DNSSEC records. No need for CAs.
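The DNSSEC idea is roughly the DANE/TLSA approach: the domain owner publishes a fingerprint of the certificate in a signed DNS record, and the client compares. A toy sketch, with made-up certificate bytes standing in for real DER data:

```python
import hashlib

# Made-up stand-ins: der_cert is the certificate the server presents
# (DER bytes); dns_fingerprint is what the domain owner would publish
# in a DNSSEC-signed record.
der_cert = b"...DER-encoded certificate bytes..."
dns_fingerprint = hashlib.sha256(der_cert).hexdigest()

def matches_dns_record(presented_der, published_hex):
    """Trust the certificate only if its SHA-256 digest matches the
    signed DNS record, taking third-party CAs out of the decision."""
    return hashlib.sha256(presented_der).hexdigest() == published_hex

print(matches_dns_record(der_cert, dns_fingerprint))  # True
print(matches_dns_record(b"attacker's certificate", dns_fingerprint))  # False
```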

Re:He's right (0)

Anonymous Coward | more than 2 years ago | (#38955967)

Because DigiNotar was a Certificate Authority, embedded and trusted by your device. They had to remove it from the trusted CA list in the browser, since there is no master CA revocation list. I don't believe the CRL process checks whether a CA certificate has been revoked (who would it contact??), nor is there any mechanism in current OCSP to do that either.

The intention is for the trusted CA to publish a list of revoked certificates, but if the CA certificate itself is compromised then you must manually remove it from your browser.

I suppose the fix is to stop trusting so many CA root certificates in browsers by default and instead rely on a small handful of trusted CAs. Lesser-trusted CAs can have their certificates signed by one of the more trusted CAs in the chain. There should probably only be 1 or maybe 2 CAs in the world... probably run by the UN and/or USA government. (LOL, just kidding).

Re:He's right (0)

Anonymous Coward | more than 2 years ago | (#38956595)

You are incorrect. CRL and OCSP work just fine. The problem is that until the recent CA compromises became well known, most browsers didn't do them. That is why the compromises were effective. If browsers had been routinely using OCSP, the compromised CA could have revoked a bunch of certs, published to their OCSP servers, and the problem would have been stopped dead in its tracks.

Chrome is now compounding the problem rather than improving on it. We want everyone to use OCSP all the time so that if someone has not properly deployed, scaled and managed their farm of OCSP responders, people will notice and complain.

OCSP is also preferable to CRLs in many cases, because the size and frequency of updates make maintaining up-to-date CRLs in the field problematic.

This sounds like yet-one-more-reason to never use Chrome. Google half-baked engineering at its best...

Misleading headline as usual (0)

Anonymous Coward | more than 2 years ago | (#38955711)

Google will parse CRLs on their servers and push updates to Chrome instead of having Chrome poll on demand, because on-demand polling is most likely to soft-fail in the one situation where revocation matters, which is when your connection is being hijacked. All the major browsers already have a baked in revocation list that updates when there are major incidents. Chrome will just start pushing theirs outside of the normal browser update channel and stop on-demand CRL checks because those can't protect the user in an attack scenario; the soft-fail state is relied upon by too many existing bits of web infrastructure.

This makes sense in the case described... (1)

wolrahnaes (632574) | more than 2 years ago | (#38956027)

But it's not like a local attacker intercepting communication at your end is the only possible option. What if the datacenter the server is hosted in or an ISP along the path has been compromised? What if the target site's DNS has been modified to point to the attacker? There are many possible ways that an attacker could cover only parts of the internet or only the specific target itself, still allowing full access to the CRLs and thus allowing them to do their jobs.

That said, I can't argue with the privacy point.

IMO since the privacy concern is legit and it is true that it's not as useful as some might have believed, it should be made optional rather than removing it outright. Even understanding the situations where it doesn't work, there are still situations where it does.

On that note, using CRLs alone rather than OCSP eliminates the privacy concern to a substantial degree as then the CA only knows you accessed a site using a certificate that points at that CRL, not which certificate you're using. Of course the tradeoff is that it requires downloading the whole list every time you need it, so a whole different can of worms comes up with caching versus the ability to rapidly revoke certs.

A better fix (2)

MobyDisk (75490) | more than 2 years ago | (#38956665)

because Chrome and most other browsers establish the connection even when the services aren't able to ensure a certificate hasn't been tampered with.

This is just a case of unsafe defaults. To fix this in Firefox go to Tolls - Options - Advanced - Encryption - Validation and check the box that says "When an OCSP server connection fails, treat the certificate as invalid."

This is probably what the default should be anyway. I cannot imagine a fingerprint scanner that just assumed everyone was authorized if the database went down. If it can't validate, then it isn't valid!
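The hard-fail policy described above amounts to a one-line change in the error handling. A sketch, where check_ocsp is a hypothetical stand-in for a real OCSP query (here it simulates an outage):

```python
# Sketch of hard-fail vs. soft-fail revocation checking.
def check_ocsp(cert):
    raise ConnectionError("OCSP responder unreachable")  # simulated outage

def certificate_ok(cert, hard_fail=True):
    try:
        return check_ocsp(cert)  # True if the responder answers "good"
    except ConnectionError:
        # Soft-fail (the common browser default) carries on as if the
        # certificate were fine; hard-fail refuses the connection.
        return not hard_fail

print(certificate_ok("example-cert"))                   # False: hard-fail rejects
print(certificate_ok("example-cert", hard_fail=False))  # True: soft-fail accepts
```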

Re:A better fix (0)

Anonymous Coward | more than 2 years ago | (#38957949)

To fix this in Firefox go to Tolls - Options - Advanced - Encryption - Validation and check the box that says "When an OCSP server connection fails, treat the certificate as invalid."

Is there a similar fix for Chrome users?

Re:A better fix (1)

Anonymous Coward | more than 2 years ago | (#38957999)

To fix this in Firefox go to Tolls - Options

No way; that's too expensive.

Only (0)

Anonymous Coward | more than 2 years ago | (#38956837)

Even if you don't change the browser's default values (in the case of FF at least), you just need to check one checkbox to make it treat unresponsive OCSP servers as a validation error.

All Banking sites will now exclude Chrome (2)

Culture20 (968837) | more than 2 years ago | (#38957761)

Chrome: "We're not wearing eye-gear on the paintball field because we all shoot at torsos"
Banks: "That's nice. You're not playing on the paintball field without eye-gear."