Slashdot: News for Nerds


HTTP Strict Transport Security Becomes Internet Standard

samzenpus posted about a year and a half ago | from the way-it-is dept.

Security 98

angry tapir writes "A Web security policy mechanism that promises to make HTTPS-enabled websites more resilient to various types of attacks has been approved and released as an Internet standard — but despite support from some high-profile websites, adoption elsewhere is still low. HTTP Strict Transport Security (HSTS) allows websites to declare themselves accessible only over HTTPS (HTTP Secure) and was designed to prevent hackers from forcing user connections over HTTP or abusing mistakes in HTTPS implementations to compromise content integrity."
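Concretely, HSTS is just a response header: the server sends "Strict-Transport-Security" with a max-age (and optionally includeSubDomains), and a conforming browser refuses plain HTTP to that host until the policy expires. A minimal sketch of parsing such a header (the directive names come from the standard; the parse_hsts helper itself is just illustrative):

```python
# Parse a Strict-Transport-Security header value into its directives.
# parse_hsts is an illustrative helper, not from any particular library.
def parse_hsts(header_value):
    directives = {}
    for part in header_value.split(";"):
        part = part.strip()
        if not part:
            continue
        name, _, value = part.partition("=")
        # Valueless directives like includeSubDomains become True.
        directives[name.strip().lower()] = value.strip().strip('"') or True
    return directives

# A typical policy: stay on HTTPS for one year, subdomains included.
policy = parse_hsts("max-age=31536000; includeSubDomains")
print(policy)  # {'max-age': '31536000', 'includesubdomains': True}
```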


98 comments

Server Load (1)

jasonvan (846103) | about a year and a half ago | (#42073215)

Isn't the point of mixed web sites to lessen server load from HTTPS? I was always under the impression that a mixed environment, using HTTPS only when necessary, was the better idea. Obviously not mixing SSL and non-SSL on any single page like the article mentions, but wouldn't it be just as effective to advocate for better SSL implementations?

Re:Server Load (4, Informative)

Chrisq (894406) | about a year and a half ago | (#42073241)

Isn't the point of mixed web sites to lessen server load from HTTPS? I was always under the impression that a mixed environment, using HTTPS only when necessary, was the better idea. Obviously not mixing SSL and non-SSL on any single page like the article mentions, but wouldn't it be just as effective to advocate for better SSL implementations?

No, mixed web sites were never recommended, and many browsers will give a "mixed content" warning. The overhead isn't that high; Google commented after its switch to HTTPS-only for Gmail: [techie-buzz.com]

all of our users use HTTPS to secure their email between their browsers and Google, all the time. In order to do this we had to deploy no additional machines and no special hardware. On our production frontend machines, SSL/TLS accounts for less than 1% of the CPU load, less than 10KB of memory per connection and less than 2% of network overhead. Many people believe that SSL takes a lot of CPU time and we hope the above numbers (public for the first time) will help to dispel that.

Re:Server Load (2)

jasonvan (846103) | about a year and a half ago | (#42073275)

Very interesting article. Makes me realize I never personally did benchmarks of secure vs non. Maybe it's just kind of a "word on the street" type phenomenon from more senior admins than myself.

Re:Server Load (1)

Anonymous Coward | about a year and a half ago | (#42073321)

You can see "delay" with https sites easily, no benchmarks required either. It's just the performance price paid for the (hopefully) added security.

Re:Server Load (5, Informative)

Chrisq (894406) | about a year and a half ago | (#42073403)

You can see "delay" with https sites easily, no benchmarks required either. It's just the performance price paid for the (hopefully) added security.

Yes there is added latency due to the handshake, though on my broadband connection I can't say that I can see it. Google has proposed and is implementing [imperialviolet.org] several standards to reduce this delay though. Of course the biggest reduction in the effects of latency came with "Keep Alive" which we have now had for years.

Re:Server Load (2)

jasonvan (846103) | about a year and a half ago | (#42073553)

Yeah, I can't say I really notice a huge difference when switching to SSL on the page load front either. Being a self-centered asshole, I was more worried about CPU time. More CPU time means server upgrades which means more time working and less time pretending to work.

Re:Server Load (0)

Anonymous Coward | about a year and a half ago | (#42073699)

You can see "delay" with https sites easily, no benchmarks required either. It's just the performance price paid for the (hopefully) added security.

Yes there is added latency due to the handshake, though on my broadband connection I can't say that I can see it. Google has proposed and is implementing [imperialviolet.org] several standards to reduce this delay though. Of course the biggest reduction in the effects of latency came with "Keep Alive" which we have now had for years.

SSL session caching/reuse will also probably help on top of "Keep Alive".

Re:Server Load (1)

marcosdumay (620877) | about a year and a half ago | (#42073405)

To be fair, the myth that SSL requires too much hardware is rooted in reality. It was once true, back in the '90s.

That article has a big WTF:

So did Google just compromise speed for security? Definitely not.

That comes right after explaining how SSL increases latency. And to support the claim that they didn't compromise speed, the article offers a pair of non sequiturs: Google uses caches, and they optimized it to use much less memory.

(Just a note... Yes, SSL does increase latency, and no, it's not enough to notice in a well written page. But if you go making a ton of different connections, it will be quite noticeable.)

Recommendation and ads (1)

tepples (727027) | about a year and a half ago | (#42074083)

Yes, SSL does increase latency, and no, it's not enough to notice in a well written page. But if you go making a ton of different connections, it will be quite noticeable.

So what's the proper way to incorporate advertisements (which pay the writing and hosting bills) and recommendation widgets (which attract readers) "in a well written page"? Those tend to make "a ton of different connections".

Re:Recommendation and ads (1)

Chrisq (894406) | about a year and a half ago | (#42074111)

Yes, SSL does increase latency, and no, it's not enough to notice in a well written page. But if you go making a ton of different connections, it will be quite noticeable.

So what's the proper way to incorporate advertisements (which pay the writing and hosting bills) and recommendation widgets (which attract readers) "in a well written page"? Those tend to make "a ton of different connections".

The "ton of different connections" is a two-edged sword. Obviously each needs to establish a session, which incurs a concurrency overhead. On the other hand, requests can be overlapped past the "connections per server" limit you would get if they all came from the same site.

Re:Recommendation and ads (1)

marcosdumay (620877) | about a year and a half ago | (#42074549)

Most of the time, the proper way to incorporate those things is by asynchronous requests after the page loads.

Not everybody is lucky enough to get permission to do that with ads, but there is no reason to make the user wait for those social-network widgets to load before he can see your page.

Asynchronous iframe loading and reflows (1)

tepples (727027) | about a year and a half ago | (#42074953)

Most of the time, the proper way to incorporate those things is by asynchronous requests after the page loads.

The implementation on (for example) Cracked.com of asynchronous requests to Facebook and other social networks has caused article pages to reflow several times, and when these reflows move the navigation buttons as they tend to do, I end up clicking links other than the one I intended to click.

Re:Asynchronous iframe loading and reflows (1)

TheLink (130905) | about a year and a half ago | (#42075903)

I end up clicking links other than the one I intended to click.

On some sites that might be considered a feature to them.

Click fraud (2)

tepples (727027) | about a year and a half ago | (#42075959)

Accidental clicks on advertisements and "click fraud" look the same to a pay-per-click ad network.

Re:Asynchronous iframe loading and reflows (1)

marcosdumay (620877) | about a year and a half ago | (#42078735)

You know you can fill the space taken by those widgets before loading them, right?

(I don't doubt any problem you may have had doing this, I'm just pointing it out.)

Re:Asynchronous iframe loading and reflows (1)

tepples (727027) | about a year and a half ago | (#42078969)

You know you can fill the space taken by those widgets before loading them, right?

I think one of the Facebook widgets has one height for an active Facebook user (in which case a bunch of recommendation options presumably pop up) and another height for someone who has no Facebook account (in which case "Sign up for Facebook to see what your friends like" appears). But I don't feel like signing up for a Facebook account and giving Facebook my cell phone number just to verify this. I haven't even dug into the Google+ account that I have.

Re:Server Load (0)

Anonymous Coward | about a year and a half ago | (#42073445)

Meaning that Google's own load takes the other 99% of the CPU.

For people with less CPU intensive pages, the numbers will be different. For static pages, the load generated should be very low (if not, you're using the wrong server software), and that could easily be as low as the SSL overhead, meaning each will be 50% of the total load. In that case, switching to SSL means doubling the number of active CPUs.

Re:Server Load (1)

Chrisq (894406) | about a year and a half ago | (#42073609)

Meaning that Google's own load takes the other 99% of the CPU.

For people with less CPU intensive pages, the numbers will be different. For static pages, the load generated should be very low (if not, you're using the wrong server software), and that could easily be as low as the SSL overhead, meaning each will be 50% of the total load. In that case, switching to SSL means doubling the number of active CPUs.

I wouldn't have thought that Gmail was that CPU-intensive on the front-end server. Based on AJAX front-ends I have worked on, a lot of the work will be at the data-retrieval level, and the servers are more likely to be IO bound than CPU bound. It would be interesting to have some figures for the CPU load increase for static content. BTW, one way to reduce the CPU load is to disable the 3DES cipher options. Newer ciphers are more secure and use much less processor time.
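For what it's worth, trimming the cipher list is easy to sketch with Python's stdlib ssl module. The "!3DES" exclusion uses OpenSSL's cipher-string syntax; how much CPU it actually saves depends on which ciphers your clients would otherwise negotiate:

```python
import ssl

# Build a server-side TLS context and drop 3DES from the offered ciphers.
# "HIGH:!3DES:!aNULL" is OpenSSL cipher-string syntax: strong ciphers only,
# no triple-DES, no unauthenticated suites.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.set_ciphers("HIGH:!3DES:!aNULL")

names = [c["name"] for c in ctx.get_ciphers()]
# No triple-DES suites (named like DES-CBC3-SHA) should remain.
assert not any("3DES" in n or "DES-CBC3" in n for n in names)
```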

Re:Server Load (1)

rioki (1328185) | about a year and a half ago | (#42074027)

Except that, to properly serve via SSL and not get a mixed-content warning, you need to serve everything via SSL from the same domain. That means you also need to serve static files via SSL. If you have high CPU load on a server that serves static files, you are doing something wrong. Then again, where you serve static files the CPU should not be the bottleneck, so adding SSL may not actually hurt. But as always, it depends on your use case. Adding SSL is not a free operation, but the cost may be small enough that you won't notice.

Re:Server Load (1)

kqs (1038910) | about a year and a half ago | (#42075543)

Wait, do you really think that the same machines are handling google's SSL and google's content? Or amazon's, facebook's or any other massive web properties?

There may be SSL hardware to help the systems, but otherwise the SSL slowdown was always more about latency than CPU.

Re:Server Load (1)

lgw (121541) | about a year and a half ago | (#42075597)

For people with less CPU intensive pages, the numbers will be different. For static pages, the load generated should be very low (if not, you're using the wrong server software), and that could easily be as low as the SSL overhead, meaning each will be 50% of the total load. In that case, switching to SSL means doubling the number of active CPUs.

For low-CPU static content, CPU load is a non-issue. If you can saturate your pipe with 1% CPU, are you even going to notice when that doubles to 2%?

Not that it matters in the first place if you terminate SSL in your load balancer, as is commonly the case for non-financial stuff.

Re:Server Load (1)

Anonymous Coward | about a year and a half ago | (#42073743)

The problem with https isn't the server load per request but the additional server load due to not being able to cache resources as efficiently.

What breaks HTTP cache control? (1)

tepples (727027) | about a year and a half ago | (#42074095)

Since when do standard HTTP cache control headers, such as Expires at the end of next year, work less efficiently when the HTTP is encapsulated in TLS?
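To make that concrete: the cache headers are set the same way whether or not the connection is wrapped in TLS; what changes is who between the client and the origin can honour them. A quick sketch of the headers in question (the one-year lifetime is just an example value):

```python
import time
from email.utils import formatdate

# Standard HTTP caching headers. TLS doesn't change their meaning for the
# browser's own cache; it only cuts shared intermediary caches out of the path.
ONE_YEAR = 365 * 24 * 3600  # 31536000 seconds

headers = {
    "Cache-Control": "public, max-age=%d" % ONE_YEAR,
    "Expires": formatdate(time.time() + ONE_YEAR, usegmt=True),
}
print(headers["Cache-Control"])  # public, max-age=31536000
```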

Re:What breaks HTTP cache control? (2)

MikeBabcock (65886) | about a year and a half ago | (#42074817)

Since the cache servers in between the client and the server can't cache the content for multiple users.

Oh, you thought only browser caches mattered.

Consider the still excellent though ancient http://www.ircache.net/ [ircache.net]

Who is operating these caching proxies? (1)

tepples (727027) | about a year and a half ago | (#42075001)

Since the cache servers in between the client and the server

Who is operating these cache servers you're talking about?

  • If the end user is operating them, such as a business that provides caching for web sites viewed by users of its office network, the business can run an HTTPS caching proxy that uses a self-signed certificate, and everyone behind the firewall can install the business's root certificate.
  • If the operator of the web site is operating them, the caching load balancer can implement all SSL, and the web servers can communicate with the proxy through HTTP.
  • If an intermediate ISP is operating them, I thought hiding information from snooping ISPs and making sure that the ISP hasn't modified the information in transit was the entire point of HTTPS. Might there be a need for a subset of HTTPS that provides integrity without confidentiality?

Oh, you thought only browser caches mattered.

That's exactly what I thought, given the whole goal of SSL.

Re:Who is operating these caching proxies? (0)

Anonymous Coward | about a year and a half ago | (#42078175)

Oh, you thought only browser caches mattered.

That's exactly what I thought, given the whole goal of SSL.

It doesn't matter what the "whole goal of SSL" is, the FACT is that if you make intermediate caches unusable then there's more load on the origin servers.

Re:Who is operating these caching proxies? (1)

MikeBabcock (65886) | about a year and a half ago | (#42083197)

Every company I do border gateway IT for has a border cache that filters all Internet access for the client. All connections to Internet servers go through it, which reduces overall Internet bandwidth requirements, speeds up delivery of content accessed by multiple users (Windows Updates are a huge win), and allows blocking specific sites or URIs by policy controls.

I also operate the same thing at home because, well, I'm good at it and it's worth it for the PS3+PC+Laptop+Tablet+SmartPhones+3DS all accessing the Internet.

They're also sometimes implemented (although now more rarely) by ISPs to reduce their bandwidth needs to service clients to frequently visited websites.

cf. WPAD

Run an internal CA for your border cache (1)

tepples (727027) | about a year ago | (#42087731)

If the end user is operating them, such as a business that provides caching for web sites viewed by users of its office network, the business can run an HTTPS caching proxy that uses a self-signed certificate, and everyone behind the firewall can install the business's root certificate.

Every company I do border gateway IT for has a border cache that filters all Internet access for the client.

That's what I was talking about. I don't see why you couldn't just run your own internal CA on a home or office network and use that as part of a man in the middle that caches HTTPS communications.

PS3+PC+Laptop+Tablet+SmartPhones+3DS

Perhaps the problem with running an internal CA happens with devices onto which only the manufacturer, not the device's owner, can install SSL root certificates. Is this the case with the Sony, Nintendo, and perhaps Apple devices that you mentioned?

Re:Server Load (0)

Anonymous Coward | about a year and a half ago | (#42074181)

No, mixed web sites were never recommended and many browsers will give a "mixed content" warning. The overhead isn't that high, Google commented after its switch to https only for gmail: [techie-buzz.com]

all of our users use HTTPS to secure their email between their browsers and Google, all the time. In order to do this we had to deploy no additional machines and no special hardware. On our production frontend machines, SSL/TLS accounts for less than 1% of the CPU load, less than 10KB of memory per connection and less than 2% of network overhead. Many people believe that SSL takes a lot of CPU time and we hope the above numbers (public for the first time) will help to dispel that.

I did some benchmarks and switching to all SSL was a huge load on CPUs, to the point where we had to consider encryption accelerator hardware... in 1996.

Re:Server Load (0)

Anonymous Coward | about a year and a half ago | (#42074339)

If you look, Google uses some odd cipher preferences, preferring 128-bit RC4 instead of the defaults for most web servers (Apache, nginx, etc.), which prefer the strongest ciphers first, like 256-bit AES or Camellia.

Re:Server Load (0)

Anonymous Coward | about a year and a half ago | (#42074473)

> Obviously not mixing SSL and non-SSL on any single page like the article mentions
Mixed-content warnings are specifically what he said he was aware you shouldn't cause.

Having an SSL login page but the rest of the site unencrypted is acceptable for most websites.

Re:Server Load (3, Interesting)

MikeBabcock (65886) | about a year and a half ago | (#42074791)

HTTPS-only is a hack from a lack of foresight and breaks caching.

What we need is a signature-only system for content that isn't private. There's no reason to encrypt the front page images on CNN to each user, but signing them so they are provably from CNN is valuable.
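A toy version of that signed-but-unencrypted idea, using a detached signature over the content. (HMAC with a shared secret is used here purely for brevity; a real deployment would need an asymmetric signature, e.g. RSA or Ed25519, so that anyone can verify without being able to forge.)

```python
import hashlib
import hmac

# Hypothetical sketch: serve content in the clear, but ship a signature so
# clients can verify it really came from the origin. The shared key makes
# this a demo only; public-key signatures are what such a system would need.
KEY = b"demo-only-secret"

def sign(content: bytes) -> str:
    return hmac.new(KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, signature: str) -> bool:
    return hmac.compare_digest(sign(content), signature)

page = b"<html>CNN front page, nothing private here</html>"
sig = sign(page)
assert verify(page, sig)          # untampered content checks out
assert not verify(b"x" + page, sig)  # any modification is detected
```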

Re:Server Load (4, Insightful)

lgw (121541) | about a year and a half ago | (#42075669)

HTTPS-only is a hack from a lack of foresight and breaks caching.

What we need is a signature-only system for content that isn't private. There's no reason to encrypt the front page images on CNN to each user, but signing them so they are provably from CNN is valuable.

More myths from the 90s - wrong on both counts. Privacy always matters. Maybe you live in a country where browsing CNN won't land you in jail, but others aren't so lucky. And the only one who can't cache HTTPS traffic is the man-in-the-middle, which is sort of the point, really. Server-side there are plenty of hardware solutions to caching these days; it's just a question of where you terminate SSL. Client-side there are plenty of solutions as well, if you're running a home or office network and your users are willing to trust your cert (and thereby allow you to snoop).

Re:Server Load (1)

Rigrig (922033) | about a year and a half ago | (#42075973)

Maybe you live in a country where browsing CNN won't land you in jail, but others aren't so lucky.

"Honestly, I wasn't browsing at all, just making random https connections to cnn.com"...

Re:Server Load (0)

Anonymous Coward | about a year and a half ago | (#42076037)

HTTPS is a fundamental building block of things like TOR (or even just "VPN out of the country"). If the attacker can tap the line to the client, you need to be very sure the server isn't also a fake set up by the attacker, or nothing in between will help.

Re:Server Load (1)

MikeBabcock (65886) | about a year and a half ago | (#42083227)

More myths from the paranoid.

As someone who deals with security on a regular basis, I know that SSL doesn't fix any of the problems you're mentioning. In fact, my point is only invalidated in situations where the content itself is private to the user. Connection tracking negates your points entirely. Assuming any middle-man knowledge, I can already determine what sites you visit. Assuming any level of police state, I can just put a keyboard monitor on your USB input or a pinhole video monitor through your wall if I care.

SSL (especially without client-side certificates, which is 99.9% of the web) is just there to keep honest people out of your data.

Re:Server Load (0)

Anonymous Coward | about a year and a half ago | (#42073431)

Isn't the point of mixed web sites to lessen server load from https?

Others below have taken care of the mixed point (re: don't) - so let me handle the server load part:

What server load?

Any decent load balancer is going to have hardware offloading of SSL. Granted, this won't help for www.lookatthissiteaboutmycat.com and its $20/month "hosting plan", but hell, they probably couldn't afford an SSL cert in the first place. :p

Re:Server Load (1)

icebraining (1313345) | about a year and a half ago | (#42073547)

Why not? A cheap hosting provider is probably serving hundreds or thousands of sites from a single box; it makes sense to invest in a hardware solution that can be shared between all of them.

And you can get free certs, as long as you don't need extra validation.

Server Name Indication (3, Informative)

tepples (727027) | about a year and a half ago | (#42073835)

And you can get free certs, as long as you don't need extra validation.

It's not the SSL certificate that's the cost bottleneck for the smallest sites; it's the dedicated IPv4 address. A lot of the cheapest hosts use name-based virtual hosting, which requires the Server Name Indication (SNI) extension to SSL. Without SNI, the client can't see any certificate but the first one on port 443 of a given address, which means the user will see a serious certificate error most of the time. Popular web browsers that lack SNI support include Internet Explorer on Windows XP, for which Microsoft is providing extended support until April 2014, and the Android Browser on Android 2.x, which is on millions of Android phones and many inexpensive Android tablets.
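Whether a given TLS stack can send SNI is easy to check; in Python's stdlib it is a build-time flag, and the hostname handed to the context is what ends up in the ClientHello:

```python
import ssl

# HAS_SNI reports whether the underlying OpenSSL build supports Server Name
# Indication. With create_default_context(), the server_hostname argument
# later passed to wrap_socket() is sent in the ClientHello, letting a
# name-based virtual host pick the matching certificate.
print(ssl.HAS_SNI)

ctx = ssl.create_default_context()
# check_hostname means the returned certificate is also verified against
# that same server_hostname.
assert ctx.check_hostname
```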

The problem as I see it... (0)

Anonymous Coward | about a year and a half ago | (#42073233)

I kind of liked this state of most sites having no HTTPS version. It made breaking HTTPS much less of a concern/priority. Thus, if you had HTTPS on your site, you would feel much more secure and have the added benefit of feeling smugly better than most other people (no, I'm not kidding). Now, with it becoming "standard", it's in the spotlight, and there's every reason for the bad guys to break it.

Whenever *anything* becomes popular -- whatever it is -- it seems to be ruined. It's such a sad fact.

Re:The problem as I see it... (1)

Chrisq (894406) | about a year and a half ago | (#42073273)

Whenever *anything* becomes popular -- whatever it is -- it seems to be ruined. It's such a sad fact.

In this case I think that the only practical way to break it would be to break TLS. This is something that many people have tried and, with the current version, failed (though you should avoid TLS 1.0 [theregister.co.uk]).

I dunno (1)

marcosdumay (620877) | about a year and a half ago | (#42073341)

Does breaking the PKI count as breaking TLS?

Also, with all the recent attacks against the PKI, I'm wondering why you and the GP are concerned about people suddenly getting interested in breaking HTTPS.

Re:I dunno (1)

Chrisq (894406) | about a year and a half ago | (#42073511)

Does breaking the PKI count as breaking TLS?

What "breaking of PKI" are you referring to? If you mean certificates generated with non-random keys [theverge.com], then this does not break TLS itself - though of course connections using weak certificates could be compromised. Ditto for certificates issued with short keys [eeye.com]. As for the compromised CAs [slashdot.org], this could be seen as a weakness in the whole idea of centralised trusted CAs. I like the idea of decentralised CAs [ieee.org] but think that it is not something to be rushed into.

Re:I dunno (1)

marcosdumay (620877) | about a year and a half ago | (#42074577)

By "breaking the PKI" I mean basically compromising certificates (directly or by compromising CAs). So, yes, I'm referring to all of that.

Those are not attacks on TLS itself, but for most purposes they amount to the same thing.

Re:I dunno (2)

lgw (121541) | about a year and a half ago | (#42075773)

Here's the thing about crypto: it's never the math that's vulnerable in the wild, it's the key distribution. That's always been the fundamental weakness of HTTPS: too easy to get a fake cert that will fool enough users. Heck, one attacker got a real Bank of America cert by simply asking for one by email with forged headers.

CAs are the weak link. It doesn't matter whether the attack breaks whatever you consider "TLS itself": a bank vault with the world's strongest door but no sides or back is still not secure.

Worse, there's no obvious way to do CAs right. Decentralized thumbprint matching against certs issued by the current CAs seems the most practical approach for now: not perfect, but harder to subvert two very different systems in parallel.

Re:The problem as I see it... (1)

petermgreen (876956) | about a year and a half ago | (#42073945)

SSL is "broken" not by any flaw in the protocol itself but by the flawed trust model used.

Go through Mozilla's list of trusted root certificates and realise that every organisation on that list has the power to MITM you without generating a certificate warning (and the power to delegate that power to "intermediate certs" owned by third parties). Then ask yourself whether you really trust every organisation on that list not to MITM you and not to delegate signing powers to an organisation you do not trust.

You can probably trust SSL to keep most low-level criminals out, but if something really needs to be kept secret you should be managing keys yourself, ideally through face-to-face interactions.

Re:The problem as I see it... (1)

amorsen (7485) | about a year and a half ago | (#42076535)

The trust model would have worked a lot better if TLS had allowed multiple certificates for the same key. Major sites would make sure to use several different companies, and browser users could easily dump lousy TLS providers from the trusted roots without losing access to anything important. That would result in actual competition between TLS providers.

HTTPS is broken on many websites (0)

Anonymous Coward | about a year and a half ago | (#42073243)

I'm using the Chrome HTTPSEverywhere extension and I find that many sites are broken. The HTML loads fine, but the CSS and JS files fail on many sites, including the one linked in the article (www.computerworld.com.au).

Re:HTTPS is broken on many websites (1)

ManuelH (1303433) | about a year and a half ago | (#42073441)

Starting with many Google services. I had to deactivate forced HTTPS in iGoogle because RSS feeds don't work.

SSL (4, Insightful)

FriendlyLurker (50431) | about a year and a half ago | (#42073253)

Now, just gotta get SSL certificate system... secure and working.

Re:SSL (1)

drinkypoo (153816) | about a year and a half ago | (#42073265)

Now, just gotta get SSL certificate system... secure and working.

That raises the question: are there any CAs which have never [knowingly] been compromised?

Re:SSL (2)

derecerca (2780271) | about a year and a half ago | (#42073457)

That raises the question: are there any CAs which have never [knowingly] been compromised?

Yes, my self-signed CA has never been compromised. I must disclose that it has never been connected to the Internet.

Re:SSL (0)

Anonymous Coward | about a year and a half ago | (#42073979)

Apart from the never-connected part, that's like saying your circular reasoning is never wrong.

That's not right. That's not even wrong.

Re:SSL (0)

Anonymous Coward | about a year and a half ago | (#42073517)

Define compromised. Do you only count those who have had intrusions into their networks, or also those who have given out certificates to people you don't trust? Sure, THEY may trust the people they give the certificates to, but if that made any difference, we might as well let a drug cartel run the CA infrastructure. They are very strict about who THEY trust, but I still don't trust any of them. Their trust is useless to me, and in the real world it depends on who pays the money.

For example, my bank offers online banking. Part of the login for said online banking is hosted on a .NU domain, a completely different country, which I don't even know the exact location of. How did a banking company in my country get a certificate for a domain in a completely different country? If I were to take the certificate literally, it's saying that Verisign (or whoever) has confirmed that I really am about to give my banking information to a phishing site in .NU - and of course I won't do that. So if I want online banking, I need to trust that the certificate is lying and that this is really not a phishing site, AND trust that the site is really the site of the company handling the bank login.

Of course, personally, I don't trust a company that thinks getting a domain name in a completely different country for its secure stuff is a great idea, but that means I cannot do online banking, as all banks in the country use the same company for their login (that company is owned by the banks, so they all have an interest in forcing everybody into that login solution).

.nu (2)

tepples (727027) | about a year and a half ago | (#42073879)

For example, my bank offers online banking. Part of the login for said online banking is hosted on a .NU domain, a completely different country, which I don't even know the exact location of.

The IUSN Foundation of Niue, a tiny island country in the South Pacific, operates .NU as if it were a generic top-level domain like .COM. It has been popular in northern Europe, as "nu" means "now" in Swedish, Danish, and Dutch. (It also means "naked" in French.) McAfee reports that there was a time in 2007 when .nu was a hotbed of browser exploits, but it had been cleaned up by 2008. (source [wikipedia.org] )

Re:SSL (1)

petermgreen (876956) | about a year and a half ago | (#42073965)

How did a banking company in my country get a certificate for a domain in a completely different country?

Same way they get a certificate for any other domain they legitimately own.

Just because a domain is in a foreign TLD doesn't mean they don't legitimately own it .

Re:SSL (1)

drinkypoo (153816) | about a year and a half ago | (#42074287)

Define compromised. Do you only count those who have had intrusions into their networks, or also those who have given out certificates to people you don't trust?

I will (well, would) be happy if certs are (were) guaranteed to belong to someone who has received mail at a physical address. I don't care who they are, or where that is. I can decide what I think about it and accept the cert or not.

Re:SSL (1)

dog77 (1005249) | about a year and a half ago | (#42077303)

That seems like a good idea to me. And when you view the certificate in your browser, the browser should be able to connect to the certificate authority, and you should be able to get a bio of the certificate, check whether it is revoked, and write and view complaints about the certificate.

Re:SSL (1)

TheLink (130905) | about a year and a half ago | (#42075993)

Unfortunately that's not relevant for security given the way most browsers behave by default. All the attacker needs is ONE compromised/cooperative CA out of the dozens of CAs your browser recognizes or will recognize.

If you visit China and CNNIC decides to sign a *.yourbank.com certificate and MITM you, your browser wouldn't warn you. It'll show you the usual "secure" icons etc.

If you want a warning you can use firefox and certificate patrol. Or try Chrome's certificate pinning feature (not sure about the details). No idea about Opera's equivalent.

Re:SSL (2)

buchner.johannes (1139593) | about a year and a half ago | (#42073469)

What about all those SSL/TLS attacks that came out a year ago, renegotiation injection etc.? Is SSL/TLS suddenly considered safe again? I thought they discovered serious issues in the concept.

Re:SSL (1)

Anonymous Coward | about a year and a half ago | (#42073521)

Also, as more and more mundane activities begin to go through SSL, I suspect it will provide more and more opportunity for users to grow accustomed to blindly accepting any and all invalid certificates, because they become conditioned to understand that the certificate warning is just a minor annoyance to dismiss as quickly as possible. (Even many theoretically "secure" websites that deal with actual sensitive information like credit card numbers are already conditioning people to accept invalid certificates. It can only get worse if the general trend is for everyone to start using SSL for everything.) This means they will also blindly accept invalid certificates in the rare case that an attack is actually occurring.

Re:SSL (1)

Anonymous Coward | about a year and a half ago | (#42073767)

As long as it is based on the deeply logically flawed concept of "authority" [wikipedia.org] , it can by definition never ever work.

Trust is a personal, individual thing. There can never ever be a thing that can be put into place as a global entity that everyone trusts. You can only decide who to trust *yourself*. You can never offload that to somebody else. Ever. I'm sorry, but there's a limit to how lazy of an ass one can be, before it starts to hurt.

So what we need is a web of trust system, with trust factors for every link in the chain. E.g. I trust my girlfriend 0.85 (range = [0..1]) right now, and she might trust her mom 0.75, and her mom might trust her new man 0.96, so if that guy has a website, it would be 0.612 (61.2%) trustworthy. And I of course had to decide, what level would still be enough.
It is important to note that only individuals can be part of the chain/web. Never ever companies (especially not banks), websites, or anything else that is not an individual (like a FOX News watcher ;)).

As long as there are "CA"s, it's broken and designed by people who are idiots and complete nerds at the same time, for people who are complete idiots and blind believers (usually the same thing).
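The trust arithmetic the comment above describes (multiplying the per-link factors along the path) can be sketched like this; the values are just the ones from the girlfriend/mom example, and `chain_trust` is a made-up name for illustration:

```python
from functools import reduce

def chain_trust(factors):
    """Multiply per-link trust factors (each in [0, 1]) along a path
    through the web of trust to get the end-to-end trust level."""
    return reduce(lambda acc, f: acc * f, factors, 1.0)

# The example chain: me -> girlfriend -> her mom -> her mom's new man
path = [0.85, 0.75, 0.96]
print(round(chain_trust(path), 3))  # 0.612
```

Note that trust only ever decays along the chain, which is why the commenter also needs a personal acceptance threshold.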

Re:SSL (0)

Anonymous Coward | about a year and a half ago | (#42073937)

By running Microsoft Windows or Debian GNU/Linux or whatever, you are placing your trust in the suppliers of that software. Most people place their trust in their OS vendor without even thinking about it. It seems fair that the same people who blindly trust the software they run should be able to blindly trust some authority on SSL certificates. If your average Joe or average Jill is already too stupid to avoid installing malware on their computer, how can you possibly expect them to make responsible trust decisions in this trust web system? The answer is, you can't. A central authority actually works better, because most people are stupid.

Re:SSL (1)

dog77 (1005249) | about a year and a half ago | (#42077559)

Parent is right in that we are ultimately at the mercy of our browser, operating system, and the individuals and tools that built this software. On the other hand I think grandparent is also correct: it would be a good idea to spread the trust, and a good idea to have an audit of the certificate authority and its certificates. Just like when you purchase a product, you see what other individuals and organizations say about that product before buying it. The same should be true of certificates and the organizations they are issued to. Also, I would prefer certificates that are signed by multiple CAs (with good reputations) over just a single CA.

Re:SSL (1)

dissy (172727) | about a year and a half ago | (#42074207)

Free Class-1 SSL certificates are available from StartSSL
https://www.startssl.com/ [startssl.com]

Class-1 does not show the "Super secure secret key" icons with organization name because they are only email-verified, and you must use a personal name, but for small personal "hobby" websites they are still a lot better than a self signed certificate.

Class-2 certs are the ones that supposedly require verification, and show all the high security flags.
In practice however this verification is typically lacking depending on which cert authority you go with.

As we all know, the chain of trust method currently in place has lots of problems, especially so with how self-signed certificates are handled in most web browsers.
It is quite pitiful how a non-SSL website is shown as more secure than a self-signed certificate :/

They really need to change that, showing non-SSL as the bottom level, with self-signed certs one step above as "encrypted but not verified or authenticated with anyone", and then the class level certs above that. I suspect however this is due to pressure from the certificate authorities themselves, and since money is involved it will not be changed any time soon.

Re:SSL (1)

dog77 (1005249) | about a year and a half ago | (#42076817)

Verification of the SSL server certificate is not enough to protect your account. There needs to be additional two-way authentication, so both sides can prove they know the username and password/key for the account. Then if the certificate does get compromised, you will still be protected from a man in the middle. Here is one such protocol: http://en.wikipedia.org/wiki/Secure_Remote_Password_protocol [wikipedia.org]

Accessible only over HTTPS? Really? (0)

Anonymous Coward | about a year and a half ago | (#42073267)

You need a working group & policy for this?

Just configure your web server to redirect any incoming http traffic with a 302 status code to the corresponding https connection.

Not that difficult. The website for the brokerage firm that has my account has been doing this for at least 3 years.
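For reference, that server-side redirect is a few lines of configuration; here is a sketch in nginx syntax (the hostname and certificate paths are placeholders, and 301 is used since the move to HTTPS is meant to be permanent):

```nginx
# Plain-HTTP listener: send everything to the HTTPS site.
server {
    listen 80;
    server_name www.example.com;           # placeholder hostname
    return 301 https://$host$request_uri;  # permanent redirect
}

# The real site, HTTPS only.
server {
    listen 443 ssl;
    server_name www.example.com;
    ssl_certificate     /etc/ssl/example.crt;  # placeholder paths
    ssl_certificate_key /etc/ssl/example.key;
}
```

As the replies below this comment point out, this alone does not stop SSL stripping, since the very first request still travels over plain HTTP.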

Re:Accessible only over HTTPS? Really? (1)

marcosdumay (620877) | about a year and a half ago | (#42073333)

Why not 301? The 302 status is for when you may change your mind later, so the client had better keep using the same address again and again. With 301 it should only use the new address from now on.

Re:Accessible only over HTTPS? Really? (0)

Anonymous Coward | about a year and a half ago | (#42073401)

Why not 301? The 302 status is for when you may change your mind later, so the client had better keep using the same address again and again. With 301 it should only use the new address from now on.

Ok, fine, 301 is a better status code for this kind of redirection.

But the point still stands - it has been easy to do this kind for thing from the server-side for years.

Re:Accessible only over HTTPS? Really? (1)

marcosdumay (620877) | about a year and a half ago | (#42074587)

Yes, I agree.

It was an honest question. I wanted to know if I was overlooking something.

Re:Accessible only over HTTPS? Really? (5, Informative)

Anonymous Coward | about a year and a half ago | (#42073391)

Sure. Not that difficult. Then all the hacker has to do is spoof your site on HTTP and hope people don't notice the address bar isn't green. A number of people will fall for that one.

With HSTS, your brokerage will keep doing the redirect for non-HSTS browsers and for people who are visiting the site the first time. But once they've connected, the browser will note that it's a HSTS site. So next time it'll do the redirect in the browser, where a hacker can't interfere with it, and just do the secure HTTPS connection to the site.

HSTS also makes it impossible for people to click through security warnings, if a hacker is spoofing the HTTPS site with a forged (self-signed) certificate.
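Concretely, the HSTS policy described above is carried in a single response header defined by RFC 6797, e.g. `Strict-Transport-Security: max-age=31536000; includeSubDomains`. A rough sketch of how a client might parse it (`parse_hsts` is a made-up helper, not any browser's actual code):

```python
def parse_hsts(header):
    """Parse a Strict-Transport-Security header value (RFC 6797) into
    its max-age in seconds and the includeSubDomains flag."""
    max_age, include_subdomains = None, False
    for directive in header.split(";"):
        directive = directive.strip()
        if directive.lower().startswith("max-age="):
            max_age = int(directive.split("=", 1)[1].strip('"'))
        elif directive.lower() == "includesubdomains":
            include_subdomains = True
    return max_age, include_subdomains

# A one-year policy covering subdomains -- a common choice.
print(parse_hsts("max-age=31536000; includeSubDomains"))
# (31536000, True)
```

The header is only honored when received over HTTPS, which is why the first visit still has to happen on a trusted network.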

Re:Accessible only over HTTPS? Really? (0)

Anonymous Coward | about a year and a half ago | (#42073397)

You should RTFA, especially the part that explains SSL stripping, and why your approach is insecure. Oh, and "big company X is doing this" doesn't improve the security of a broken approach, it just makes the consequences worse.

Re:Accessible only over HTTPS? Really? (0)

Anonymous Coward | about a year and a half ago | (#42077599)

Aside from what the other responses have said, that also doesn't help if the browser already sent sensitive information in the clear as part of the request.


I'm still not getting it (1)

squiggleslash (241428) | about a year and a half ago | (#42073421)

What's the difference between using this protocol and, uh, just disabling HTTP on your webserver? Or, from a user standpoint, just making sure you're using HTTPS via the URL?

Re:I'm still not getting it (0)

Anonymous Coward | about a year and a half ago | (#42073503)

> disabling HTTP on your webserver?

that users will automatically be redirected to the HTTPS version of your site instead of getting an error. Moreover, *any* solution that comes from your server can be circumvented by spoofing the response (see SSL stripping).

> just making sure you're using HTTPS via the URL?

that you can fix the issue server-side and not rely on individual users to fix it client-side (which they won't do).

Re:I'm still not getting it (5, Informative)

heypete (60671) | about a year and a half ago | (#42073523)

What's the difference between using this protocol and, uh, just disabling HTTP on your webserver? Or, from a user standpoint, just making sure you're using HTTPS via the URL?

Disabling HTTP can break things for users who manually enter URLs and forget the "https" or any number of other bad things. It's usually good form for a secure site to also run a plain-http server that redirects users to the secure site to avoid such confusion.

Only problem: ssl stripping. If a bad guy can intercept the connection between you and the secure site before the security has been negotiated then they can connect to the secure site in the normal way and present that page to you sans HTTPS and intercept anything you do there.

In short: browsers don't remember when a site "used to be secure but isn't today" and so don't present any warnings. This method tells the browser "For the next [time interval] you should only connect to me using a secure protocol. If not, the connection should fail." -- all that's required is that the user connect to the secure site at least once (e.g. from home or some other trusted network) to have the HSTS flag set for that site. If they try going to the coffee shop or some other place where there's a bad guy attempting ssl stripping then the connection will fail.
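The browser-side bookkeeping described above can be sketched as a toy model (this is an illustration of the idea, not any real browser's implementation):

```python
import time

class HSTSCache:
    """Toy model of a browser's HSTS store: once a host has been seen
    with an HSTS header over a secure connection, plain-http access to
    it must fail until the policy expires."""

    def __init__(self):
        self._expiry = {}  # host -> policy expiry timestamp

    def note_header(self, host, max_age):
        """Record an HSTS policy seen on a trusted (HTTPS) connection."""
        self._expiry[host] = time.time() + max_age

    def must_use_https(self, host):
        """True if an unexpired policy forbids plain http to this host."""
        return self._expiry.get(host, 0) > time.time()

cache = HSTSCache()
cache.note_header("bank.example", max_age=31536000)  # first visit, at home
print(cache.must_use_https("bank.example"))   # True: http attempts must fail
print(cache.must_use_https("other.example"))  # False: no policy recorded
```

This is also why the coffee-shop attacker fails: the stripped-to-http connection never gets a chance to be presented to the user.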

Re:I'm still not getting it (2)

Lord Grey (463613) | about a year and a half ago | (#42073845)

It's been a very long time since I've messed with web technologies at this level, so I'm tossing the following out merely for discussion purposes: What about changing the default browser behavior so that, instead of first trying the http: prefix, browsers try https: and fall back to http: only when necessary? Would that work around the 'ssl stripping' issue?

Re:I'm still not getting it (1)

lgw (121541) | about a year and a half ago | (#42075961)

That wouldn't help if someone in the middle is blocking the https site. You really need a way to tell the browser to not try the http site at all, even as a fall back. "Force fall back to insecure legacy mode" is a very common form of attack these days.

Re:I'm still not getting it (1)

DarkOx (621550) | about a year and a half ago | (#42073589)

Because this lets the browser know, and it remembers. So when I come along and spoof the MAC address of your gateway and route the traffic to my own web server, I also need to run HTTPS or you will get a warning.

Additionally I am also going to need a certificate that your system will see as trusted and valid for the name you requested; which fortuitously remains at least a little hard in the typical case.

Without this I could pretty much count on you just typing amazon.com rather than https://amazon.com/ [amazon.com] ; your browser would most likely try http first, or if you are really lucky try https first but still fall back to http when I don't answer on 443. So anyone who could manipulate your DNS or upstream routing could make you a victim of site cloning or man-in-the-middle pretty easily. With this the hole is at least partly closed, in that if you have been to the site before, your browser knows that if it's not https, something is wrong.

How to easily add HTTPS to a website? (2)

ArcadeMan (2766669) | about a year and a half ago | (#42073459)

I can get the security side of things, but how do you do that easily and with zero budget? What about a personal website? I can't afford an SSL certificate for that.

Is there any "SSL/HTTPS For Dummies With No Cash" manual somewhere, keeping in mind that most people with websites are code monkeys, not network administrators.

Re:How to easily add HTTPS to a website? (0)

Anonymous Coward | about a year and a half ago | (#42073611)

startssl.com provides free SSL certificates. (just for HTTPS, not free for code signing)

Integrating SSL for the first time in any webserver takes a bit of fiddling, but google should guide you through it.

Re:How to easily add HTTPS to a website? (3, Interesting)

icebraining (1313345) | about a year and a half ago | (#42073613)

SSL certificates are not the problem: https://cert.startcom.org/ [startcom.org]

The problem is that some browsers (mainly IE on XP) don't support SNI, so your website needs a dedicated IPv4.

If you manage the machine, you can get a VPS with a dedicated IP for almost nothing (I pay $3/month), but managed web hosting is another issue.

Re:How to easily add HTTPS to a website? (1)

heypete (60671) | about a year and a half ago | (#42073645)

FYI: the https://cert.startcom.org/ [startcom.org] site is, as far as I know, somewhat deprecated. The more up-to-date URL is https://www.startssl.com/ [startssl.com]

Re:How to easily add HTTPS to a website? (0)

Anonymous Coward | about a year and a half ago | (#42074589)

SSL certificates are not the problem: https://cert.startcom.org/ [startcom.org]

The problem is that some browsers (mainly IE on XP) don't support SNI, so your website needs a dedicated IPv4.

If you manage the machine, you can get a VPS with a dedicated IP for almost nothing (I pay $3/month), but managed web hosting is another issue.

Technically you only need a different listener/port... so for example https://example.com:443 and https://example.org:12345 would work fine, as long as :443 only presented the example.com cert and :12345 the example.org one. We just don't want to have to use non-default ports, do we...

Re:How to easily add HTTPS to a website? (4, Informative)

heypete (60671) | about a year and a half ago | (#42073623)

I can get the security side of things, but how do you do that easily and with zero budget? What about a personal website? I can't afford an SSL certificate for that.

NameCheap sells Comodo and GeoTrust domain-validated SSL certs for ~$8-$10/year. Thawte certs are $30. Those are well within an "essentially nil" budget range for even the smallest of businesses.

StartSSL.com has domain-validated certs for free. Additional validation and features (like wildcards) are available at nominal cost.

All of the above-mentioned certs are widely trusted by browsers, both on computers and mobile devices.

Certificate costs haven't been an issue for several years now. The days of needing to get VeriSign certs at outrageous prices are gone (though VeriSign still charges outrageous prices, naturally).

Is there any "SSL/HTTPS For Dummies With No Cash" manual somewhere, keeping in mind that most people with websites are code monkeys, not network administrators.

Enabling SSL/TLS for your web server usually requires the addition of a few lines in a configuration file that tell the server (a) to use SSL and (b) the location of the server's private key, public key, and any intermediate certificates from the certificate authority. The details vary based on your server software, but it's usually quite easy and instructions can be found on Google. The steps are basically:
1. Generate an RSA private key (usually 2048 bits, though 4096 is not uncommon; 1024 bits is deprecated).
2. Create a certificate signing request (CSR) for your site using that private key.
3. Submit the CSR to the certificate authority for signing.
4. Complete whatever verification process the CA requires (for domain-validated certs this usually requires that you click a link sent to the email address listed in your domain's whois record, while high-validation-level certs may involve you sending the CA various documents).
5. Once you are verified, the CA signs your CSR and sends you the signed certificate. In many cases they also direct you to download the required intermediate certificate that you'll also need.
6. You save the private key (readable to root only, of course), signed certificate, and the intermediate certificate to your server and configure your server software appropriately (usually only a few lines of configuration changes).
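Steps 1 and 2 above correspond to two OpenSSL commands; here is a sketch (the filenames and the subject CN are placeholders, and a real CSR usually carries more subject fields than just CN):

```shell
# Step 1: generate a 2048-bit RSA private key.
openssl genrsa -out server.key 2048

# Step 2: create a CSR for the site, signed with that key.
openssl req -new -key server.key -out server.csr \
    -subj "/CN=www.example.com"

# Sanity-check the CSR before sending it off to the CA (steps 3-5).
openssl req -in server.csr -noout -verify

# Step 6: the private key should be readable by root only.
chmod 600 server.key
```

The CA never sees `server.key`; only the CSR (which contains the public key) is submitted for signing.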

At present, most HTTPS sites should have their own unique IP address, which rules out most "personal" hosting. This is because Internet Explorer on Windows XP (still a substantial chunk of users) does not handle HTTPS-enabled virtualhosts. Pretty much any other browser on any other system does support it.

Re:How to easily add HTTPS to a website? (1)

ArcadeMan (2766669) | about a year and a half ago | (#42074325)

Thanks for all the details, however my website is hosted on a shared plan, meaning I have practically zero control over the server setup. Is it a lost cause to try adding HTTPS? The host is Funio.com, if that's any help.

Re:How to easily add HTTPS to a website? (1)

corychristison (951993) | about a year and a half ago | (#42075837)

Funio uses the Panelbox control panel... they have a searchable knowledgebase; here is a quick search for "SSL Certificate": https://kb.funio.com/search?search=ssl+certificate [funio.com]

Re:How to easily add HTTPS to a website? (1)

ArcadeMan (2766669) | about a year and a half ago | (#42075957)

Thank you for the link.

Re:How to easily add HTTPS to a website? (1)

DarkOx (621550) | about a year and a half ago | (#42073697)

I would advocate not using HTTPS unless you have something to secure. That might be as simple as the login if you have a forum or something, but if it's all just non-interactive public information, send it in the clear.

Do some googling: there are some public CAs that will issue minimally verified certificates for personal sites free for at least the first year, so that might be an option. Otherwise, if the user community is small enough, you can use a self-signed certificate. You'll need to contact your users out of band, transmit your certificate, and get them to install it. I know one person who managed to score a couple thousand 8MB USB sticks for like $25 on eBay. He has a local business that delivers paper products and janitorial supplies. His customers that want to re-order over the web are given one with the cert on it; they install it on their machine and it all works.

This is actually more secure than Verisign and the like, but obviously it only works when you have an existing relationship and won't scale. Those are really your only options.

Re:How to easily add HTTPS to a website? (1)

ewieling (90662) | about a year and a half ago | (#42074199)

Why do you want to tell a potential attacker which data you consider important enough to secure with HTTPS?

Re:How to easily add HTTPS to a website? (1)

The Bean (23214) | about a year and a half ago | (#42075031)

You gotta be kidding. If one of my vendors gave me their root certificate to install on my machine so I could securely connect to their site, I'd tell them to take a flying leap and get a real certificate. If I'm understanding right, your friend now has the ability to MITM his customers' SSL connections. We can argue about whether the preinstalled root certificates can be trusted, but I'm confident they're safer than the local Dunder Mifflin.

Re:How to easily add HTTPS to a website? (0)

Anonymous Coward | about a year and a half ago | (#42073929)

cacert.org is great for this, but it requires your clients to have the CACert root certificates. For commercial venues, SSL certificates are $10/year, so use that.

This should never have been needed (2, Insightful)

Skapare (16644) | about a year and a half ago | (#42073681)

This simple logic, that when any SECURE page is requested then EVERYTHING must be accessed in secure mode (a valid certificate required for every part if the main requested page has a valid certificate), should have been in there right from the beginning. So many of our security problems exist because people just DON'T THINK right at the beginning AND it takes so damn fscking long for the process to fix their stupidity.

Oh dear, another armchair expert (1)

Viol8 (599362) | about a year and a half ago | (#42074101)

Tell me, if all this is so obvious, how come you didn't design it?

Besides which, when https was designed it was a pretty hefty burden on the server CPUs of the day, so it was logical that only the parts that needed security would actually use https. Unfortunately hindsight always has 20/20 vision, and people who pretend it's their own insight are usually full of it.

Re:This should never have been needed (1)

petermgreen (876956) | about a year and a half ago | (#42074187)

I'm not sure you get what the problem is. The problem is that end users often don't "request secure pages" explicitly; they either type a plain domain name with no protocol or they follow a link from another site (possibly a search engine), so the initial request for a session is often over plain http.

If the user's connection is not subject to MITM then they get redirected to the secure site, but if a MITM is present then the MITM can make sure that doesn't happen by rewriting any links or redirects that point to https sites back to plain http, and then talking to the user over plain http while talking to the server over https.

This attempts to reduce the risk by making a sticky "this website must be secure" flag. If the user always uses a compromised connection it won't help, but it does help in the more common case where a user uses an uncompromised connection most of the time and only occasionally uses compromised ones (think of a laptop user that moves between many public wifi hotspots).

HTTPS is Great (1)

epSos-de (2741969) | about a year and a half ago | (#42077617)

HTTPS is great, if you can afford to pay the fees. I see tons of potential in 301 redirects to HTTPS, because they will screw up the existing links on the Internet, and unaware businesses would love to pay for maintenance of their broken websites.

RFC 6797 is not an "Internet Standard" (1)

Anonymous Coward | about a year and a half ago | (#42080725)

It's a proposed standard.
