SSL and the Future of Authenticity

Soulskill posted about 3 years ago | from the laying-off-alice-and-bob dept.


An anonymous reader writes "There has been a growing tide of support for replacing SSL's Certificate Authorities with an alternative authentication mechanism. Moxie Marlinspike, the security researcher who has repeatedly published attacks against SSL, has written an in-depth piece about the questions we should be asking as we move forward, and urges strong caution about adopting DNSSEC for this task."

98 comments

The CAKE protocol (3, Interesting)

Omnifarious (11933) | about 3 years ago | (#35784282)

This, Diaspora, and personal interest by friends have gotten me interested in working on The CAKE Protocol [cakem.net] again. My goal is a Python reference implementation that can speak over TCP, email, and possibly IM.

Last time I stalled out once I got a job. I also realized that the protocol design was flawed, and the API design for the internals was awkward. Also, I was all alone in a new city. I have friends who are interested now, which makes it easier. And maker spaces with people to talk to. When you have to work on something all by yourself it's hard to stay motivated.

Re:The CAKE protocol (1)

Omnifarious (11933) | about 3 years ago | (#35784302)

Also, I see this as a chance to overhaul HTTP and a couple of other protocols so they have stronger authentication and easily implemented distributed caching.

Re:The CAKE protocol (1)

thePowerOfGrayskull (905905) | about 3 years ago | (#35784452)

But if the protocol is still flawed, does the rest really help you?

Re:The CAKE protocol (2)

Omnifarious (11933) | about 3 years ago | (#35785016)

I've revised the protocol spec, and I think version 2 is fairly good. It still has a problem with liveness, but that's because the protocol is very round-trip avoidant, and there are some mitigating strategies that can be adopted. Another problem is the lack of forward secrecy, due to not using Diffie-Hellman for key negotiation and instead directly encrypting the keys with RSA. That can also be fixed in a later version without altering the protocol so significantly that users will have to modify their code.

One last flaw is that I'm not positive a 256-bit namespace is quite large enough to handle all names that will ever be generated given that the birthday paradox effectively halves this to 128-bits. But I think that's pretty minor.
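For illustration, the birthday bound above is easy to estimate numerically; here is a rough sketch in Python (the solar-system figure below is an assumption for scale, not part of the CAKE spec):

# Rough birthday-bound estimate for k random names drawn from an n-bit space:
# P(at least one collision) ~= 1 - exp(-k*(k-1) / 2^(n+1)).
import math

def collision_probability(k_names, bits):
    return -math.expm1(-k_names * (k_names - 1) / 2.0 ** (bits + 1))

# The solar system is roughly 2e33 grams, so "one name per gram" is about
# 2^111 names: still a negligible collision risk in a 256-bit space.
# Only near 2^128 names does the probability become appreciable, which is
# the "effective halving" to 128 bits mentioned above.
for exponent in (64, 111, 128):
    print("2^%3d names in a 256-bit space: p(collision) ~ %.3g"
          % (exponent, collision_probability(2.0 ** exponent, 256)))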

Re:The CAKE protocol (2)

cjb658 (1235986) | about 3 years ago | (#35786008)

No matter how good SSL is, you can always just remove it with a tool like sslstrip (also developed by Moxie Marlinspike). I guess you could have banks and other web sites instruct the user to look for the lock icon, but like Moxie said in his Defcon talk, he tested it on hundreds of users by running a Tor exit node, and every one of them still logged in after being sent an unencrypted login page.

Re:The CAKE protocol (1)

Omnifarious (11933) | about 3 years ago | (#35786072)

No matter how good SSL is, you can always just remove it with a tool like sslstrip (also developed by Moxie Marlinspike). I guess you could have banks and other web sites instruct the user to look for the lock icon, but like Moxie said in his Defcon talk, he tested it on hundreds of users by running a Tor exit node, and every one of them still logged in after being sent an unencrypted login page.

I don't understand how your comment is relevant to what I said. I'm not talking about SSL. I'm talking about the CAKE protocol [cakem.net].

Re:The CAKE protocol (1)

zippthorne (748122) | about 3 years ago | (#35787680)

Were they Apple users? The lock on Safari is ridiculously small and unobtrusive. It's nice that it's unobtrusive, but it would be better if it were a *little* more visible. And also, it would be nice if you could set it up to give an alert when the certificate changes, even if it's a "legitimate" change.

I really do not like having to tell my mom, "Yeah it's pretty secure. Just look for the lock icon, and click it and review the certificate and make sure it points to the website you think you're accessing, the version is >3, and everything else looks ok..."

Because guess what. After that she's going to go back to just giving out all kinds of personal information over the telephone instead.

GUIDs are 128 bits (1)

tepples (727027) | about 3 years ago | (#35789332)

One last flaw is that I'm not positive a 256-bit namespace is quite large enough to handle all names that will ever be generated given that the birthday paradox effectively halves this to 128-bits

GUIDs are 128 bits. Have GUID collisions ever been a problem?

Re:GUIDs are 128 bits (1)

maxwell demon (590494) | about 3 years ago | (#35790648)

Also, IPv6 uses 128 bit addresses. So unless you expect more than one actual name per possible IPv6 address, this should be completely sufficient.

Re:GUIDs are 128 bits (1)

Omnifarious (11933) | about 3 years ago | (#35812670)

I'm thinking one name per gram of matter in the solar system with each getting a new key every 30 years or so. Would the whole thing scale up to a matryoshka brain without breaking? So yeah, I'm thinking really big and being a bit silly.

Re:GUIDs are 128 bits (1)

Omnifarious (11933) | about 3 years ago | (#35794124)

Yeah, I'm thinking really big and being a bit silly. I'm thinking one name per gram of matter in the solar system. Would the whole thing scale up to a matryoshka brain without breaking?

Re:The CAKE protocol (0)

Anonymous Coward | about 3 years ago | (#35785174)

And maker spaces with people to talk to.

There's a term I hope does not catch on before I have time to duck my head under the lawnmower.

Re:The CAKE protocol (1)

PopeRatzo (965947) | about 3 years ago | (#35785466)

Good luck. It's a worthy way to spend your energy and time.

Where can I donate to support this project?

Re:The CAKE protocol (1)

Omnifarious (11933) | about 3 years ago | (#35785966)

There's a PayPal donate link on the front page. I just updated it to be better than it was before, though it was working before.

THE MESSAGE IS CLEAR (2, Funny)

Anonymous Coward | about 3 years ago | (#35784300)

Texting one time pads is the ONLY SOLUTION.

Re:THE MESSAGE IS CLEAR (2)

Roogna (9643) | about 3 years ago | (#35784726)

I realize the comment was posted as a joke of sorts, but taking it seriously: say you've got someone with AT&T-based Internet access and AT&T cell access. Then you're trusting the phone provider not to intercept and replace the one time pad with their own in a MITM attack. While texted messages probably provide a fair amount of security for, say, your WoW account, I wouldn't consider them secure for the communications that really matter. As the article says, when SSL was developed a lot of the ideas behind secure Internet communications were still being figured out at a basic level, without the knowledge of just how sophisticated attacks were going to become. Anything designed today needs to consider just how sophisticated attacks are likely to be in 10 years. The backbone providers themselves have frequently proven highly untrustworthy in recent years. In the future, communications will need to be at least verifiable, if not safely encrypted, even with governments and major corporations controlling the wire.

Re:THE MESSAGE IS CLEAR (0)

Anonymous Coward | about 3 years ago | (#35785890)

The largest German private customer bank, Deutsche Postbank, disagrees with you. (For the record, I'm not saying you're the stupid one here.) They just forcibly switched everybody over from a paper list of one time transaction keys that the customer has physical control over... to SMS. People without a mobile phone have to buy an "unhackable" device instead that doesn't even connect over a separate channel.

Re:THE MESSAGE IS CLEAR (1)

gbjbaanb (229885) | about 3 years ago | (#35790716)

There's a point where 'total security' becomes 'lock yourself in a bunker'. Sending one-time-pad codes over SMS is acceptable for the majority of your daily transactions - e.g. internet banking - because the attackers are not going to be so sophisticated that they can intercept, track and match your transactions to the SMS code.

However, what we're talking about on /. is the bigger picture, the one where you are tracked by people who do have those kinds of resources and are very prepared to use them. The kind of people who are right now locking up and torturing people in various countries for daring to say anything bad about the ruling classes, just in case they turn out to be one of those pesky "revolutionaries". Countries like Iran, Libya, Yemen, United States, Syria, etc.

Re:THE MESSAGE IS CLEAR (1)

Roogna (9643) | about 3 years ago | (#35802226)

Yeah, that's it exactly. Your bank transactions are probably fine with SMS; it -is- a great way of handling security for a lot of transactions. Not perfect either, but what ever is? But for the bigger picture, when discussing a replacement for SSL, now, during the design, is the time to try and push it towards something that can be as secure as possible, not just for simple bank transactions but for any communications that may be happening across the network. As we all know there will of course be limits to what is possible, but we should be pushing those limits out as far as possible.

RTFA (3, Informative)

Tigger's Pet (130655) | about 3 years ago | (#35784304)

I just hope that the many people who will post on here, with all their different opinions, will actually take the time to read the article first. I know that's asking a lot on /., but I can hope. Moxie Marlinspike (what a great name, by the way) has really done a great piece of work here, and it deserves to be read and digested before being critiqued.

Re:RTFA (1)

houstonbofh (602064) | about 3 years ago | (#35784332)

While I agree with you about the problem, I have yet to see a fix that isn't worse in the transition. Short of heavy regulation of the certificate authorities... And I hate government regulation.

Re:RTFA (2)

Tigger's Pet (130655) | about 3 years ago | (#35784380)

It shouldn't be government regulation. No one country has the right to try and regulate the Internet, even though both the US and the UK seem to think they do far too often.
This is yet another reason why we (that's the global 'we', not just the few) need a totally independent Internet regulation authority, funded by every government, to oversee the whole thing. They could make decisions for the good of the Internet, not for the good of their own little corner of the world, or because industry is threatening to withdraw funding for their next election campaign.

Re:RTFA (1)

Securityemo (1407943) | about 3 years ago | (#35784468)

The problem would be (as it is with the UN) to get all the individual forces that invariably would compose the backing of that organization to cooperate.

Re:RTFA (-1)

Anonymous Coward | about 3 years ago | (#35785560)

I guess all the money the US gives to other countries each year to help them should then be routed to this. I also guess that almost all countries in the world are being attacked on the internet as much as the US. I suppose that all other governments in the world are trustworthy (more so than the US/UK). I must conclude that those who have the most to lose, and are willing to pay the most to protect themselves, shouldn't be allowed to independently do so. We must let the world try to protect us instead.

Re:RTFA (1)

Tigger's Pet (130655) | about 3 years ago | (#35785924)

Did you actually read what you typed? Posting as an A/C to avoid a flamebait tag. You talk about the UK/US being 'attacked' - no-one has suggested that they can't defend themselves, but that's got nothing to do with this article. As for other countries being attacked - your way of thinking is what leads to governments doing whatever they wish. There's a very fine line between the US/UK accessing e-mails whenever and wherever they wish under the guise of 'preventing terrorism', the Chinese government deciding to block whatever web-sites they want under the guise of 'preventing civil uprising', and the Libyan government shutting down all Internet access because they don't want the truth about the atrocities they are carrying out reaching the outside world.

    Opinions are like arseholes. Everybody's got one and they're usually full of shit.

Re:RTFA (1)

AmiMoJo (196126) | about 3 years ago | (#35791132)

Hahahahahahhahahahahahhahahahaha... aha... Wait, you were being serious?!

You want a government funded institution made up of citizens of various nations to be completely independent and unmolested by political desires? Can you think of one example when something like that has worked?

Re:RTFA (2)

Securityemo (1407943) | about 3 years ago | (#35784416)

It was a wonderfully terse piece of work. He's got a talent for writing. You won't be wasting your time; RTFA.

Re:RTFA (0)

Anonymous Coward | about 3 years ago | (#35790104)

Terse? That rambling, tautologous piece stuffed with rhetorical questions?

Would you really include a wasteful statement like this in an executive report:

CAs are sketchy, but this is a whole new world of sketchiness. Think, sketchasaurus.

That's not terse report writing. That's blogging.

When you're writing tersely you separate fact from opinion, dispense with frivolity and aim to deliver the key points in two or three minutes. Spend some time as a PA, or join the military.

Re:RTFA (4, Interesting)

Burdell (228580) | about 3 years ago | (#35784556)

I RTFAd, and a few things jump out at me:

- Attacking GoDaddy's trust because Bob Parsons went hunting in Africa to help farmers. Way to bring politics into a supposed technical discussion.

- Confusing management of the DNS root with domain takedowns done at the registrar level.

- Repeated use of "forever", as if certificates don't expire (and protocols never change).

I think DNSSEC could be used to augment SSL security. For example, sites could list valid key IDs in a DNSSEC-signed record. You still use CA-signed certs, but a rogue CA can't also edit your DNSSEC-signed record. It is also much easier to monitor DNS for somebody trying to change something.
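That kind of cross-check can be prototyped without protocol changes. A rough sketch, assuming the fingerprint lives in a made-up "_certfp" TXT record (which would need to be DNSSEC-signed in practice) and using the third-party dnspython package for the lookup:

# Sketch: compare the certificate a server actually presents with a
# fingerprint published in DNS. The "_certfp" TXT record is a hypothetical
# naming scheme; a real deployment would also require DNSSEC validation.
import hashlib
import ssl
import dns.resolver  # pip install dnspython

def live_cert_sha256(host, port=443):
    pem = ssl.get_server_certificate((host, port))
    return hashlib.sha256(ssl.PEM_cert_to_DER_cert(pem)).hexdigest()

def published_sha256(host):
    txt = list(dns.resolver.resolve("_certfp." + host, "TXT"))[0]
    return b"".join(txt.strings).decode().lower()

def cert_matches_dns(host):
    return live_cert_sha256(host) == published_sha256(host)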

Re:RTFA (2)

DarkOx (621550) | about 3 years ago | (#35784664)

OK, many of your points are fair, but your issue with the word "forever" really isn't. Many of the root CA certs don't expire for something like 30 years! On an Internet which has only really existed that long, and has only been public for a little under 20 years (and SSL even less), 30 years is for all practical purposes the same thing as forever.

Re:RTFA (3, Insightful)

Artraze (600366) | about 3 years ago | (#35784700)

I agree about the safari nonsense. Still, GoDaddy is a sleazy company that often seems to cater more to scammers and spammers than to people who just want a domain name.

The domain takedowns bit is basically referring to the fact that ICANN is not untouchable. Practically, this isn't _too_ much different than the DHS having a trusted root certificate (as they're _probably_ the only ones that can manipulate ICANN). However, it does mean that you can't un-trust the DHS (and maybe Chinese) root certificates because the manipulation will be happening in the background. (Which isn't to say they can't/don't manipulate Verisign at the moment, but I hope you get the point.)

"Forever" is a relative term. As far as I'm concerned, it means long enough to exploit a vulnerability. Say... a month? Certificates don't expire that fast, and protocols are glacial by comparison. We're still using SSL, after all, how long do you think it'll be until we replace its replacement? Forever. Maybe even literally.

Those points made, I do agree that DNSSEC probably wouldn't hurt; the more independent sources of trust the better. Augmenting it further with a traditional web of trust would be even better too.

Re:RTFA (-1)

Anonymous Coward | about 3 years ago | (#35784720)

Attacking GoDaddy's trust because Bob Parsons went hunting in Africa to help farmers. Way to bring politics into a supposed technical discussion.

What are you, a republitard? Politics belongs in any discussion of reality because reality has a communist bias.

Re:RTFA (2)

yincrash (854885) | about 3 years ago | (#35785378)

Are you positive GoDaddy is being picked on because of the hunting thing? There are definitely many more reasons to not trust the security of GoDaddy.

Re:RTFA (3, Informative)

Culture20 (968837) | about 3 years ago | (#35786586)

Are you positive GoDaddy is being picked on because of the hunting thing?

I am. The link to the hunting is in a sentence denouncing GoDaddy's trustworthiness based on Parsons' personal trustworthiness (without other reasons cited).

There are definitely many more reasons to not trust the security of GoDaddy.

Would have been nice for TFA to state them. Sure, we here at /. know those reasons, but the populace at large doesn't. Most people think GoDaddy is a porn site.

Re:RTFA (1)

Raenex (947668) | about 3 years ago | (#35786248)

Attacking GoDaddy's trust because Bob Parsons went hunting in Africa to help farmers. Way to bring politics into a supposed technical discussion.

I agree bringing in politics was a bad call. Then again, you're going to defend that dickhead CEO as "helping farmers"? Seriously, get real. He's just out to shoot some big elephants and give himself an erection. If he really wants to help farmers, he'd help them put up fences and use their wildlife for tourism.

Re:RTFA (1)

ekhben (628371) | about 3 years ago | (#35788850)

The two things that jumped out at me were that Moxie has made a faulty assumption on the trust model of DNSSEC, and that Moxie has made a faulty assumption on the trust model of web certification.

Web certification is for relying parties to determine that a host is authorised to act on behalf of a domain holder.

DNSSEC is for relying parties to eliminate the need to trust the distributed database of DNS.

The question at the bottom of the article would lead to this if it were actually answered. Who do I need to trust, and for how long?

For the current model, I need to trust the hierarchical DNS authority system, because they hold the fundamental truth of the DNS data. I need to trust the distributed DNS database system, because I have no way to check that the answer I got is the answer the domain holder published. I also need to trust the entire CA set, because they're the ones who provide a bridge from the domain holder to me.

For the DANE model, I need to trust the hierarchical DNS authority system, because they still hold the fundamental truth of the DNS data.

In both cases, "for how long" gives the useless answer of "forever."

TL;DR: Moxie has pointed out that we place an awful lot of trust in the DNS operators, but failed to demonstrate that DANE or DNSSEC is a poor substitute for the current CA system.

Re:RTFA (1)

Karellen (104380) | about 3 years ago | (#35791658)

Way to bring politics into a supposed technical discussion

I thought the main point of his article is that deciding who to trust, how long to trust them, and what to trust them with, is a political, not a technical, problem.

Re:RTFA (0)

Anonymous Coward | about 3 years ago | (#35784672)

Moxie Marlinspike (what a great name by the way)

Sure, if you happen to be an asshat.

Re:RTFA (1)

starfishsystems (834319) | about 3 years ago | (#35785970)

It's a hard subject to write about. The article contains some interesting ideas, along with a fair amount of coarse hyperbole which, unfortunately, serves to make it less credible. And nobody who writes professionally should have trouble using words like "effect" and "proscribe" correctly.

The thing is, readers take writing competence as a proxy for competence generally. It's not an unreasonable strategy. Any reader who's not already a subject matter expert is going to have to accept the writer's expertise on faith, at least provisionally, until he has argued his main points. There's no quicker way to undermine that faith than with poor writing, and that's really unfortunate, because there are some good ideas here.

In fact, the issue is not with the SSL/TLS protocol but with the essentially hierarchical characteristics and application of X.509 certificates, and of course with the operation of the human institutions known as Certificate Authorities. It's true that the setup is a bit fishy. We've allowed a situation to develop where foxes have been hired to guard the henhouse, and because of the hierarchical nature of X.509 we're left with no means of correction except to fire all the foxes and start over from scratch. This alone makes a reasonable case for configuring security controls at a finer level of granularity than at the root CAs. But if the argument is going to go anywhere, we need to see some kind of concrete proposal, not just a dismal assessment of the status quo.

Re:RTFA (1)

19thNervousBreakdown (768619) | about 3 years ago | (#35786196)

It shouldn't (and doesn't) require proposal of a good idea in order to shit all over a bad one. Why do you want to attach an arbitrary bit of work to an already valuable service?

Re:RTFA (1)

DriedClexler (814907) | about 3 years ago | (#35786446)

Oh, absolutely. You just have to get over his annoying, made-up-sounding name. I'll issue a thorough review of his ideas if and only if I can do it under the name Krypto McCypher.

Re:RTFA (1)

dash (363) | about 3 years ago | (#35790360)

You may publish under the pseudonym Krypto McCypher if you wish. You may also believe this hides your real identity. But some of us know the truth, Mr Clexler. Some of us are smarter than you think!

Not a single fake has gotten through (0)

Anonymous Coward | about 3 years ago | (#35784466)

Any of the carrier pigeons that fail to present the secret handshake will have their message discarded and be taken in for questioning.

The main issue (5, Insightful)

dev.null.matt (2020578) | about 3 years ago | (#35784472)

It seems like the real problem is that any good solution to this issue will, by necessity, require the user to make informed decisions about who to trust and who not to trust. Based on the state of non-technical scamming, the success of confidence men throughout history, and the fact that most people just want their browser to take them to whatever is linked off their friends' facebook pages, I can't see that this will ever be resolved. I mean, unless we decide to trust some body to make these decisions for people. Unfortunately, that pretty much brings us back to our current problem.

That's the main problem I see with the author's notion of "trust agility": it requires action from Joe Sixpack users who just want their browser to work in the same way their TV does.

Re:The main issue (3, Insightful)

jonbryce (703250) | about 3 years ago | (#35784626)

The problem is that any non-technical user is going to ask what buttons they need to press to get the website to work, and will then press them blindly no matter what.

Re:The main issue (2)

DarkOx (621550) | about 3 years ago | (#35784926)

Well, maybe we need to admit there might not be a solution. Sometimes you can't solve social problems with technical solutions, and what is more social than the concept of trust?

Re:The main issue (4, Interesting)

increment1 (1722312) | about 3 years ago | (#35785212)

There is a reasonably straight forward technical solution, that could be implemented in a future SSL protocol, to resolve the issue of trust when you already have an account on a site. A host site can add the hash of your password to the symmetric key used after the key exchange, your browser can then do the same on your side. This is essentially using a shared secret (the hash of your password) as part of the symmetric key. The result is that no one in the middle can intercept your communication even if they have compromised the certificate.

Since most attacks are done on people who already have accounts, this is a decent improvement in security. It will not, however, prevent spoofing a site before you have an account on it, so extra precaution would need to be taken.

The implementation of this protocol would require that when initiating an https session with the server, you need to input your account credentials to your browser (not posted to the host), which then uses them to establish the SSL session.
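A minimal sketch of the key-mixing idea, with HMAC-SHA256 standing in for whatever KDF a real protocol revision would specify (all names here are illustrative):

# Both client and server derive the same mixed key independently; the
# password hash itself never travels over the wire, so a MITM holding a
# rogue certificate still lacks the final session key.
import hashlib
import hmac

def mixed_session_key(negotiated_key: bytes, password: str, salt: bytes) -> bytes:
    password_hash = hashlib.sha256(salt + password.encode()).digest()
    return hmac.new(password_hash, negotiated_key, hashlib.sha256).digest()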

Re:The main issue (1)

emt377 (610337) | about 3 years ago | (#35785650)

There is a reasonably straight forward technical solution, that could be implemented in a future SSL protocol, to resolve the issue of trust when you already have an account on a site. A host site can add the hash of your password to the symmetric key used after the key exchange, your browser can then do the same on your side. This is essentially using a shared secret (the hash of your password) as part of the symmetric key.

I like this. In addition, when logging in there's no reason to send the password, when it could be hashed with a random initialization vector. All the browser has to send is proof of knowledge of the password, not the password itself. The password only needs to be sent verbatim when it's changed and during registration. This overcomes a weakness in your proposal, namely that your scheme requires authentication to set up the connection; so it can't be used for the authentication itself. The site also needs to provide a challenge to prevent playbacks, I suppose that could be part of the IV.

Another possibility, to get rid of CAs, is some process of trusted referral. If I'm authenticated with a site I trust, then it can provide me with a token and validator for another site; I send a hash of the token and a random number to the site, which responds with a response and random number. I hash the validator with the random number and compare it to the response. If it matches, I consider the site trustworthy and register with it. Once registered, it will be able to prove it's the same site I registered with (as per your suggestion above). This would require the formation of a web of trust. If I felt super paranoid I could even get a site's token-validator pair from a couple of randomly selected sources in different countries, just to corroborate.
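The proof-of-knowledge part of the parent's idea (answering a challenge instead of sending the password) might look roughly like this; real protocols such as SRP or HTTP digest auth do considerably more, so treat it as a sketch only:

# Client proves it knows the password by keying an HMAC over a random
# server challenge; the password itself is never transmitted.
import hashlib
import hmac
import os

def issue_challenge() -> bytes:
    return os.urandom(32)  # server side

def prove_knowledge(password: str, salt: bytes, challenge: bytes) -> bytes:
    secret = hashlib.sha256(salt + password.encode()).digest()
    return hmac.new(secret, challenge, hashlib.sha256).digest()  # client side

def verify_proof(stored_secret: bytes, challenge: bytes, proof: bytes) -> bool:
    expected = hmac.new(stored_secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, proof)  # server side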

Re:The main issue (1)

increment1 (1722312) | about 3 years ago | (#35786076)

I like this. In addition, when logging in there's no reason to send the password, when it could be hashed with a random initialization vector. All the browser has to send is proof of knowledge of the password, not the password itself. The password only needs to be sent verbatim when it's changed and during registration. This overcomes a weakness in your proposal, namely that your scheme requires authentication to set up the connection; so it can't be used for the authentication itself. The site also needs to provide a challenge to prevent playbacks, I suppose that could be part of the IV.

My intention in my original description was that your password is not sent to the site; it is used for authentication by verification of the establishment of the encrypted stream. So you would go to the https site (or maybe it would be httpss for super secure... ;-), and you would be prompted (by your browser, not the site) for your credentials. You input your username and password, and only your username gets sent to the site (via regular https or what have you), and then both the remote site and your browser modify the already shared (as part of traditional SSL) symmetric key with your hashed password (possibly a salted hashed password, with the remote site sending you the salt). If communication is still possible (perhaps verified with a round trip random number echoed back from both the site and your browser) then authentication of both the user and the remote site has been verified.

I like your idea of shared trust / referral, but the specific implementation should be something like key signing. A site could have their certificate signed by other sites, and hopefully by one which you trust. This wouldn't be a problem for large sites, but it would still have challenges when it comes to smaller sites (who could they find to trust them, if not a CA). Ultimately, the best we may be able to hope for is for Google and other search providers to report to you (over a secure connection) the certificate that they see for a particular site (and the change history therein), so that if you see something different then you know that there might be a man in the middle attack.

Re:The main issue (1)

muckracer (1204794) | about 3 years ago | (#35790762)

> There is a reasonably straight forward technical solution, that
> could be implemented in a future SSL protocol, to resolve the
> issue of trust when you already have an account on a site. A host
> site can add the hash of your password to the symmetric key
> used after the key exchange, your browser can then do the same
> on your side.
> This is essentially using a shared secret (the hash of your
> password) as part of the symmetric key. The result is that no one
> in the middle can intercept your communication even if they have
> compromised the certificate.

Good idea, and I raise you, at least for login sites, a public key solution. When registering you don't pick a password, repeat it, type a security question etc. blabla, but simply paste your public GPG key into the provided field. Not only would this serve in lieu of a site password, but it could also, as in your suggestion, be used to encrypt SSL-type exchanges between server and client browser to prevent MITMs.
If SSL public keys were further signed (signable) by actual people, and thus were part of a human web of trust, we'd be a long way ahead in the right direction, IMHO.

Re:The main issue (1)

(Score.5, Interestin (865513) | about 3 years ago | (#35819336)

This already exists, it's a standardised part of TLS called TLS-SRP and TLS-PSK. No browser implements it, because it would make PKI look bad and CAs redundant if implemented.

Re:The main issue (1)

Artraze (600366) | about 3 years ago | (#35785218)

You've hit the nail on the head: security isn't easy. It requires some effort, knowledge, and concern to implement correctly. For security minded individuals, all this has, in many ways, been solved for a while now:
http://en.wikipedia.org/wiki/Web_of_trust [wikipedia.org]
The problem is that establishing and maintaining trust requires a basic understanding of what's happening and some effort. It's not hard, but Joe Sixpack doesn't want to learn anything. It's hard enough to ask them to make sure there's a little lock icon before they enter their credit card number or password...

So I think all this discussion is moot... All that's going to come of it is replacing one authority with another, and all the same basic vulnerabilities will remain with only a few minor alterations. I am, however, cautiously hoping that these discussions will yield the ability for power users to better manage security rather than be forced to either use the weak, easy model or do it totally manually (which is theoretically possible today, I believe, but terribly impractical). In particular I'd like to see untrusted encrypted sessions (they exist, but every time you use one your browser kills a kitten) and a built-in web of trust.

Re:The main issue (1)

Nursie (632944) | about 3 years ago | (#35789248)

Why should I trust a web of trust?

I trust my friends, and maybe their judgement in friends. Beyond that, why would I trust anyone to verify anything?

The CA system for HTTPS is hopelessly broken, this much is clear, but I genuinely don't get how a web of trust is any better. An extended group of people vouching for each other is not my idea of trustworthy either.

don't let the probable dictate the possible (0)

Anonymous Coward | about 3 years ago | (#35786086)

It seems like the real problem is that any good solution to this issue will, by necessity, require the user to make informed decisions about who to trust and who to not trust.

But half TFA's point is that it should at least be *possible* for non-lazy users to decide who to trust. With the current system it's simply not feasible for a user to decide that because Comodo got hacked or because Verisign is co-opted by the Feds you'll stop trusting their CAs entirely, because too many sites you need will stop working.

With a better mechanism in place, those of us who choose to can protect ourselves and create tools to help protect the oblivious masses.

Delegates & Defaults (1)

pavon (30274) | about 3 years ago | (#35786444)

If the system is designed such that anyone can choose who they are going to trust, then people who can't make that decision on their own can still rely on others, such as the browser vendors, to make good default decisions for them. As it is now, it is unfeasible for either individuals or browser vendors to stop trusting large CAs due to the disruption it would cause.

Irony (2)

MobyDisk (75490) | about 3 years ago | (#35784480)

I tried to click the link, but my employer blocks thoughtcrime.org.

Re:Irony (0)

Anonymous Coward | about 3 years ago | (#35786002)

And you were still able to post this? I don't trust you.

Sure, we can replace SSL. (1)

Kenja (541830) | about 3 years ago | (#35784522)

Just as soon as we figure out how to get every browser on every platform updated to support the new standard before it goes live.

Re:Sure, we can replace SSL. (3, Informative)

Anonymous Coward | about 3 years ago | (#35784546)

The idea isn't to replace SSL, just the authenticity mechanism the browsers employ. Most of what's on the table allows browsers to use the new system and old system simultaneously, with a "both must pass" or "either can pass" setting. So it's not the transition that is difficult.

Re:Sure, we can replace SSL. (0)

Anonymous Coward | about 3 years ago | (#35784576)

Or have parallel authentication systems for the transition. New software would default to the SSL replacement system while SSL would remain in place for older technology.

TLS-SRP for the win? (2)

WaffleMonster (969671) | about 3 years ago | (#35784602)

Every SSL web site I care about requires me to login. Why not just make mutual password knowledge part of the SSL handshake and be done with it? Then even if a TLA or someone from a convention in Vegas decides to take over the world, at least the billions of people who have already established trust won't have to worry.

There is still a problem of initial trust when establishing an account, but by punting that to the edge, people would be better able to make their own decisions. Is SSL good enough? Or would they, for example, require me to show up in person with ID when setting up an online bank account?

The real issue with the use of DNSSEC is that CAs verify "who" you are, not just "what" domain you have... though in practice I'm not sure anyone cares about that. Under such a scheme it would theoretically be easier to register a typo domain close enough to a bank's to scrape a steady stream of credentials from unsuspecting victims.

Re:TLS-SRP for the win? (1)

muckracer (1204794) | about 3 years ago | (#35790802)

> Every SSL web site I care about requires me to login. Why not just
> make mutual password knowledge part of the SSL handshake and
> be done with it?

Or a random challenge encrypted with your GPG key. Advantage is, you could use that for every such site.

Re:TLS-SRP for the win? (1)

(Score.5, Interestin (865513) | about 3 years ago | (#35819402)

Every SSL web site I care about requires me to login. Why not just make mutual password knowledge part of the SSL handshake and be done with it?

Because that would make CAs and PKI redundant, so no browser vendor will even consider it.

Self-signed certs + notaries? (3, Interesting)

GameboyRMH (1153867) | about 3 years ago | (#35784632)

Why not switch to self-signed certs + a notary system like Perspectives? [networknotary.org] It would at least be an improvement on today's situation, since there would be no need for CAs and there would be some MITM prevention built into the system.
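The core of the notary idea is small enough to sketch: compare what your own connection sees against what observers on independent network paths report. Everything below (how notaries are reached, their signed observation histories) is hand-waved:

# Toy sketch of notary-style checking, loosely modeled on Perspectives:
# if most independent vantage points see a different certificate than we
# do, something is probably sitting between us and the site.
import hashlib
import ssl

def observed_fingerprint(host, port=443):
    pem = ssl.get_server_certificate((host, port))
    return hashlib.sha256(ssl.PEM_cert_to_DER_cert(pem)).hexdigest()

def notaries_agree(local_fp, notary_fps, quorum=0.75):
    """notary_fps: fingerprints reported by notaries on other network paths."""
    if not notary_fps:
        return False
    agreeing = sum(1 for fp in notary_fps if fp == local_fp)
    return agreeing / len(notary_fps) >= quorum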

Re:Self-signed certs + notaries? (2)

Terrasque (796014) | about 3 years ago | (#35785408)

Agreed. Perspectives is the best solution I've seen for this problem so far.

The only problem I see with it at the moment is if either:

1. An attacker controls all your internet access from before you install it
2. An attacker controls the majority of the Perspectives servers.

But I don't see either of those as insurmountable problems. No. 1 can be solved by a trusted install (e.g. from a CD-ROM), and no. 2 can be solved by crowdsourcing (and some cleverness, like requiring the servers to be in different subnets / countries - thus really complicating things for an attacker).

But, as it threatens a current drug^H^H^H^Hbusiness cartel, and it won't make anyone rich, I doubt it will become a widely enough deployed standard.

Re:Self-signed certs + notaries? (1)

tepples (727027) | about 3 years ago | (#35792140)

1. An attacker controls all your internet access from before you install it
2. An attacker controls the majority of the Perspectives servers.

3. An attacker controls the SSL server's connection to the Internet. This is likely in countries with less protection of speech.

Re:Self-signed certs + notaries? (0)

Anonymous Coward | about 3 years ago | (#35785430)

The problem is that you need to trust the notaries as well. If not, then you are in the same boat. Plus, I would want to make sure that the notaries' traffic is encrypted as well, in order to prevent a MITM between me and the notaries. But then, how do you make sure you are talking to the real notaries and not an impostor of both the site and the notaries? And in case someone says that isn't possible, an ISP or China or any other interested group that controls Internet access may want to find out.

Re:Self-signed certs vs hashstamp (off-topic) (1)

xpane (2011212) | about 3 years ago | (#35791026)

deskpane.build.236 hashstamp below
This is somewhat off-topic. Do you know of any cheap hashstamp servers? By hashstamp server I mean a place where you can make a datestamped, no-edit, no-delete entry containing, say, the hash of one of your builds. Not having found one, I think Slashdot could make money by offering the datestamped, no-edit, no-delete feature of discussion comments FOR JOURNAL ENTRIES. A lite discussion in my Journal for slashdot user xpane, more discussion in the Journal for slashdot user hashstamp.
So anyway, for today here is the hashstamp of a release-candidate build for my big idea. Thanks for tolerating this somewhat off-topic post; is there another way I could get the no-edit, no-delete feature without needlessly bothering anyone, perhaps even paying 10 cents per hashstamp? I WOULD!
file C:\zzzz\zzzzWcDemo\deskPane_win32_x64_2011_APR_11_00236.zip nbytes 0x4F543D 5198909 CRC32 f66b9662 MD5 3b0e7a6e545f7e7a8a34f6bfd152940a SHA-1 6806f4b125e950685a09314c2e56b361847150a5 SHA-256 95e04cb9a0d1bb953c9c6e68a642a76add61ebc51914a4cbbb95ecff1e08d15f

Re:Self-signed certs + notaries? (0)

Anonymous Coward | about 3 years ago | (#35785748)

Yeah, but...
Here's the current list [networknotary.org] of Perspectives notaries:
cmu.ron.lcs.mit.edu
convoke.ron.lcs.mit.edu
mvn.ron.lcs.mit.edu
hostway.ron.lcs.mit.edu

Not exactly a robust distributed web of trust without a single point of failure. And I really don't want to put that much trust in MIT since the Josef Oehmen thing...

Two Faces of an Issue (2)

NicknamesAreStupid (1040118) | about 3 years ago | (#35784652)

Like many things that are too hard to grasp or solve at a technical level, people tend to shift focus to something more discernible. In this case, it is the fact that millions of people who are a part of SSL need to be persuaded to change. This is the "devil you know versus the devil you don't know" choice. Since most of the unknown devils are of the same paradigm as the known devil, the status quo will remain. True, there may be a catastrophe that frightens everyone into adopting whatever seems like a safe harbor at the moment. Barring that, it will probably take a new paradigm, perhaps a transparent network that can converge very, very fast, making most catastrophic exploits ephemeral and limited in scope. True, that would be far from perfect, but then, what is close to perfect?

Chain of Trust is Good, Penalty for Encryption Bad (5, Insightful)

inglorion_on_the_net (1965514) | about 3 years ago | (#35784806)

I don't have a big problem with the way the chain of trust works. I have software on my computer that allows me to manage the certificates that I trust. That way, I can decide for myself. Since I don't actually want to bother to do so, I defer to my operating system vendor's judgment. They provide a package containing a list of trusted certificates, which I then use. I can have as much or as little control as I want. I think this is a good system.

What I do have a problem with is the fact that many applications will use cleartext connections without complaint, but give ominous warnings when using TLS with self-signed certificates. Sure, self-signed certificates don't provide authentication, but neither do clear connections. With TLS, at least I get encryption. This should be a step up in security. At the very least, security is no worse than without TLS.

I am OK with a warning being shown the first time I connect to a service with a non-trusted SSL certificate, but I feel applications should take a page from SSH here: give a warning that isn't too ominous, and offer the chance to save the public key. Then, next time I connect, if the key matches, go right ahead without a warning. And shout if the key does not match. This should provide good security if the first contact is uncompromised. Importantly, it matches the scariness of the warnings with the risk of the situation.
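A crude sketch of that SSH-style behaviour - pin on first contact, stay quiet while the key matches, shout when it changes. The pin file location and format are made up for the example:

# Trust-on-first-use pinning for TLS, modeled on SSH's known_hosts.
import hashlib
import json
import os
import ssl

PIN_FILE = os.path.expanduser("~/.tls_known_hosts.json")  # hypothetical location

def fingerprint(host, port=443):
    pem = ssl.get_server_certificate((host, port))
    return hashlib.sha256(ssl.PEM_cert_to_DER_cert(pem)).hexdigest()

def check_host(host, port=443):
    pins = {}
    if os.path.exists(PIN_FILE):
        with open(PIN_FILE) as f:
            pins = json.load(f)
    seen = fingerprint(host, port)
    key = "%s:%d" % (host, port)
    if key not in pins:
        # First contact: remember the certificate, mild notice only.
        pins[key] = seen
        with open(PIN_FILE, "w") as f:
            json.dump(pins, f)
        return "first-contact"
    if pins[key] != seen:
        # The certificate changed since last visit: loud warning.
        return "MISMATCH"
    return "ok"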

Re:Chain of Trust is Good, Penalty for Encryption (1)

DarkOx (621550) | about 3 years ago | (#35785594)

I have software on my computer that allows me to manage the certificates that I trust. That way, I can decide for myself. Since I don't actually want to bother to do so, I defer to my operating system vendor's judgment.

What choice do you have other than deferring to your operating system or browser vendor's judgment? I think you are telling yourself something to avoid a feeling of helplessness. I know you can remove Comodo and the like when you read they have had a breach, but what can you really do about the others? Do you have the resources to audit their practices? Once you remove a CA, how do you establish trust with any sites that use them as their authority? Do you call customer service and get them to read a thumbprint to you? How do you get the contact info for CS, from the website you can't authenticate?

Really, I don't think the typical user is empowered to choose at all. Even the power user has few options, and the ones they do have seriously limit who they can interact with, or require them to make compromises which are every bit as bad as just trusting the vendor and the CAs.

Re:Chain of Trust is Good, Penalty for Encryption (1)

AmiMoJo (196126) | about 3 years ago | (#35791324)

The messages are warning you that a supposedly trustworthy server might not be what it claims to be. Unencrypted connections are by their nature untrustworthy so unless you want a warning whenever one is opened (hint: that would be about 20 if you open Google's search results in Chrome) there seems little point in reminding you of that fact.

The real problem is not about you; it is about the organisations that rely on SSL, like banks. If users get scary security warnings they won't want to use online banking. If criminals can get hold of valid certificates for their own phishing web sites, the banks will lose a lot of money*. Users can't be trusted to look after their own security, and a significant number of them don't get the OS updates that would be required by any new system. That is why banks want you to install their own crapware or use a token - they don't really trust your PC.

What we need are fewer registrars and more oversight. Unfortunately that is next to impossible to do because each country wants its own trusted company to issue certs (e.g. China wouldn't want to rely on US certs and vice versa) and the cost has to be kept low to make it viable for small sites.

* UK banking law says they are liable, don't know about US.

MITM from day one (1)

tepples (727027) | about 3 years ago | (#35792176)

I feel applications should take a page from SSH here: give a warning that isn't too ominous, and offer the chance to save the public key.

Which doesn't help if there is a man in the middle from day one. This has happened in the wild (https://bugzilla.mozilla.org/show_bug.cgi?id=460374).

Minimum standards for CA Relying Party Agreements (5, Interesting)

Animats (122034) | about 3 years ago | (#35785128)

Certificate Authorities issue "Relying Party Agreements", which specify their obligations to users relying on their certificates. Some of these specify financial penalties payable to end users. Over the years, as with EULAs, these have been made so favorable to the CAs as to make them meaningless. (See, for example, Verisign's relying party agreement [thawte.com]. Or, worse, the one from Starfield, GoDaddy's CA [godaddy.com].)

Now it's time to push back.

The Mozilla Foundation should issue a tough standard for CA Relying Party Agreements to get a root cert into Mozilla - one that makes CAs financially responsible for false certs they issue, with a minimum liability limit of at least $100,000. The CA must be required to post a bond. A third-party consumer-oriented organization like BEUC (in the EU) or Consumers Union (in the US), not the CA, must decide claims.

The technology behind SSL is fine. The problem is allowing CAs that aren't doing due diligence on their customers to have root certificates in major browsers. Mozilla all by itself has enough power to tighten up standards in this area. All it takes is the will.

Re:Minimum standards for CA Relying Party Agreemen (1)

19thNervousBreakdown (768619) | about 3 years ago | (#35786320)

I'm not sure Mozilla all by itself has anywhere near that level of power, but I absolutely stand behind the concept--right now, the only thing the CAs have to lose is their reputation, as if that matters at all. Hey, remember that time you stood around with your friends talking about how Verisign will just give a certificate out to any Joe Schmoe?

The only thing a corporation cares about is money, so if you want me to trust you, put your money on the line. I trust that you'll do everything necessary to protect that.

Re:Minimum standards for CA Relying Party Agreemen (0)

Anonymous Coward | about 3 years ago | (#35786948)

This is the best response by far. There is a lot of precedent; for example, a bond is required to transmit money in most US states. Keep pushing this idea.

Centralized authority (0)

Anonymous Coward | about 3 years ago | (#35785432)

Trust coming from a central authority that isn't elected, isn't accountable, and doesn't care about anything other than money isn't viable in the modern world, let alone the post-modern one.

Incorrect observations (0)

Anonymous Coward | about 3 years ago | (#35785688)

From the article:

So the "distributedness" of the two cases [CA and DNSSEC] is exactly the same

This is incorrect. The primary problem with the SSL CA hierarchy is that any CA can sign certificates for any domain. This is not possible with DNSSEC: There is exactly one chain of trust for a domain.

people that we have to simultaneously trust: 1. The registrars

Trusting the registry/registrar and everybody else in the domain chain is already a requirement because control of a domain is the only requirement for getting a certificate from an SSL CA. The SSL CA chain therefore does not add any security on top of DNSSEC.

Additionally, many of the players in the domain game are also SSL CAs themselves or can arguably control one of the established SSL CAs. If you don't trust the operator of a CCTLD, there's a very high chance that the reasons for your distrust apply equally to an SSL CA from the same country. And that's just one of many more entities which can betray you in the standard SSL model.

In summary: DNSSEC requires you to trust much fewer entities, all of which you already need to trust anyway.

No Trust Agility

Trust agility in SSL means that users blindly trust every SSL CA. That's the reason why a server operator can choose to go to a different CA. It's also the primary cause for the problems that standard SSL is facing.

The hope that Verisign is going to be kicked out of the browsers' root CA stores if they misbehave is beyond naive. This would immediately invalidate so many certificates that the whole point of SSL would be lost. It is much more likely that Verisign loses the TLDs if they misbehave.

It's possible (1)

RyanZA (2039006) | about 3 years ago | (#35785812)

A real system to take care of this issue is possible.

There's a number of problems here, and each one needs to be addressed, but luckily none of them are really all that contradictory. I'll detail a practical system:

Problem 1: A random user needs to be able to connect into the verification system easily.
Solution: Handle this in the same way DNS is handled. When setting up your connection settings (IP address, DNS, etc), include a new setting such as 'SSL Verification Address Book' (henceforth referred to as SVAB). Since most people won't know this as they wouldn't know their DNS server, extend DHCP (the system that gives you your IP and DNS) to also give you a SVAB. The SVAB could be hosted by anyone, but would likely be hosted by your ISP.

Problem 2: A malicious user who is running the SVAB can change any details he wants, and send you his own crafted entries.
Solution: Instead of the SVAB returning the actual tokens, it will only provide the client with a list of 'SSL Verification Servers' (henceforth referred to as SVS). Each SVAB would be responsible for its own list of SVS. When the client needs to verify a server, it will send a request for that site to all of the SVS in the list.

Problem 3: A malicious user who is running an SVS can change any details he wants, and send you his own crafted entries.
Solution: Since the client has a whole list of SVS, and will receive a response from each one, it will be very easy for the client to detect that an entry has been tampered with. To determine the correct entry, the client would simply go with the majority correct information, and display a warning. If the majority is less than some percentage (75%?), an error could be displayed instead of the warning.

Problem 4: A malicious SVAB could send you a list of SVS servers to which they have altered the entry across all of the SVS servers.
Almost Solution: If a policy is implemented in the client where the servers must come from some minimum number of top level domains, or class B internet routable IP addresses, then the task of setting up the malicious SVS servers could be made extremely difficult. In addition, your SVAB would nearly always be hosted by your ISP - and if you do not trust your ISP, you could switch to a new one. MITM attacks between you and your ISP could be possible as well, but the risk could hopefully be minimized - possibly by your ISP or trusted host for your SVAB providing you with a secure key.

Problem 5: Registering your site with the many SVS servers.
Solution: There's a number of possibilities here: a first come, first served basis whereby you would register your site with one node, and that node would propagate your certificate to other nodes as long as the entry does not already exist. Alternatively, the internet site itself could host its key (http://internetsite/svskey) and each SVS would be in charge of retrieving the key itself, and caching it. If the key on the internet site changed, a second key might be provided in the previous key to validate the new key. At any rate, this problem should not be too difficult, and I believe the internet site hosting its own key would be by far the best solution as it would make it very easy to set up for the internet site.

So to summarize, you would have a SVAB entry in your internet settings which could be entered manually or automatically via DHCP. This SVAB would provide a list of diverse SVS, which would all be queried by the client. The majority response from the SVS servers would be considered the correct one. Registering your own internet site would be easily done by uploading a special key file.

I believe this takes care of the main fear of TFA, which is that you have to trust someone permanently. In this case, you would only have to truly trust your chosen SVAB to create a very good list of SVS servers. This system would also have the advantage of being particularly easy to implement, as well.
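The client-side decision rule from Problem 3 above might look something like this; the 75% figure comes from the comment, everything else (names, return values) is invented for the sketch:

# Majority check over SVS answers: go with the majority key, warn on any
# disagreement, refuse outright when agreement falls below the threshold.
from collections import Counter

def evaluate_svs_answers(answers, threshold=0.75):
    """answers: key/fingerprint strings returned by each queried SVS.
    Returns (key_or_None, status) with status 'ok', 'warn' or 'error'."""
    if not answers:
        return None, "error"
    key, votes = Counter(answers).most_common(1)[0]
    if votes / len(answers) < threshold:
        return None, "error"   # too much disagreement: refuse to connect
    if votes < len(answers):
        return key, "warn"     # a minority disagreed: warn but proceed
    return key, "ok"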

Re:It's possible (1)

RyanZA (2039006) | about 3 years ago | (#35785932)

Replying to my own comment, but...

I realized after posting that this isn't required in internet settings at all, obviously.

This system could be implemented as a Firefox plugin (without the DHCP part). Closer to how BitTorrent works, you could install the plugin, then browse to your chosen SVAB's website and download their SVAB server's address and key into your plugin. This does leave you with transferring the key once over HTTP - but it could also be transferred via disk if required, etc. I'd also say that a single key transfer, after which you're safe until you choose to change your SVAB, is a marked improvement over the current system.

At any rate, it definitely lowers the barrier to entry down to the level of a browser plugin!

Separation of security from domain is absurd (0)

Anonymous Coward | about 3 years ago | (#35785950)

Any security scheme verifying a domain which is not part of the domain name system itself really doesn't make any sense, and is already flawed [blogspot.com]. Once the domain is verified, it can itself establish a connection based on key exchange and public key cryptography. Our domain system is really what needs an overhaul.

The problem is not with SSL. (1)

jonadab (583620) | about 3 years ago | (#35787728)

I sure wish people would stop saying "SSL" when what they mean is "https".

SSL is fine. Other protocols based on SSL (notably, SSH) are fine.

The problem is that https was designed badly and uses SSL in a grossly insecure manner, and is therefore fundamentally broken.

If the protocol were designed securely, the browser would check that the server cert presented either is the *same* cert that was presented the last time the site was visited, or at least is signed by the *same* authority cert as signed the previous one. That's a minimum, and it's overwhelmingly more important than the specific identity of the company that got paid to sign the thing, or whether said certificate authority is on The List. I mean, sure, check that too I suppose, but it is *vitally* important to check that the cert hasn't been issued by a new or different authority since the last time the user visited the site. (Exceptions could be made to that, I suppose, if the previous cert is expired.)

(Yes, yes, a mechanism is needed to allow first-time visitors to a site to verify the identity of the site. This is less important than the current problems, because it's difficult for an attacker to predict first-time visits. At worst you use something very similar to the current mechanism the first time the user visits any given site, at which point the current rash of problems is limited to first-time visits, which are significantly less frequent and MUCH more difficult for the attacker to predict than repeat visits. It might be possible to do better than that.)

Re:The problem is not with SSL. (1)

Nursie (632944) | about 3 years ago | (#35789190)

Came here to say this. Shame most people won't get the difference.

The problem is with the trust infrastructure around HTTPS. SSL the protocol is not at issue here.

Moxie Marlinspike is clearly a great security analyst and a pretty good writer, but it would be nice if he had mentioned the difference. SSL has far wider uses than the web, many of which make no use of public trust infrastructure, do not use http (so do not suffer from SSLStrip attacks), are limited to good ciphersuites that don't use out of date hashes or weak encryption methods and are generally A Good Thing (tm).

There are some known problems with SSL/TLS; I believe there was a data-injection flaw discovered some time ago. But it's not hopelessly broken. HTTPS may be hopelessly broken. SSL is not.

Re:The problem is not with SSL. (0)

Anonymous Coward | about 3 years ago | (#35790780)

What a load of BS. SSH is not based on SSL, and the problem is not limited to HTTPS. EVERY protocol which uses SSL/TLS is affected, unless you just don't care if there is a man in the middle. If you do care, you want to make sure that you are speaking with the correct (remote) end, and here we are again at using certificates and checking them against the hostname.

You clearly have no clue.

Re:The problem is not with SSL. (1)

maxwell demon (590494) | about 3 years ago | (#35790832)

In addition, if a new certificate is used, it could additionally be signed with the previous certificate to signify that it's a valid successor. Then when presented with a new, unknown certificate for a known site, the browser could verify the "chain signature" with the known previous certificate, and thus verify that the site is the same. This would additionally catch the case where someone buys a domain (and thus can legally get a certificate for it), but uses it for a completely different service.
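Verifying such a "chain signature" could look roughly like the sketch below, using the third-party cryptography package; how the endorsement signature would actually be transported (an extension, a DNS record, a well-known URL) is left open, and an RSA key is assumed:

# Check that the new certificate was endorsed by the key of the certificate
# it replaces: the old key signs the new certificate's DER encoding.
from cryptography import x509
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.hazmat.primitives.serialization import Encoding

def is_valid_successor(old_cert_pem: bytes, new_cert_pem: bytes,
                       endorsement_sig: bytes) -> bool:
    old_cert = x509.load_pem_x509_certificate(old_cert_pem)
    new_cert = x509.load_pem_x509_certificate(new_cert_pem)
    try:
        old_cert.public_key().verify(      # assumes an RSA key in old_cert
            endorsement_sig,
            new_cert.public_bytes(Encoding.DER),
            padding.PKCS1v15(),
            hashes.SHA256(),
        )
        return True
    except InvalidSignature:
        return False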


Leaving out the CAs - like with PGP/GPG (2)

bradgoodman (964302) | about 3 years ago | (#35788470)

I do like the PGP/GPG model as well, but my belief is that it could be extended more into the "real-world".

For example - if I get a GPG-encrypted email from a Red Hat employee, how do I know it's real? Simple: Red Hat employees put their GPG fingerprints on their physical, old-school business cards.

If businesses took this approach too, it would work.

For example, why doesn't my bank print their SSL fingerprint (assuming they have those - not really sure) on my credit card, letterhead, and other "official" materials?

Taking this kind of approach literally takes the CA out of the equation. I trust the certificate because I know it's valid - not because someone else says it is.

P.S. With a name like Moxie Marlinspike - it seems like he should have a handlebar mustache.

They're doing it (1)

lavamind (1111431) | about 3 years ago | (#35789130)

"The Monkeysphere project's goal is to extend OpenPGP's web of trust to new areas of the Internet to help us securely identify servers we connect to, as well as each other while we work online. The suite of Monkeysphere utilities provides a framework to transparently leverage the web of trust for authentication of TLS/SSL communications through the normal use of tools you are familiar with, such as your web browser0 or secure shell." See: http://web.monkeysphere.info/ [monkeysphere.info]


