
IETF To Change TLS Implementation In Applications

timothy posted about 4 months ago | from the nice-orderly-scramble dept.


Trailrunner7 writes "The NSA surveillance scandal has created ripples all across the Internet, and the latest one is a new effort from the IETF to change the way that encryption is used in a variety of critical application protocols, including HTTP and SMTP. The new TLS application working group was formed to help developers, and the people who deploy their applications, incorporate the encryption protocol correctly. TLS is the successor to SSL and is used to encrypt information in a variety of applications, but is most often encountered by users in their Web browsers. Sites use it to secure their communications with users, and in the wake of the revelations about the ways the NSA is eavesdropping on email and Web traffic, its use has become much more important. The IETF is trying to help ensure that it's deployed properly, reducing the errors that could make surveillance and other attacks easier."
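In practice, "deploying TLS correctly" often comes down to taking the platform's secure defaults instead of hand-rolling settings. As a hedged illustration (this is not part of the IETF effort itself, just an example of the class of mistake it targets), Python's `ssl` module shows the defaults applications are supposed to leave alone:

```python
import ssl

# A correctly configured client context: certificate verification and
# hostname checking are on by default, and obsolete protocol versions
# are disabled.
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname is True

# The common deployment error is code like the following, which
# silently makes the connection trivially interceptable:
#   ctx.check_hostname = False
#   ctx.verify_mode = ssl.CERT_NONE
```

The same pattern applies to SMTP: a mail server that accepts STARTTLS but never verifies the peer's certificate gets encryption without authentication.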

80 comments

IN THE MEAN TIME . . . (-1)

Anonymous Coward | about 4 months ago | (#45693647)

All your bases are belong to us !!

Well done...? (-1)

Anonymous Coward | about 4 months ago | (#45693659)

Sounds like a good initiative at first but what will this actually achieve? Seriously, the government might be prevented from catching a few terrorists and the public "feels safer." Huh, really clever thinking there.

Didn't I read this like yesterday? (-1)

Anonymous Coward | about 4 months ago | (#45693677)

Fuck off, stupid editors. You don't deserve to run this site. I think I'm going to take it down for you.

Ok (4, Insightful)

trifish (826353) | about 4 months ago | (#45693727)

Just, please, this time, try to be more careful about who joins your working groups. And especially what their true intentions are.

Sometimes when someone tries to "simplify deployment" or "offers insight to prevent user confusion", etc., you may want to think twice. History repeats itself, you know.

Re:Ok (2)

bmimatt (1021295) | about 4 months ago | (#45693795)

And open it up for everyone on the Internet to be able to review. That's the only way to avoid sabotage. Github it.

Re:Ok (1)

Anonymous Coward | about 4 months ago | (#45693811)

GitHub? Those guys who censor repositories when they don't like it? No thanks.

join the group. I did. Most work done via mailing (5, Informative)

raymorris (2726007) | about 4 months ago | (#45694877)

This work is being done by the IETF, the Internet Engineering Task Force, an open organization that does most of its work via its mailing list. Anyone can read the daily message archive or join. I was a member for several years, and you too are welcome to lurk, or join and be active.

The only caveat: please remember this is how Jon Postel, DJB, and others of similar skill got work done. Anything you post goes to the inboxes of many of the internet's primary architects, so please read for a while first to get a feel for how the group works, then contribute in your area of expertise. When posting, you're working with the world's top experts on internet technology, so please keep that in mind.

Re:join the group. I did. Most work done via maili (1)

Anonymous Coward | about 4 months ago | (#45696403)

I was wondering why you mentioned Jon, who died over 15 years ago, but then I realized I cannot think of any equally representative names from recent years either... does that say something about the IETF, or more about our capacity to anoint new leadership?

Vint Cerf (2)

raymorris (2726007) | about 4 months ago | (#45696707)

Vint Cerf is active with IETF, or was when I was.
When I was new, I very nearly scolded him publicly for posting off-topic. I composed a scolding email before realizing who I was scolding. When I saw that it was v.cerf@ I decided to let someone else, TB Lee or someone, do the scolding if necessary. Not that I'm averse to telling the emperor he has no clothes, but I was a total newbie, a baby compared to them, so I felt I should let the long-time members uphold their own code of etiquette in the way they had established.

It was actually Cerf I was thinking of when I wrote Postel, but Postel surely was, or would have been, a member too.

IETF was good for me in the same way that LKML is. I have a pretty big ego and working with Cerf and people like that helps keep me right-sized. I'm reminded that I'm neither a big shot nor a newbie, but just another guy with some arbitrary level of ability. I know more than some people and I know less than some people. When I start to think I'm an expert, a look at my inbox reminds me that on any topic there are at least a hundred people more expert than I.

Re:join the group. I did. Most work done via maili (0)

Anonymous Coward | about 4 months ago | (#45703173)

There's lots of old guys still active (Vixie comes to mind right away, and the rest of the ISC and Berkeley old guards) but you're right there's damn few really young faces.

There's a certain kind of post-selfish idealist that the USA, at least, doesn't really breed any more. You're more likely to see folks like that coming from Mexico, or Finland, or South Africa...

Re:Ok (2)

AHuxley (892839) | about 4 months ago | (#45693995)

Re History repeats itself, you know.
I would suggest reading all about Enigma, Enigma after WW2, the NATO/embassy encryption with 'tempest' plain text, the later sales of weakened global crypto machines with junk math, early cell phones....
All this was well understood into the 1990's.
Snowden now fills in the missing US telco and US crypto gaps in US science/gov/academia.
A lot of trusted junk telco tech and code seems to have passed with great reviews :)
http://it.slashdot.org/story/11/06/06/2045203/25-of-us-hackers-are-fbicia-informers [slashdot.org]

Re:Ok (1)

mysidia (191772) | about 4 months ago | (#45696009)

I would suggest reading all about Enigma, Enigma after WW2

See, Enigma was pretty cool, but only the military units got the crucial plugboard feature -- and they all suffered a flaw: a letter could never encrypt to itself.
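That flaw is directly exploitable: since no letter ever encrypts to itself, any alignment where a guessed plaintext word (a "crib") has a letter matching the ciphertext letter above it can be ruled out. A rough sketch of that elimination step (the intercept and crib below are made up for illustration):

```python
def possible_crib_positions(ciphertext: str, crib: str) -> list[int]:
    """Return the alignments NOT ruled out by Enigma's no-self-encryption flaw.

    A position is impossible if any crib letter lines up with an identical
    ciphertext letter, since Enigma never maps a letter to itself.
    """
    positions = []
    for i in range(len(ciphertext) - len(crib) + 1):
        window = ciphertext[i:i + len(crib)]
        if all(c != p for c, p in zip(window, crib)):
            positions.append(i)
    return positions

# Hypothetical intercept, with the German word "WETTER" guessed as a crib:
ct = "QFZWRWIVTYRESXBFOGKUHQBAISE"
print(possible_crib_positions(ct, "WETTER"))
```

Bletchley Park used exactly this kind of elimination to anchor cribs before running the bombes.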

Re: Ok (0)

Anonymous Coward | about 4 months ago | (#45697499)

It is simpler than that. We have who and what. Now: when, where, and how.

Reporting is a good thing.

End of certificates, please? (5, Interesting)

d33tah (2722297) | about 4 months ago | (#45693761)

Does this mean that we'll finally give up on this sick certificate-based trust scheme? It's not like Moxie hasn't proposed his own solutions, complete with implementations... why don't we make THOSE internet standards? Making encryption stronger is pointless if you can fake a certificate.

Re:End of certificates, please? (2, Informative)

Anonymous Coward | about 4 months ago | (#45693789)

The blame should not be put on using certificates but trusting an unknown certificate just because one of 500 or so certificate authorities signed it.

Re:End of certificates, please? (3, Interesting)

d33tah (2722297) | about 4 months ago | (#45693803)

I agree. But that's what makes this model useless. We shouldn't outsource trust to CAs, but push it to the users. Let them decide whom they trust. If, after the VeriSign fiasco, they don't trust VeriSign anymore, they should be able to revoke that trust without losing the ability to view a quarter of the internet. Seriously, guys, go watch any of Moxie's talks and you'll understand the issue much better.

Re:End of certificates, please? (2)

Skylinux (942824) | about 4 months ago | (#45693955)

Let them decide whom they trust.

It will fail right here. Most users cannot be trusted to make informed decisions.
How could they, when they don't even know the difference between a hard drive, a modem, and the case housing the individual components?

I prefer your solution but the industry is not moving towards "more user rights".

Re:End of certificates, please? (2, Funny)

Anonymous Coward | about 4 months ago | (#45694279)

Maybe the government could run a CA, and then we all just trust that one CA.

Re:End of certificates, please? (0)

Anonymous Coward | about 4 months ago | (#45694363)

Maybe the government could run a CA, and then we all just trust that one CA.

What part of government should do that? The NSA?

Have you checked who voted for this government? (0)

Anonymous Coward | about 4 months ago | (#45694511)

Exactly! The same people you don't trust to make informed decisions...

Re:End of certificates, please? (1)

thogard (43403) | about 4 months ago | (#45695125)

That is the ITU's current plan. It was also a core concept of the X.400/X.500 based email systems.

Re:End of certificates, please? (1)

mysidia (191772) | about 4 months ago | (#45696499)

That is the ITU's current plan. It was also a core concept of the X.400/X.500 based email systems.

Bleh... no one would use X.500 email systems though.... isn't Microsoft Exchange the only system that uses X.500 addressing for e-mail, with everyone else doing SMTP / RFC 8xx-style Mailbox@Example.com email addresses?

Re:End of certificates, please? (2)

thogard (43403) | about 4 months ago | (#45699287)

GOSSIP is still the required email system for the US DOD and other government agencies; it is just that SMTP is an allowed migration path, thanks to two words I added to a very large document a long time ago. There is no functioning X.400 email system that I know of. Exchange was catching up with the very broken ISODE, which had been the reference implementation for decades.

Re:End of certificates, please? (1)

RockDoctor (15477) | about 4 months ago | (#45712265)

Pick a government, any government. There's about 200 to choose from, so that's not a huge improvement on 500-odd Certificate Authorities.

Oh, you want me to trust your government. Now, why would I do that?

Re: End of certificates, please? (1)

xelah (176252) | about 4 months ago | (#45695217)

Knowing that sort of thing wouldn't help anyway, nor would any amount of technical knowledge. You need to know who everyone is, their intentions, relationships and the intentions of those they have relationships with, their competence and their past behaviour. It's too much to expect of nearly anyone just to have their browser do what they've asked (which is connect to who they've asked to connect to). A user just wants their computer to connect to their bank/shop/email and will object to it foisting that kind of stuff on them, even if they're perfectly capable of deciding for themselves.

Re:End of certificates, please? (0)

Anonymous Coward | about 4 months ago | (#45694273)

Seriously, guys, go watch any Moxie's talk and you'll understand the issue much better.

I can't find a link to that in TFA nor in this thread. If you have one to hand, it would be much appreciated.

Re:End of certificates, please? (1)

Joce640k (829181) | about 4 months ago | (#45694437)

I agree. But that's what makes this model useless. We shouldn't outsource trust to CA's, but push it to the users. Let them decide who do they trust.

Yes, most people are really good at making informed decisions based on complex technical information.

Not.

Re:End of certificates, please? (1)

Anonymous Coward | about 4 months ago | (#45693809)

And various domestic and foreign intelligence agencies are bound to run certificate authorization outfits.

Re:End of certificates, please? (2, Insightful)

Anonymous Coward | about 4 months ago | (#45693917)

Does this mean that we'll finally give up on this sick certificate-based trust scheme?

No. There are many people and institutions that are fine with the existing scheme and are not interested in adopting new techniques to thwart the NSA or whomever. The US government, for instance, will not be adopting an anti-NSA mentality any time soon, so they're not going to walk away from traditional CAs. Many businesses see no jeopardy to their business model if they continue to use cryptographic techniques that are vulnerable to the NSA or other national governments; as long as those techniques are sufficient to avoid legal jeopardy (disclosure laws, etc.) in the nations they operate in they won't concern themselves with the issue. In fact, they will almost certainly conclude that pursuing new techniques specifically to overcome these vulnerabilities will draw unwanted attention.

Sorry but there it is.

Re:End of certificates, please? (1)

davecb (6526) | about 4 months ago | (#45695087)

Organizations are generally more concerned about foreign governments, such as ones who got a "google" certificate from a nominally Dutch CA. If they get told "you may not do business with country X", they'll be specifically interested in being sure country X can't eavesdrop on them with a forged certificate.

They're already quite aware certificates can be forged: many have forged their own to snoop on their employees.

Businesses are failure-averse. If they need to adopt a new scheme for certification in order to stay in business, they will.

First things first, limiting CA's scope, please. (5, Interesting)

Anonymous Coward | about 4 months ago | (#45694027)

One of the major problems is that there are currently no limits on what a CA can sign, and even though there is an urgent need for a major revamp of the protocol, I would first like to see TLS 1.next at least fill that gap.

Can someone please justify why, for example, Türktrust can sign a certificate for a *.gov or *.mil domain? Or why a Spanish CA issued a wildcard *.google.com certificate to someone?

Preventing that should be a minimal short-term goal; implementation shouldn't be delayed for years, but could start from the beginning of 2015.

There are many ways to implement this. For example: add OIDs to the root certificate stating the policy TLDs the CA is authorized for, and then verify with a DNS query to the TLD's controlling party, asking for RRs stating whether that CA's authorization is current and not revoked. The protocol could be a lightweight DNSCurve, for example. But like I said, there are many ways of doing it. The hardest cases to solve would be those where no network connection exists before the certificate is offered, such as 802.1X/EAP, without chicken-and-egg problems.

IMHO, the newly founded working group should concentrate on longer-term development, but first things first: the big gaping hole in the current implementation should be fixed ASAP.

Two years ago a post (Honest Achmed's Used Cars and Certificates [mozilla.org]) applying for a root CA with Mozilla was funny, but not any more. There are now so many incidents of falsely issued certificates, even root certificates, that they could have admitted Achmed and his brother (who knows a few things about computers) as a root and the situation wouldn't have been much worse by now.
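The DNS-lookup idea sketched above is similar in spirit to the CAA record (RFC 6844): the domain side publishes which CAs are authorized, and relying parties check it before accepting a signature. A minimal illustration, with a plain dict standing in for the (DNSSEC/DNSCurve-secured) DNS query; all zone and CA names below are hypothetical:

```python
# Stand-in for DNS: maps a zone to the CAs its controller authorizes.
# A real deployment would fetch and verify signed resource records.
AUTHORIZED_CAS = {
    "gov":        {"US Federal PKI"},
    "mil":        {"DoD Root CA"},
    "google.com": {"GeoTrust", "GlobalSign"},
}

def ca_authorized(domain: str, ca: str) -> bool:
    """Walk up from the domain toward its TLD looking for an authorization record."""
    labels = domain.split(".")
    for i in range(len(labels)):
        zone = ".".join(labels[i:])
        if zone in AUTHORIZED_CAS:
            return ca in AUTHORIZED_CAS[zone]
    return False  # no record published: reject (a policy choice)

print(ca_authorized("www.google.com", "GeoTrust"))   # True
print(ca_authorized("whitehouse.gov", "Türktrust"))  # False
```

Under this policy, Türktrust signing *.gov would simply never validate, no matter what the certificate says.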

limiting CA's scope, and use 1 (1)

anon mouse-cow-aard (443646) | about 4 months ago | (#45694823)

This... Further... It is not just that CAs can sign for any site; it is that sites can only ever use one CA. If you want to make CAs accountable (i.e. when one has awful security, or is buddies with people you don't trust), then groups and people need to be able to un-trust them without large parts of the internet going dark.

Also, different people will trust different CAs. There is no CA that both the Chinese and American governments will trust. If you want to sell to both, you will likely need to use an approved CA for each. Similarly, privacy advocates likely do not want to be using a CA approved by any government. So a single CA makes no sense, because it has to be a trusted third party, and there is no such thing as a third party that every combination of two parties trusts.

CAs should be negotiated on connection, just like ciphers. A bank could have a dozen certificates, and other, less well-funded groups might have fewer. When a CA gets compromised, people can drop it from their list and use other certificates. There would be reputation systems to figure out which CAs are reasonable to trust (WoT-like systems...)

Re:First things first, limiting CA's scope, please (2)

mysidia (191772) | about 4 months ago | (#45696073)

Can someone, please, if they can justify why for example Türktrust can sign a certificate for a *.gov and .*mil domain? Or why Spanish CA issued a wildcard *.google.com to someone, please?

Personally, I would favor requiring server certificates to be signed by a minimum of 3 CAs, perhaps by using a separate trust document file: a "third-party CA auxiliary attestation of certificate trust".

The standard could then be --- at least two of the authorities must reside in different geographical jurisdictions. At least three of the attestations must be from authorities that have no business relationship with each other.

And: the user can specify a number of "points" to assign to each CA on a scale from 1 to 5, with a browser default value of 3 for major CAs such as Verisign.

If the score of the certificate is less than 8, or another user-configured value, then the cert is untrusted.
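The scoring rule above is simple enough to sketch directly. A minimal illustration (the CA names and point values are hypothetical, standing in for the user's configured trust list):

```python
# Hypothetical per-CA trust points (1-5), per the scheme described above;
# CAs not in the user's list contribute nothing.
CA_POINTS = {"Verisign": 3, "GlobalSign": 3, "ExampleRegionalCA": 2}

def certificate_score(signers: list) -> int:
    """Sum the user's trust points over every CA that signed the cert."""
    return sum(CA_POINTS.get(ca, 0) for ca in signers)

def is_trusted(signers: list, threshold: int = 8) -> bool:
    """Default threshold of 8, user-configurable."""
    return certificate_score(signers) >= threshold

# Three known CAs clear the threshold (3 + 3 + 2 = 8); one alone does not.
print(is_trusted(["Verisign", "GlobalSign", "ExampleRegionalCA"]))  # True
print(is_trusted(["Verisign"]))                                     # False
```

One consequence worth noting: a single compromised CA can no longer mint a trusted certificate by itself, since its points alone fall below any sensible threshold.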

Re:First things first, limiting CA's scope, please (1)

IamTheRealMike (537420) | about 4 months ago | (#45696355)

X.509 already has a name constraints extension. The problem with TLS is not necessarily its features or design, but that solutions or upgrades often become difficult to deploy because the standard for "this works" is "every device on the planet can connect", a standard that is often unreachable when you start thinking about buggy SSL stacks in embedded devices that never get upgraded.

If you were willing to accept, say, a 10% error rate for old devices connecting to your server, you could do all kinds of upgrades (caveat: I pulled 10% out of thin air), but in practice people are rarely willing to accept such losses in backwards compatibility for new features. TLS is a victim of its own success, in a way.

Re:End of certificates, please? (2)

SuricouRaven (1897204) | about 4 months ago | (#45694271)

So tell us, what else do you propose? CAs are useless at defending against intelligence services, but they do a pretty good job against basic internet crime. There have been instances of fraudsters getting certificates incorrectly signed through social engineering and the like, but such events are quite rare. A WoT model might work too, but it isn't going to offer much security against intelligence agencies either - it wouldn't be hard to influence various well-connected companies to endorse anything.

The problem with any sort of trust model is that it's impossible to authenticate a service without either a pre-shared secret or a trusted third party. Mathematically, it's not solvable, at least in the general sense.

Re:End of certificates, please? (2)

Zeinfeld (263942) | about 4 months ago | (#45699095)

The CA model was never designed to do more than support Internet commerce. It was designed to be secure enough to exchange credit card information.

CAs are not useless at defending against intelligence services; they are only vulnerable to being suborned by the limited number of agencies that have a plant inside them. And any defection is visible on the Internet. Hence the use of schemes such as Comodo CertSentry and Google's Certificate Transparency, which are designed to prevent covert subornation of a CA by making the results of the attack visible.

One of the many reasons security is hard is that you have to defend against all the attacks, not just one particular one that someone is obsessing about. Nobody has proposed a replacement for the CA model that works as well within the existing constraints.

Peter Eckersley proposed a scheme 'Sovereign Keys' that solves the hard problems of PKI by pretending that the system administrator will never ever make a mistake. Moxie's 'Convergence' is three years old now and we are still waiting for an actual written specification. The problem with Convergence is that it depends on a notary infrastructure that doesn't have a business model. So it is hard to see how the world of commerce is going to be keen on moving to an infrastructure that we know will have scaling issues.

The CA model isn't perfect, but it is the only part of the Internet security apparatus that fails rarely enough for the failures to still be news. McAfee fails to spot viruses on an hourly basis. There are serious security fixes for Windows, OS X and Linux every single month. Those don't make the news because they aren't news any more.

The market for the proposals that are 'stronger' is essentially the same as the constituency that use PGP every day and use Tor and keep their money in BitCoin. It is not a negligible constituency but the people who are in it have to spend about a quarter of their waking moments managing their security.

Web of Trust isn't perfect either. Choosing between the two is pointless because neither meets every need that the other meets. So instead of having the argument over which one to pick we should work on ways that let people use both in a seamless connected fashion.

Re:End of certificates, please? (0)

Anonymous Coward | about 4 months ago | (#45694433)

Certificates don't say anything other than "I am this cert". How we decide to trust a cert is the issue, not the use of certs.

Re:End of certificates, please? (4, Interesting)

mysidia (191772) | about 4 months ago | (#45694897)

Making encryption stronger is just pointless if you can fake a certificate.

We should start, by allowing certificates to have multiple signers

Instead of everyone trusting a small number of CAs --- the certificate should bear a number of signatures, and you should be able to score a certificate based on how much you trust the signers.

Re:End of certificates, please? (1)

mSparks43 (757109) | about 4 months ago | (#45696203)

I like this idea; the whole thing is so broken atm it's a joke. The only peeps who take it seriously these days are the ones without much tech knowledge.

Not sure how much it would help, but this whole "WoT" idea is definitely the way forward.

Re:End of certificates, please? (0)

Anonymous Coward | about 4 months ago | (#45696473)

You know that most CAs charge $100 or more for a signing, yeah? You'd be able to get maybe two signatures that you'd have any hope of being included in browsers (StartSSL and maybe CAcert) before needing to start paying through the nose.

If there's a multiple-signer system implemented, then there needs to be free public services to sign certificates.

Re:End of certificates, please? (2)

IamTheRealMike (537420) | about 4 months ago | (#45696475)

X.509 already supports this and complex, non-hierarchical trust schemes are frequently used.

The problem is it doesn't make any difference because you still need to be able to connect to servers that are only signed by one CA, and you have no way to know ahead of time how many signers there should be for any given host. And if all clients accept one signer, why would anyone pay for two?

This idea fails for another reason: many CAs validate your website's identity by connecting to it. If you take control of a server/domain name, or MITM it temporarily, you can probably find at least 3 CAs that validate ID in the same way and get all three to issue a bogus cert.

These are hard problems. Simple as that.

Re:End of certificates, please? (1)

mysidia (191772) | about 4 months ago | (#45697369)

The problem is it doesn't make any difference because you still need to be able to connect to servers that are only signed by one CA, and you have no way to know ahead of time how many signers there should be for any given host. And if all clients accept one signer, why would anyone pay for two?

My suggestion would be that browsers out of the box require a minimum of 4 current signers, for certificates issued after date X; and be configurable to require between 3 and 10 signers; at least one of the certifications must be High Assurance, at least one of the certifications must validate something other than the requestor merely being the e-mail address of a WHOIS contact for a domain. Certificates issued before date X would generate a warning message after date Y. And that signatures, not certificates, have an expiration date.

The certificate should contain attributes associated with the signature identifying what standards of validation the CA claims to have performed, and which fields additional validations were done on, before signing the certificate.

Suggest all certificates must contain: (1) a valid Organization field, which must be the name of an entity authorized to operate a website on behalf of the registrant of the domain; generic or common names, such as a DNS domain name, will be treated by the browser as a questionable cert. (2) A validatable OU or department field; certificates with generic or common names such as "Domain Control Validated" should be treated as doubtful. (3) At least one valid e-mail address; at a minimum, every CA signing the cert has verified this at the time of certification. (4) At least one valid telephone number additional attribute, belonging to a contact of the registrant, not the CA.

You may have one standard of verification; that simply consists of proving domain ownership by using e-mail to contacts, or setting DNS records. And that an automated machine contacted the phone number of the domain registrant, and presented a PIN code, which was entered online.

You might have a standard of verification; that consists of proving ownership of a DNSSEC secured zone, by publishing a signed record with specified content, within a specified TLD scope.

You might have a standard of verification, that requires proving the identity of the requestor, requires proving the identity of the organization, using a paper-based process; requiring submission of identification documents such as government-issued Identification and a LOA, matching the registrant of the domain name. SCOPED to CAs based on (1) Country and Province of company/domain registrant, (2) Country of contact requesting a certificate, and (3) TLD of domain name.

You might have a standard of verification, that requires a notarized attestation from a domain registrar, a notarized attestation from a company officer, with a LOA to the requestor, and a notarized attestation from the requestor. Scoped to CAs based on (1) Country, State, and Province of the domain registrant, and (2) TLD of the domain name

You might have a standard of verification, that requires an in-person meeting between representatives with a domain registrar, company officer, where a copy of the public key is validated in person, before a signature can be issued.

Etc.

You might have a standard of verification, that requires a government body the requestor interacted with, to provide the evidence in a certified manner, that the data is valid.

The list of means of verification used to verify the signature should be coded by the CA as part of the signature; together with an assurance level such as Low Assurance or High Assurance.

Also; the means of verification that a CA has been proven to be able to do, and security management systems that have been audited to compliance with ISO27001 standards as the CA capable of performing to adequate standard, should be part of the CA department's attributes, to achieve High Assurance.

In other words: only verifications that a specific department of the CA has proven to be doing in actual fact, should have an ability to apply a CA signature that the browser will interpret as a High Assurance certification.

Root CA authorities should get only scoped certificates: at a Low Assurance level they can assure any certificate, but at a High Assurance level every root CA certificate must be scoped to a specific DNS TLD, and further scoped to a specific COUNTRY of domain registrant.

Re:End of certificates, please? (1)

mysidia (191772) | about 4 months ago | (#45696347)

In this video Moxie Marlinspike discusses the problem [youtube.com] and convergence.

The trouble with Convergence, I think, is the reliance on online notaries, which become highly centralized single points of failure.

Remember; for the most part --- users will just use their web browser's default settings.

I believe for it to be highly scalable --- the web server must gather signed notary responses and provide these to the user for dissemination.

The internet standards should focus on changing the nature of SSL certificates to enable Web of Trust and multiple certifications of a certificate.

The work of getting multiple certifications needs to be loaded onto the webmasters, and perhaps some 3rd party authorities verifying trust; not solely the user's problems.

Re:End of certificates, please? (1)

GrievousMistake (880829) | about 4 months ago | (#45696685)

The trouble with Convergence, I think, is the reliance on online notaries, which become highly centralized single points of failure.

They don't, really. The great thing about notaries as opposed to CAs is that you can use as many of them as you want, and the client decides how to handle discrepancies and outages. So a browser could ship preconfigured with 8 independent notaries, and alert the user if more than four of them were down, or if any single one of them disagreed with the rest.

In the same way, CAs can still act as authoritative notaries for domains they have signed. But now if they misbehave they can be instantly delisted, and users will fall back on the standard Convergence protection.
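The client-side quorum policy described above (eight preconfigured notaries, warn if more than four are down, reject on any disagreement) is easy to sketch. A hedged illustration; the notary names, fingerprint format, and exact thresholds are made up for the example:

```python
def evaluate_notaries(responses, max_down=4):
    """Client-side policy for Convergence-style notaries.

    `responses` maps notary name -> certificate fingerprint it observed,
    or None if the notary was unreachable. Warn if too many notaries are
    down; reject if any reachable notary disagrees with the rest.
    """
    seen = [fp for fp in responses.values() if fp is not None]
    down = len(responses) - len(seen)
    if down > max_down:
        return "warn: too many notaries unreachable"
    if len(set(seen)) > 1:
        return "reject: notaries disagree (possible MITM)"
    return "accept"

# Eight hypothetical notaries, one unreachable, the rest agreeing:
resp = {"notary%d" % i: "ab:cd:ef" for i in range(7)}
resp["notary7"] = None
print(evaluate_notaries(resp))  # accept
```

The design choice that matters is that the thresholds live in the client: each user decides how much notary outage or disagreement to tolerate, which is exactly what the CA model doesn't allow.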

Re:End of certificates, please? (1)

Alarash (746254) | about 4 months ago | (#45703311)

The more immediate problem is that a valid certificate, even a "reasonably" signed one, costs quite a lot of money. The VeriSign mafia and others took that market over, and there's no 'free' (as in beer or speech) standardized alternative. I'm working on a number of small- to medium-sized projects, and if I had to buy a certificate for all of them, that would add up to a hefty sum (the projects are free of ads and subscriptions, so I make no money at all from them). I wish I had a secure, safe and viable alternative. CAcert.org isn't one, because their CA isn't in the browsers' trust lists.

Require challenge response (1)

Anonymous Coward | about 4 months ago | (#45693857)

Both HTTPS and SMTP use TLS to encrypt but usually have you send the password "in plaintext" within the encrypted channel. That's a grievous mistake. You should never, ever transmit passwords.

Re:Require challenge response (0)

Anonymous Coward | about 4 months ago | (#45693979)

Both HTTPS and SMTP use TLS to encrypt but usually have you send the password "in plaintext" within the encrypted channel. That's a grievous mistake. You should never, ever transmit passwords.

It depends on how you authenticate. HTTP Basic and HTTP Digest don't send your password in clear text.

Just don't use an HTML <form> with <input type="password"> for authentication purposes.

Re:Require challenge response (0)

Anonymous Coward | about 4 months ago | (#45694171)

HTTP Basic sends your password in clear text (base64).
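Worth spelling out: base64 is an encoding, not encryption, so anyone who sees the header recovers the password directly. A quick illustration of what's inside an HTTP Basic Authorization header (credentials are made up):

```python
import base64

# What the client sends (hypothetical credentials "alice" / "hunter2"):
header = "Basic " + base64.b64encode(b"alice:hunter2").decode()
print(header)  # Basic YWxpY2U6aHVudGVyMg==

# What any eavesdropper trivially recovers:
user, password = base64.b64decode(header.split()[1]).split(b":", 1)
print(user, password)  # b'alice' b'hunter2'
```

No key, no secret: the decode step works for anyone on the wire, which is why Basic auth is only as safe as the TLS tunnel it travels in.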

Re:Require challenge response (1)

Joce640k (829181) | about 4 months ago | (#45694439)

HTTP Basic sends your password in clear text (base64).

True, but you're supposed to make an SSL connection before you send it.

Re:Require challenge response (0)

Anonymous Coward | about 4 months ago | (#45694791)

HTTP Basic sends your password in clear text (base64).

True, but you're supposed to make an SSL connection before you send it.

Back to square one. SSL is as good as plain text because any security agency can spoof any address. If you send your password in the plain, the agency (or other criminal organization) can replay your password to your bank and transfer your money away.

Re:Require challenge response (1)

Anonymous Coward | about 4 months ago | (#45694807)

The point being that if every NSA agent and their BEAST can read your SSL stream, they can read your password.

The correct way to do things would be widespread implementation of CRAM-MD5 or better yet, its successor SCRAM [ietf.org]. These authentication algorithms do not require the host to hold a plaintext version of your password nor do they require you to send a plaintext version of your password.
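
For the curious, the client side of CRAM-MD5 (RFC 2195) is small enough to sketch: the password keys an HMAC over the server's one-time challenge, and the password itself never crosses the wire. The username, password, and challenge below are illustrative values, not from any real exchange:

```python
import base64
import hashlib
import hmac

def cram_md5_response(username: str, password: str, b64_challenge: str) -> str:
    """Build the client reply: base64("username " + hex(HMAC-MD5(password, challenge)))."""
    challenge = base64.b64decode(b64_challenge)
    digest = hmac.new(password.encode(), challenge, hashlib.md5).hexdigest()
    return base64.b64encode(f"{username} {digest}".encode()).decode()
```

SCRAM keeps the same shape but adds salting, iteration counts, and channel binding, which is why the parent calls it the successor.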

Re:Require challenge response (1)

gweihir (88907) | about 4 months ago | (#45694185)

That is BS. For example, it is perfectly fine to transmit passwords through SSH if the server authentication checked out.

Re:Require challenge response (1)

WuphonsReach (684551) | about 4 months ago | (#45701317)

It may be okay because the tunnel is encrypted with SSL or SSH, but it's still not the best design. Better designs never send the password, or even the password hash, over the wire.

At which point, you no longer have to care whether or not the tunnel is encrypted.

Re:Require challenge response (1)

gweihir (88907) | about 4 months ago | (#45704123)

There is no general "best design". For some cases this is perfectly acceptable and there is absolutely no general requirement to "never send the password". It depends on the scenario and in some it is perfectly fine and secure and the best solution.

Using generalities like yours is a sure way to end up with an insecure design. That is about the only generality valid in the security field.

Re:Require challenge response (0)

Anonymous Coward | about 4 months ago | (#45694289)

Your argument seems to be against passwords as an authentication method, since whether it is plaintext, encoded, or hashed, the risk is the same. All three are subject to replay/pass-the-hash. The only alternative is multifactor auth using one-use tokens (which most people consider too costly to implement or too complex/difficult for the average user).

Re:Require challenge response (0)

Anonymous Coward | about 4 months ago | (#45694349)

Your argument seems to be against passwords as an authentication method, since whether it is plaintext, encoded, or hashed, the risk is the same. All three are subject to replay/pass-the-hash. The only alternative is multifactor auth using one-use tokens (which most people consider too costly to implement or too complex/difficult for the average user).

No, not if you use a challenge-response mechanism with the password. Then an eavesdropper can only re-use a response if the server happens to re-use a challenge (which should take centuries if the server is properly using large random challenges). The risk to challenge-response mechanisms is active man-in-the-middle interference, which digital signatures can prevent.

You can also use more sophisticated schemes such as SRP, which allow the user to prove they know the password and the server to prove it knows a 'verifier' linked to that password, without an eavesdropper learning anything they can use in future.
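
The first paragraph's mechanism can be sketched in a few lines. This is plain HMAC challenge-response, not SRP (which adds the verifier math on top), and the shared secret here would in practice be derived from the password rather than stored raw:

```python
import hashlib
import hmac
import os

def make_challenge() -> bytes:
    # 256-bit random challenge: the odds the server ever repeats one are
    # negligible, so a recorded response is useless against a fresh challenge.
    return os.urandom(32)

def respond(shared_secret: bytes, challenge: bytes) -> bytes:
    return hmac.new(shared_secret, challenge, hashlib.sha256).digest()

def verify(shared_secret: bytes, challenge: bytes, response: bytes) -> bool:
    expected = respond(shared_secret, challenge)
    return hmac.compare_digest(expected, response)  # constant-time comparison
```

What this alone does not stop is an active man-in-the-middle relaying the exchange live, which is the residual risk the parent comment notes.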

Not directly related but... (1)

Anonymous Coward | about 4 months ago | (#45693865)

http://blog.djm.net.au/2013/11/chacha20-and-poly1305-in-openssh.html

still relevant information. :)

Back To Basics (-1)

Anonymous Coward | about 4 months ago | (#45693901)

Generalissimo Excellency Alexander understands the concept of "death."

Therefore, 'death' should visit Generalissimo Excellency Alexander in just a few minutes to confirm.

Please wait for confirm.

We did it wrong, let's do it wronger still. (2, Interesting)

VortexCortex (1117377) | about 4 months ago | (#45693915)

Consider this scenario: You're about to connect to a resource, but the service says you need to authenticate. So, a browser native login box pops up. You enter your credentials and those are hashed with the session nonce to key the symmetric ciphers, the connection then commences since the server already has a shared secret with you. It's not like HTTP Auth doesn't exist, it's just that TLS is ignorant of HTTP. If you have a secret GUID on the system, you can hash the domain name of the server with it to produce a unique user ID for each server. Same goes for a master password: HMAC( MPW, Domain + Salt ) = Session generator. HMAC( gen, nonce ) = session key. There, now you don't have to create a new account everywhere you go. One login for everywhere, and the sites can associate a nickname with the UID code if they want. Change your salt and you change all logins everywhere without having to change your password. Only time public key crypto is needed is when you exchange the UID and generator, for everything else use symmetric encryption with the pre shared secret. The window is small enough that PKI is moot.
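
Read literally, the derivation above is just two chained HMACs. A sketch of that reading, with hypothetical byte-string arguments (this is the commenter's scheme, not any deployed protocol):

```python
import hashlib
import hmac

def site_generator(master_pw: bytes, domain: bytes, salt: bytes) -> bytes:
    # HMAC(MPW, Domain + Salt): one master password yields a distinct
    # long-term secret per site; changing the salt rotates all of them.
    return hmac.new(master_pw, domain + salt, hashlib.sha256).digest()

def session_key(generator: bytes, nonce: bytes) -> bytes:
    # HMAC(gen, nonce): a fresh symmetric key for each session.
    return hmac.new(generator, nonce, hashlib.sha256).digest()
```

Replies below probe the open questions: how the nonce is negotiated, and how the server is authenticated at all.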

Besides all the browsers explicitly trust cert roots in known enemy states, so PKI is moot anyway. FF > Preferences > Advanced > Certificates > View Certificates > Hong Kong Post... They can create a cert for Google.com without Google's permission and if they're a MITM, you get a big green secure bar and everything. No one checks the cert chain every time (shouldn't have to), and even if they did the govs can compel the CAs to generate certs under secret orders, or just infect them with a zero-day and do it themselves. All your retard is belong to IETF.

Oh, hey, while you brilliant bastards are at it, why don't you give us salted hashes in the HTML tags that pull in external resources? This way the encrypted page can specify validation codes for the external cache-able content and we can be sure it's not tampered with. Let's end that whole "mixed content" warning bullshit by making HTTP and TLS minimally aware of each other. <img src="..." hash="SHA2/base64: SUVURiBpcyBhbiBveHltb3Jvbi4K" salt="..."> Wow! How fucking difficult is that! Oh, snap! There must be some kind of alien super intelligence infecting my brain, hide your kids, hide your whitepapers!
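
Computing the digest such a hash attribute would carry is a one-liner; browsers did eventually standardize essentially this idea as Subresource Integrity, with an integrity attribute holding a base64 SHA-2 digest. A sketch (the "sha256-" prefix follows the SRI convention, not anything in the comment):

```python
import base64
import hashlib

def integrity_digest(resource: bytes) -> str:
    # Base64 of SHA-256 over the exact bytes of the fetched resource;
    # any tampering with the cached external file changes the value.
    return "sha256-" + base64.b64encode(hashlib.sha256(resource).digest()).decode()
```

The page served over TLS pins the digest, so even plain-HTTP or cached subresources can be verified.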

Oh, That's Right. TRANSPORT Layer security. Ah, gotcha. Sorry, wouldn't want to make you all look like a bunch of morons by suggesting that the layers are just arbitrary lines you don't let interactions across for no damn reason except to further ensure nothing on the net is fucking secure. You know, not like it didn't take me all of one session thinking about this while taking a shit to figure out how you ruined everything, IETF. Internet Eradication Terrorist Fucks? That's what it means, doesn't it?

Earth: The longest running case study in how not to advance as a space faring race.

Re:We did it wrong, let's do it wronger still. (0)

Anonymous Coward | about 4 months ago | (#45693961)

What a load of gibberish. Back to NSA with you, lad!

Re:We did it wrong, let's do it wronger still. (0)

Anonymous Coward | about 4 months ago | (#45693993)

I think VortexCortex is working for Earth, and not for the NSA. His arguments here are logical, even if they're not exactly pragmatic. Anyone who calls him stupid or treacherous is either a fool or a liar themselves.
                        —PublicBore

Re:We did it wrong, let's do it wronger still. (0)

Anonymous Coward | about 4 months ago | (#45694043)

"alien super intelligence infecting my brain" - I suspect not.

Re:We did it wrong, let's do it wronger still. (4, Interesting)

mattpalmer1086 (707360) | about 4 months ago | (#45694097)

Well, I can't really make out what you're proposing here.

As far as I can see, the client side has three secrets to maintain - the GUID, master password and salt. If the GUID is unique to a computer, your accounts only work from a single machine, and if you lose the GUID then you lose access to all your accounts. Correct?

The nonce is a "number used once" - i.e. randomly generated for each session in a cryptographically sound way.... so how do the server and client negotiate the nonce for each session? Does one pick it and encrypt it to send to the other? Do they both participate in picking it? Do they use something like Diffie-Hellman to arrive at the value?

I really don't understand your point about changing the salt equals changing your logins without affecting your password. Do you mean if I wanted to lose access to all my accounts everywhere and begin again, I wouldn't have to change my password?

And... how do you know you're talking to the right server in the first place? I don't see any server authentication at all in your proposal.

That's enough for now. The one thing I've learned from studying protocols is that it's really, really hard to get right. Not because the people creating them are dumb or have malicious intent. It may well be time to start creating a new protocol to replace TLS eventually, using what we now know about trust, authenticated encryption, protecting the handshake and side channel attacks. And possibly using some new techniques in there, like identity-based encryption...

Re:We did it wrong, let's do it wronger still. (0)

Anonymous Coward | about 4 months ago | (#45694161)

I applaud you, Matt Palmer! You address many of VortexCortex's ambiguous arguments. They're important issues that need to be explored. (GUID multiplicity; elaborating the nonce; the login/password dichotomy in the context of omnipresent access.) But I disagree with you when you state that VortexCortex has made a proposal. To my eye he has not presented himself as a proponent of anything at all. Everything else you said, though, is excellent. You help enlighten us.
                        —PublicBore

Re:We did it wrong, let's do it wronger still. (1)

SuricouRaven (1897204) | about 4 months ago | (#45694285)

What he proposes, best I can see, is moving website login away from strictly the domain of HTTP where it is separate from TLS, and instead making it part of the cryptographic authentication. So when you log into slashdot, your password acts as a shared secret - even if an attacker is able to intercept and modify all communications, without the shared secret they couldn't generate the appropriate shared key and so couldn't decrypt or impersonate.

Obvious weaknesses that I see:
- Completely reworking the TLS/HTTP authentication stack with significant architectural changes in browser, server and website scripts.
- All websites require an account to authenticate, leading to massive proliferation of passwords and inevitable reuse.
- Provides security only after signup over secure channel - the initial account creation process would still be very vulnerable to impersonation attacks.

Re:We did it wrong, let's do it wronger still. (1)

dkf (304284) | about 4 months ago | (#45694805)

What he proposes, best I can see, is moving website login away from strictly the domain of HTTP where it is separate from TLS, and instead making it part of the cryptographic authentication.

On one level, that's trivial to do: turn on requiring client certificates in the TLS negotiation. The hard part is that users (people, really) really hate being exposed to having to know the details of security. This was discovered in depth and at length during the fad for Grid Computing ten years ago: the biggest hurdle by far was setting up users with proper identities. You could theoretically transfer that responsibility to a separate service, but then that creates a point where it is much easier for governments to subvert things.
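
Mechanically it really is close to one flag. A server-side sketch using Python's ssl module (the CA bundle path is hypothetical; issuing each user a certificate, the hard part described above, happens long before this code runs):

```python
import ssl

def make_mutual_tls_context(client_ca_file=None):
    # A server context that refuses the TLS handshake unless the client
    # presents a certificate signed by a CA we trust.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.verify_mode = ssl.CERT_REQUIRED
    if client_ca_file:
        ctx.load_verify_locations(cafile=client_ca_file)
    return ctx
```

With CERT_REQUIRED set, authentication moves into the handshake itself instead of an application-layer password exchange.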

Re:We did it wrong, let's do it wronger still. (1)

naasking (94116) | about 4 months ago | (#45695141)

ID-based encryption is a terrible idea. There is no such thing as a public, unique, non-cryptographic name. See Zooko's triangle.

Re:We did it wrong, let's do it wronger still. (1)

mattpalmer1086 (707360) | about 4 months ago | (#45695515)

I'm no expert on id-based encryption, although I can just about understand how it works. It has some attractive properties as well as some serious downsides.

Pros:
  * An encryptor can pick a public key at random for a recipient known to the decrypting authority.
  * No prior arrangement is required except for knowledge of the public parameters of the authority, and a recipient to send a message to.

Cons:
  * The private key of the recipient can be calculated at any time by the decrypting authority.
  * The recipient must authenticate to the decrypting authority to receive the private key for the sender-chosen public key.
  * All messages in the past and in the future can always be decrypted by the decrypting authority at any time.
  * You have to trust this authority absolutely.

The fact that the private key can be calculated from the public key and the master secrets is actually a pro as well as a con. This is what lets the sender choose a public key of their choosing with no prior arrangement.

I've seen this work quite well in one setting - payment messages from secure pin entry devices to the payment processor. In this case, the payment processor can decrypt all payment messages at any time, but each message is sent using a different key for each transaction, chosen by the low power pin entry device, and requiring no interaction between them and the processor.

On reflection, it's probably not a good candidate for inclusion into a protocol that would replace TLS. I can't really see how it provides anything useful in that setting. Still, it was just an example of some of the cool ideas being realised in more modern cryptography :)

Re:We did it wrong, let's do it wronger still. (1)

Zeinfeld (263942) | about 4 months ago | (#45699583)

The problem with ID based encryption is revocation. If someone loses their key the best you can do is to tell people that it is bad. And any mechanism that could tell you the key status could be used for key binding.

So the only applications where it really works is in low level device type schemes where the crypto is installed during manufacture.

Re:We did it wrong, let's do it wronger still. (1)

Anonymous Coward | about 4 months ago | (#45697435)

One really nice thing to read about protocol design is "Designing an Authentication System: a Dialogue in Four Scenes"

http://web.mit.edu/Kerberos/dialogue.html

It's about how Kerberos was invented, and it's done quite nicely as a play, of all things.

Elliptic curves (0)

Anonymous Coward | about 4 months ago | (#45694011)

Remember, NSA has pushed EC technology as something YOU should use (Suite B), as opposed to what THEY use (suite A).

I understand most of the acronyms but (0)

Anonymous Coward | about 4 months ago | (#45694409)

Who TF is IETF

Re:I understand most of the acronyms but (1)

Joce640k (829181) | about 4 months ago | (#45694451)

I typed "IETF" into google and the very first hit was the answer.

If you don't know how to use google then I doubt you have anything useful to add here. Please move along to the next story.

Consequence of http2 tls deadlock (0)

Anonymous Coward | about 4 months ago | (#45696893)

This is quite funny

1. Google, Facebook and other Silicon Valley giants start massive-scale people-monitoring that would shame Big Brother itself (aka data-mining in IPO speak)
2. They clamor that privacy does not exist, to make regulators go away
3. Spooks notice, applaud, and proceed to raid the resulting intel trove
4. A lone person with some conscience left denounces the spooks to journalists (big users of web trackers to try to scrounge enough money to survive while paper editions sink)
5. Silicon Valley giants are enraged by the reminder that they're not above states, and are scared to death of the potential mass loss of revenue if citizens and foreign states started rejecting their data mining. They discover they care about privacy after all, as long as it does not involve their own activities
6. Google frantically pushes a new HTTP revision which is TLS-only to the IETF ('cos Google is nerd-kingdom, tech is the solution to all social problems, and anything is better than envisaging that the monitoring may have been wrong in the first place)
7. Months of debate in the httpbis working group. Not everyone has data-mining to protect, and a lot of people think CAs and certificate handling in browsers like Chrome suck greatly (see the httpbis mailing list archives). Besides, the proposal would kill proxies, and lots of people rely on them
8. The problem is punted to a new IETF WG, tasked with finding out whether other protocol communities think TLS as it exists sucks too. Hopefully it will help dilute the TLS opposition in httpbis?
