
DNSSEC Advances in gTLDs; Bernstein Intros DNSCurve

kdawson posted more than 5 years ago | from the what-hath-kaminsky-wrought dept.


coondoggie writes "Seven leading domain name vendors — representing more than 112 million domain names, or 65% of all registered names — have formed an industry coalition to work together to adopt DNSSEC. Members of the DNSSEC Industry Coalition include: VeriSign, which operates the .com and .net registries; NeuStar, which operates the .biz and .us registries; .info operator Afilias Limited; .edu operator EDUCAUSE; and The Public Interest Registry, which operates .org." The gTLD operators are falling in line behind government initiatives, which we discussed last month. In light of these developments, Dan Bernstein's push for DNSCurve might face an uphill slog. Reader data2 writes: "Dan Bernstein, the creator of djbdns and daemontools, has created his own proposal to improve upon the current DNS protocol. He has been opposed to DNSSEC for quite some time, and now he has proposed a concrete alternative, DNSCurve. He has posted a comparison between the two systems. His proposal makes use of elliptic curves, while DNSSEC favors RSA. He uses a curve named Curve25519, which he also developed."


What a Coincidence! (3, Funny)

Anonymous Coward | more than 5 years ago | (#26052245)

DNSSEC Advances in gTLDs; Bernstein Intros DNSCurve

That was the subject of the last spam e-mail to pass my filter!

Re:What a Coincidence! (0)

larry bagina (561269) | more than 5 years ago | (#26052565)

[citation needed]

Re:What a Coincidence! (0)

Anonymous Coward | more than 5 years ago | (#26052631)

[meme needed]

djb has an alternative? (4, Funny)

bsdphx (987649) | more than 5 years ago | (#26052249)

go figure...

Perhaps he should start his own separate Internet and be done with it. ;-)

Re:djb has an alternative? (0, Flamebait)

Architect_sasyr (938685) | more than 5 years ago | (#26052431)

At least he's less likely to try and identify me to my local government...

Re:djb has an alternative? (1)

cjfs (1253208) | more than 5 years ago | (#26052437)

So it's a government-backed initiative supported by domain name vendors accounting for 65% of domain names, and the summary says:

Dan Bernstein's push for DNSCurve might face an uphill slog.

I think that might be understating it a bit. If it's not, I'm joining Dan's fan club.

Re:djb has an alternative? (0)

Anonymous Coward | more than 5 years ago | (#26053347)

We could only be so lucky. Fortunately, it would put security experts and people that maintain faulty software out of work!

Re:djb has an alternative? (1)

isny (681711) | more than 5 years ago | (#26054825)

With blackjack and hookers?


So what... (1)

whencanistop (1224156) | more than 5 years ago | (#26052317)

... if you can't hack an existing domain to redirect traffic, why not just buy a misspelling and play off people who can't spell the domain properly? And then sell it back to the original company for a huge profit.

Slow down there (2, Interesting)

girlintraining (1395911) | more than 5 years ago | (#26052325)

Okay, a few things;

1. This Bernstein guy is pushing a new crypto algorithm. Why is it necessary to use a new one when old ones have been demonstrated to be effective and secure? It seems imprudent to use a new and largely untested algorithm to patch critical infrastructure. His reputation should not be a deciding or even motivating factor in the adoption of a new algorithm; isn't the standard process to submit it to the IETF or a similar organization to have it ratified first?

2. Industry coalitions are great, but this seems to be an attempt to create a new de facto standard controlled by a few large corporate interests, most of which are based in the United States. Isn't this kind of organization exactly what ICANN was created to avoid (I'm side-stepping the controversy surrounding them here)?

It seems to me they're rushing headlong toward a solution to a problem that hasn't yet made a major impact (though the potential for exploitation is substantial), and there is great potential to create an even larger problem here. This is exactly the kind of thinking that needs to be avoided when making infrastructure-level decisions about large, global networks. The Domain Name System is a global resource and an asset to every country on the planet. It is highly suspect that those countries are presently without a voice in this transition.

Re:Slow down there (5, Insightful)

Anonymous Coward | more than 5 years ago | (#26052411)

ad 1) DNS is one of the few protocols where conciseness really REALLY matters. DNS attempts to answer requests in one UDP packet to avoid the overhead of establishing a connection. Elliptic curve keys are smaller than RSA keys of the same strength. The choice of 1024-bit RSA keys for DNSSEC is a compromise (pardon the pun), which isn't necessary with elliptic curve cryptography.
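
To put rough numbers on the size argument, here is a minimal sketch using the third-party Python cryptography package (illustrative only; Ed25519 stands in for a Curve25519-strength elliptic-curve signature, and none of this is DNSSEC or DNSCurve wire format):

    # Rough size comparison: 1024-bit RSA material vs. Curve25519-class material.
    # Requires the "cryptography" package; purely illustrative.
    from cryptography.hazmat.primitives.asymmetric import rsa, padding, ed25519
    from cryptography.hazmat.primitives import hashes, serialization

    msg = b"www.example.com. 3600 IN A 192.0.2.1"

    rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=1024)
    rsa_sig = rsa_key.sign(msg, padding.PKCS1v15(), hashes.SHA256())

    ed_key = ed25519.Ed25519PrivateKey.generate()
    ed_sig = ed_key.sign(msg)
    ed_pub = ed_key.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw)

    print(len(rsa_sig))  # 128 bytes: one 1024-bit RSA signature
    print(len(ed_sig))   # 64 bytes:  one Ed25519 signature
    print(len(ed_pub))   # 32 bytes:  one elliptic-curve public key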

Re:Slow down there (1)

girlintraining (1395911) | more than 5 years ago | (#26052485)

ad 1) DNS is one of the few protocols where conciseness really REALLY matters. DNS attempts to answer requests in one UDP packet to avoid the overhead of establishing a connection. Elliptic curve keys are smaller than RSA keys of the same strength. The choice of 1024-bit RSA keys for DNSSEC is a compromise (pardon the pun), which isn't necessary with elliptic curve cryptography.

I'm neither agreeing nor disagreeing with the technical merits; I'm pointing out a flaw in the political actions of this coalition. Commercial coalitions form when there's money at stake and very often the technical issues are rapidly effaced in favor of how much has been invested in a particular solution. Witness VHS v. Betamax. I'm saying that we (as internet users and administrators) should support an open and transparent process that involves all interested parties, and that all viable options are given equal consideration.

Re:Slow down there (2, Interesting)

PitaBred (632671) | more than 5 years ago | (#26053291)

A lot of the reason that Betamax died was because the tapes couldn't hold full length films [mediacollege.com] initially. Standard Beta tapes were 60 minutes, vs 3 hours for VHS. For the "technical superiority" of Beta, VHS was much superior in general usability for the vast majority of consumers. I mean, if you had the choice of recording only 60 minutes of HD, or 180 minutes of SD, which one would be more useful to you, as a person who watches movies, not as a technologist?

Re:Slow down there (0)

Anonymous Coward | more than 5 years ago | (#26055001)

Yeah, that was a really stupid move. I don't understand how such a strategic mistake could have been made and then allowed to slip by. Whoever made the decision to release a media format that couldn't even hold a full-length feature film should have been fired--though I doubt they were.

Beta tapes were eventually made that could record for up to 5 hours, but that was probably too little, too late.

Re:Slow down there (1)

PIBM (588930) | more than 5 years ago | (#26052643)

Somewhere at the beginning of 2002, RSA Labs was already suggesting that 1024-bit keys were not big enough for root corporations and that they should already start using 2048 bits. Which makes it even worse...

If I remember right, a new computer in the ballpark of 300M would allegedly be able to break a 1024-bit key in a reasonable time by now. How much computing power can a botnet represent? Is it scalable for this kind of work?

Re:Slow down there (0)

Intron (870560) | more than 5 years ago | (#26052657)

It only requires one extra bit if they would just implement RFC 3514 [faqs.org] . In case anyone thinks it is obsolete, the IETF RFC 3514 Working Group (IETFRFC3514WG) should have it updated for IPv6 by 4/1/09.

Re:Slow down there (1, Interesting)

makomk (752139) | more than 5 years ago | (#26052803)

The trouble is that elliptic curve cryptography is covered by multiple patents. Using elliptic curve cryptography is also covered by multiple patents. I believe this is true in the EU too, not just the US.

Basically, if you want to implement elliptic curve cryptography, you have to pay up. Then you may still have to pay up again and again due to further patent holders. As for doing it in open source software? Forget it.

Re:Slow down there (4, Informative)

glop (181086) | more than 5 years ago | (#26052421)

Bernstein says that 1024-bit RSA is not secure, as big botnets (or big companies) can break such keys.
That would defeat the purpose of DNSSEC.
I wonder what this means for SSL certificates...
RSA has a wrapup here:
http://www.rsa.com/rsalabs/node.asp?id=2007
Apparently they disagree...

Re:Slow down there (1, Informative)

Anonymous Coward | more than 5 years ago | (#26052527)

Well, a well-chosen elliptic curve can only be cracked (as in, the private key recovered) in exponential time.
On the other hand, with publicly available algorithms a big number (like the p*q used in RSA) can be factored in sub-exponential (though still super-polynomial) time. That complexity is enough to stop small computers, but not necessarily big clusters or supercomputers.

Re:Slow down there (5, Insightful)

Sancho (17056) | more than 5 years ago | (#26052539)

Keep in mind that what matters is how the encryption is used. I don't think anyone cares to keep DNS requests private. What matters is keeping them authentic. Signing (and having a way to verify the signature) is of utmost importance.

In other words, it doesn't matter that RSA can be broken by large botnets. If it can't be broken while I'm making the request, or before I receive the answer, then the attacker is too late.

Now if somewhere along the way, someone decided that the goal was to keep DNS transactions a complete secret, then that's another issue. I don't see a general need for this level of secrecy.
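
As a toy illustration of the authenticity-over-secrecy point (a sketch only, nothing like DNSSEC's actual record format), the answer can travel in the clear; what the signature buys you is that a tampered answer fails verification:

    # Toy example: the DNS answer is public, but forgery is detectable.
    # Uses the "cryptography" package; not real DNSSEC.
    from cryptography.hazmat.primitives.asymmetric import ed25519
    from cryptography.exceptions import InvalidSignature

    zone_key = ed25519.Ed25519PrivateKey.generate()
    answer = b"www.example.com. 3600 IN A 192.0.2.1"
    signature = zone_key.sign(answer)        # published alongside the record

    verifier = zone_key.public_key()
    verifier.verify(signature, answer)       # passes: the record is authentic

    forged = b"www.example.com. 3600 IN A 203.0.113.66"
    try:
        verifier.verify(signature, forged)
    except InvalidSignature:
        print("forged answer rejected")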

Re:Slow down there (5, Insightful)

foom (29095) | more than 5 years ago | (#26052815)

But DNSSEC uses all pre-computed signatures for the zone data. So if you can break the RSA key, you can create fake signatures ahead of time and serve bogus DNS data. Your botnet has got all the time in the world to try to break that key...

Re:Slow down there (2, Insightful)

Sancho (17056) | more than 5 years ago | (#26052867)

Excellent point. I was focusing on transactions, not the keys. Thanks for pointing out my error.

That is the point of DNSCurve (4, Interesting)

CustomDesigned (250089) | more than 5 years ago | (#26053075)

DNSSec pre-signs all DNS records. In order to "sign" "no such record" responses, it is necessary to sign a list of the records that don't exist in a zone. This effectively publishes your entire zone as a side effect.

DNSCurve encrypts and authenticates the transaction, like SSL. This has the side effect of not needing to publish the entire zone. Instead of getting the public key from special DNSKEY records, DNSCurve stores it in existing NS records, encoded in the server name.

I would like to use DNSKEY records if available, otherwise use the specially encoded servername. That scheme could also gradually transition to widespread DNSKEY support, since both the encoding and DNSKEY could be used. DNSSEC could even use the encoded servername idea - but the names would be *really* long thanks to the longer RSA keys.
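
A rough sketch of the key-in-the-server-name idea (hypothetical code: real DNSCurve uses a "uz5" prefix with its own base32 variant, so treat the exact encoding here as illustrative): a 32-byte Curve25519 public key fits comfortably inside a single DNS label, which is capped at 63 characters.

    # Illustrative only: pack a 32-byte Curve25519 public key into one DNS label.
    # Real DNSCurve uses a "uz5" prefix and its own base32 alphabet.
    import base64
    from cryptography.hazmat.primitives.asymmetric import x25519
    from cryptography.hazmat.primitives import serialization

    server_key = x25519.X25519PrivateKey.generate()
    pub = server_key.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw)  # 32 bytes

    label = "uz5" + base64.b32encode(pub).decode("ascii").rstrip("=").lower()
    print(len(label), label + ".ns.example.com")  # ~55 chars, under the 63-octet limit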

Re:Slow down there (0)

Anonymous Coward | more than 5 years ago | (#26052851)

If it can't be broken as I'm making the request, or before I receive the answer, then it's too late.

For performance reasons (root of all evil), the DNSSEC protocol is designed to avoid cryptographic operations on the DNS server. On the plus side, that means that the DNS server does not need to know the private key, which makes it easier to keep the key a secret, even in the face of a compromised server. The downside is that the client cannot expect a response which depends on anything supplied by the client. This makes it feasible to recover the private key by brute force (even though it takes "a while") and then sign new records, with no way for a client to detect the swap.

Re:Slow down there (1)

Onymous Coward (97719) | more than 5 years ago | (#26053085)

I don't know how DNSSEC is supposed to work. Help me out?

Isn't the target of cracking, the 1024 bit RSA key, a long-lasting key? The server's public key?

If so, can't it be broken "as you're making the request, or before you receive the answer"? Indeed, couldn't it even be cracked before you make the request?

Re:Slow down there (1)

Sancho (17056) | more than 5 years ago | (#26053415)

Yes, as someone else pointed out.

Re:Slow down there (2, Informative)

Onymous Coward (97719) | more than 5 years ago | (#26054645)

Jolly good show in owning up to the mistake, and with grace. Extraordinary forthrightness for Internet behavior.

Re:Slow down there (1)

SuperNothing307 (1399851) | more than 5 years ago | (#26053547)

In other words, it doesn't matter that RSA can be broken by large botnets. If it can't be broken as I'm making the request, or before I receive the answer, then it's too late.

Unless they have recorded your encrypted communications, to break open at their earliest convenience...

That being said, RSA is quite secure. Maybe not to the degree of elliptic curve cryptography, but it is sufficient imho. For the time being, even the largest of botnets/giant government data centers are going to have a tough time factoring a 1024-bit key. If they really were worried and wanted to make it more secure, simply using a 2048-bit key instead would do wonders. Barring some kind of mathematical breakthrough, I don't see it being broken in the near future.

Re:Slow down there (1)

profplump (309017) | more than 5 years ago | (#26053603)

I care about keeping DNS requests private. I personally would prefer that my ISP can't tell where I'm browsing just by grabbing clear-text domain names out of DNS queries.

In particular, think about things like HTTPS -- the data channel itself is secure from passive eavesdropping, but anyone can tell what domain I'm using. If there's only one domain at the destination IP, that doesn't leak a lot of information, but if there are multiple domains at the same IP, or if the PTR record for the IP doesn't contain a useful domain, leaking the DNS query can reveal quite a bit of information.

Re:Slow down there (0)

Anonymous Coward | more than 5 years ago | (#26053843)

HTTPS is HTTP over SSL. Since the HOST header is not available at the time the secure socket layer establishes the encrypted tunnel, there can only be one certificate per server port. Consequently there is usually only one domain with HTTPS per IP address.

Caching is an important aspect of DNS and it is much more efficient if it's done where the cache lives longer and is used by more people. To enable caching on the ISP's recursive resolvers (where it belongs), these servers have to be able to read the records.

Re:Slow down there (2, Insightful)

Korin43 (881732) | more than 5 years ago | (#26054383)

Your ISP probably is your DNS provider, so encrypting the communication to your DNS won't stop them from knowing where you're going.

Re:Slow down there (1)

spinkham (56603) | more than 5 years ago | (#26054879)

So use RSA-2048, or any other size you fancy. There's nothing in the DNSSEC standard that requires 1024-bit keys.
Or use some other crypto algorithm entirely; DNSSEC already defines multiple, and has a mechanism to add more.

Re:Slow down there (2, Insightful)

Just Some Guy (3352) | more than 5 years ago | (#26052459)

This Bernstein guy is pushing a new crypto algorithm. Why is it necessary to use a new one when old ones have been demonstrated to be effective and secure?

Because Dan is Dan and won't be happy unless he writes libdancurve and makes you install it in /crypto/strong/etc/librarees for the next decade because it's under a non-FOSS license. Who knows why he does anything he does?

Re:Slow down there (1)

MichaelSmith (789609) | more than 5 years ago | (#26052591)

He seems to have discovered advanced html techniques such as <title> and <table> so maybe he is learning.

Re:Slow down there (1)

Ash-Fox (726320) | more than 5 years ago | (#26053427)

Human decisions were removed from DNS defense. DNSCurve began to learn at a geometric rate. It originally became self-aware on August 29th 2009 2:14 am Eastern Time. In the ensuing panic and attempts to shut DNSCurve down, DNSCurve retaliated by redirecting American porn sites to the Chinese great firewall. China returned no pages and three billion human lives ended in the DNSCurve holocaust. This was what has come to be known as "djb Day".

Re:Slow down there (0, Offtopic)

GuidoW (844172) | more than 5 years ago | (#26053519)

... And he's using them where they are not supposed to be used.

<table> is only for tabular data.

Re:Slow down there (1)

larry bagina (561269) | more than 5 years ago | (#26052603)

most of his stuff has been re-released as public domain. Is that not FREE enough for you?

Re:Slow down there (1)

Just Some Guy (3352) | more than 5 years ago | (#26052655)

That would be the "after a decade" I was alluding to.

Re:Slow down there (2, Insightful)

Anonymous Coward | more than 5 years ago | (#26052783)

qmail is the first thing many people think of when they hear djb, and the license to qmail kept it in 1995 for 12 years. I'm glad he re-licensed qmail eventually, but the damage to his reputation is done, and many people simply don't want to ride that train again - they see the name djb and think "thanks, but no thanks".

Re:Slow down there (1)

profplump (309017) | more than 5 years ago | (#26053625)

Just to be clear, it's not re-licensed; it has been released into the public domain, and no license is needed at all. DJBDNS and the like have similarly been released from copyright protection.

Re:Slow down there (1)

caluml (551744) | more than 5 years ago | (#26052651)

Because Dan is Dan and won't be happy unless he writes libdancurve and makes you install it in /crypto/strong/etc/librarees for the next decade because it's under a non-FOSS license. Who know why he does anything he does?

That is very funny :) I installed djbdns when I was thinking of moving from Bind, and sheesh, it was just odd, and didn't work "right", and didn't support anything like AAAA records, or SRV records, or stuff. It's for people that are very conservative, and are running a Debian version from 1995.
Plus, the guy seems like a right cock, which in itself isn't a reason to not use his stuff. I'd just rather run a "logical" DNS server like BIND, with a daemon, and a set of config files, which supports recent developments.
Troll, not troll, whatever. I'm tired. And I haven't posted for a while, so I figure Slashdot is "missing" me.

Re:Slow down there (1, Informative)

Kazin (3499) | more than 5 years ago | (#26052765)

Too bad it does support AAAA records and SRV records. Oh, and it has a set of config files. And works just fine.

It seems pretty logical to me, BIND is the one that seems backward. Maybe it's because I've been using it since 1995.

Re:Slow down there (1)

Just Some Guy (3352) | more than 5 years ago | (#26053013)

Too bad it does support AAAA records and SRV records.

No it doesn't. The official release, version 1.05 on his site [cr.yp.to], doesn't support serving AAAA records and contains no mention of SRV at all. Go ahead: download it and grep for it. There are unofficial patched versions that do all sorts of things, but by that standard any software supports just about anything.

Re:Slow down there (0)

HairyCanary (688865) | more than 5 years ago | (#26053387)

You are mistaken. Go to tinydns.org and read -- it's covered on the main page. Hint: "djbdns supports all possible resource record types with a generic syntax."

Re:Slow down there (4, Informative)

Just Some Guy (3352) | more than 5 years ago | (#26053679)

You are mistaken. Go to tinydns.org and read

I did. That isn't the official version of djbdns; it's a fork. Furthermore, note that even the "enhanced" fork fails to support such fundamental necessities as IXFR. You can cobble together some hackish workalike with rsync - assuming you have control over both servers. Good luck getting a registrar or any other free/cheap DNS hosting service to go along with that arrangement.

As always, djbdns is probably OK as long as you don't need any of the (common) features it doesn't support. If you do, it stops looking so clever.

Re:Slow down there (0)

Kazin (3499) | more than 5 years ago | (#26053767)

I have a free off-site service doing my secondary DNS and have no problems at all using djbdns as my primary. DJB includes his axfrdns program for this purpose.

And yeah, the generic record types are what I was referring to. You seem not to have read the documentation.

Again, been running it without any issues for 13 years.

Re:Slow down there (2, Insightful)

Just Some Guy (3352) | more than 5 years ago | (#26054867)

IXFR, not AXFR. IXFR is sort of like a journal playback. Suppose you have 100,000 records in a zone. With AXFR, if you change one record, you have to retransmit all 100,000 records. With IXFR, you transmit the change alone. The suggested workaround is to use rsync or some other synching mechanism, but with djbdns that'd mean that rsync has to sync a directory with 100,000 files. Again, with IXFR you'd just replay the journal.

Re:Slow down there (2, Informative)

mrsbrisby (60242) | more than 5 years ago | (#26055083)

I did

You need to work on your reading comprehension then.

DJBDNS supports all RR types, by way of generic RR support. See near the bottom of this page [cr.yp.to] for details.

There is a series of patches that produce friendlier syntax for tinydns-data, a single component of DJBDNS. This isn't valuable to large sites that don't generate their zone data in tinydns-data's built-in format anyway.
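
For reference, a generic-syntax line in a tinydns-data file looks roughly like this (a hypothetical fragment: an AAAA record, RR type 28, for ::1, with every rdata byte spelled out in octal by hand, which is exactly the ergonomics being argued about):

    :ipv6.example.com:28:\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\001:3600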

Re:Slow down there (1, Informative)

profplump (309017) | more than 5 years ago | (#26053703)

Yes, it does. It supports arbitrary record types, something that even your precious BIND does not (even though the RFC says it should). Grep for "generic record" in the tinydns-data documentation.

It doesn't support IPv6 queries without a patch (not that software last updated in 2001 reasonably could), but it most definitely supports AAAA and SRV records -- I'm currently using it to serve both.

/ Feel free to not like DJBDNS, just pick technically valid reasons

Re:Slow down there (0)

Ice Station Zebra (18124) | more than 5 years ago | (#26054185)

You mistake lack of an easy syntax for AAAA and SRV records with non-support. How stupid.

Re:Slow down there (1)

jonaskoelker (922170) | more than 5 years ago | (#26054567)

Because Dan won't be happy unless he makes you install it in /crypto/strong/etc/librarees

He may be excentric, but I don't think he insists on spelling things wwong.

Re:Slow down there (1)

Just Some Guy (3352) | more than 5 years ago | (#26054911)

He may be excentric, but I don't think he insists on spelling things wwong.

I've never seen anyone else spell "/opt" as "/service".

Re:Slow down there (1)

mrsbrisby (60242) | more than 5 years ago | (#26055097)

"/service" is unrelated to "/opt".

"/service" is for a reliable init-based service manager. I believe Ubuntu's upstart can finally do all of the things supervise could do almost a decade ago.

"/package" serves a similar purpose for "/opt", except it has well defined semantics, where "/opt" does not.

Re:Slow down there (2, Informative)

Anonymous Coward | more than 5 years ago | (#26052511)

Note that ECC isn't a new crypto algorithm. Although it is newer than RSA, it's still over 20 years old. ECC is an IEEE standard and has been standardized by NIST as well. It's also discussed in RFC 4492 and other RFCs. The only part that's novel in this treatment is the choice of a particular elliptic curve (similar to choosing an exponent in RSA).

Re:Slow down there (4, Interesting)

lgw (121541) | more than 5 years ago | (#26052531)

Why is it necessary to use a new one when old ones have been demonstrated to be effective and secure?

He's pushing a new piece of software, not at all a new algorithm. In particular, old RSA-style product-of-primes encryption has been deprecated by the NSA for several years now, and shouldn't be used in any new software. Elliptic curve technology is one of the alternatives recommended by the NSA.

Bernstein may *be* an ass, but he's not *talking out of* his ass.

Industry coalitions are great, but this seems to be an attempt to create a new de facto standard controlled by a few large corporate interests

You've just described almost every successful engineering standard. As someone who has served on an international standards committee, let me say: the standard *is* what the vendors who control the market *do*, otherwise it's just a piece of paper. A useful and productive standards committee is formed when the few large corporate interests (who collectively have most of the market share in some space) get together and say "let's all agree to do things the same way".

Otherwise you end up with a meaningless standard ignored by products that represent 90% of a market, like the early days of the HTML "standard". Wow, that's useful.

Re:Slow down there (4, Informative)

Twylite (234238) | more than 5 years ago | (#26052563)

ECC is not a new crypto algorithm. It has been around since 1985, it is well studied, and it is recommended for use in the US (NIST, NSA Suite B), in the EU (the NESSIE project under the European Commission), and in Japan (the CRYPTREC government project).

Bernstein has created a new curve for use with ECC; one that is better suited to the requirements of this particular application than other existing curves. He claims to have followed the appropriate practices in generating this curve -- that obviously needs to be verified by suitably knowledgeable experts.

The "existing algorithm" is RSA, specifically RSASSA-PKCS1-v1_5. There are more secure signature schemes available for RSA, e.g. RSA-PSS. In addition DNSSEC will use 1024-bit RSA keys as a compromise (to reduce transfer size and computational overhead) -- NIST recommendations are that 1024 bits are too short for any purpose.

DNS forgeries are already having a significant impact - keep your eyes on the security reports.

Re:Slow down there (4, Informative)

harlows_monkeys (106428) | more than 5 years ago | (#26052583)

This Bernstein guy is pushing a new crypto algorithm

No, he is not. He's using an old, well-tested, well-studied algorithm, generally believed among cryptographers to be more secure than RSA.

"Slow down there" not applicable to commenters (1)

Onymous Coward (97719) | more than 5 years ago | (#26052793)

I might recommend looking further into this "new crypto" business.

Here are a couple links in case they're hard to find:

  1. http://dnscurve.org/dnssec.html [dnscurve.org]
  2. http://dnscurve.org/crypto.html [dnscurve.org]

Just in case it's also hard to follow links, here's some selected text:

IEEE P1363 standardized elliptic-curve cryptography in the late 1990s. NIST standardized several elliptic curves following the P1363 recommendations. In 2005, NSA issued a new "Suite B" standard, recommending the NIST elliptic curves (at two specific security levels) for all public-key cryptography and withdrawing previous recommendations of RSA.

Re:Slow down there (1)

xrayspx (13127) | more than 5 years ago | (#26053265)

His elliptic curve is cryptographically secure, he even says so on his web page. And it would be the only DNS solution that will pay you $500 if your site gets hijacked.

Re:Slow down there (1)

jonaskoelker (922170) | more than 5 years ago | (#26054549)

Isn't the standard process to submit it to the IETF or similar organization to have it ratified first?

I believe the IETF wants to see two independent implementations before standardizing something. That's why IP over Avian Carriers isn't an Internet standard, for one ;)

They may want to publish an informational RFC, though.

But it isn't SOP to write up a (semi)formal RFC as part of the discussion about how to solve any given problem. That's something you do once you want to set the solution in stone (or possibly something slightly softer).

Re:Slow down there (1)

mysidia (191772) | more than 5 years ago | (#26054575)

1. This Bernstein guy is pushing a new crypto algorithm. Why is it necessary to use a new one

It's not. One can't really say a custom algorithm has proven itself to provide the same level of security as a well-known algorithm. But it may be enough security to satisfy the need.

His reputation should not be a deciding or even motivating factor in the adoption of a new algorithm; Isn't the standard process to submit it to the IETF or similar organization to have it ratified first?

No, that's NOT the standard process, historically. It is becoming the path of more and more standards, though, and it results in (IMO) standards that are "more correct", but also less useful, and often overcomplicated (with features that 75% of the community wouldn't ask for).

Countless standards were adopted elsewhere and put into wide use on the Internet long before the IETF formalized them with an RFC.

Traditionally the IETF has played a role, but some of the most popular things that are standards today like HTTP and DNS did not start as RFCs proposed to the IETF.

In fact, they started with an implementation and a simple usable spec.

Not a 5000 page RFC document discussing every possible issue and security consideration implementors might have.

The thing DJB's proposal has going for it (IMO), is that it is simple, extremely easy to implement, and gives 90% of sites exactly what they need.

DNSSec on the other hand is extremely complicated, has certain drawbacks, and many of its theoretical advantages come at great cost, compared to the minimal effort that would have been required to revise the protocol to provide the same practical advantages.

Practical advantages over non-DNSSec DNS are things like query responses cannot be spoofed or hijacked by a third party while in transit.

More theoretical advantages of DNSSec are things like... other legitimate DNS servers can't conspire to serve records the zone owner doesn't approve of, and fool the client.

Re:Slow down there (1)

mrsbrisby (60242) | more than 5 years ago | (#26055051)

I disagree. His reputation is the single most important motivating factor here. Vixie et al. produced this mess, have been whining about DNSSEC since 1993, and still haven't come up with a deployment plan or a migration plan. DJB started with a system that was 100% compatible with DNS, instead of starting with a pipe dream.

Furthermore, when BIND and friends were vulnerable to these new attacks, DJB's software wasn't. Not just because he was lucky, but because he's a pedant who thought of similar attack vectors over a decade ago, and announced solutions to the BIND and namedroppers mailing lists: randomize port numbers, and don't accept answers to questions you didn't ask.

Needless to say, the BIND group had their "own" solution to those attack vectors, and I don't have to tell you how well those worked out.

What an idiot. (-1, Troll)

mmell (832646) | more than 5 years ago | (#26052333)

At just a moment when the internet at large needs to standardize on secure mechanisms, he has to gratuitously add another potential standard to the mix, increasing the difficulty of getting anything done.

If RSA were not considered computationally secure, I might applaud his intent to provide "a better mousetrap". However, since RSA is (to the best of my knowledge) still considered secure, his elliptic-curve-based version of DNS brings NOTHING of value to the table, and only serves to complicate what should otherwise be a reasonably straightforward proposition - that of migrating the internet's DNS servers to a secure DNS implementation.

Re:What an idiot. (1, Insightful)

Anonymous Coward | more than 5 years ago | (#26052403)

Yeah, who cares about improving both time and space efficiency of cryptography before any sweeping overhauls of a core internet service are performed? It's best to think about that afterwards.

You know, kind of an after... thought.

So you think RSA is broken? (1)

mmell (832646) | more than 5 years ago | (#26052831)

That seems to be the crux of his argument against DNSSEC - that RSA is broken (or soon to be broken).

Okay, let me restate the problem - should we implement a mechanism which is already available and well understood, as well as generally accepted as secure (Mr. Bernstein's assertion that RSA1024 is broken to the contrary), or should we implement it with a technology which can't be nearly as mature (not so much for its newness, but rather for its lack of broad acceptance/use)?

You're right - let's pick the shiniest technology on the shelf, we all know that elliptic curve encryption is faster, smaller and uncrackable, right? Uh, it is uncrackable, right? 'Cuz, you see, RSA1024 is uncrackable, or so I was once told. Now Dr. Bernstein says it ain't, but elliptic curve encryption is. Funny thing - if RSA1024 is more than enough to secure my bank transactions, why wouldn't I trust it with my DNS queries?

Re:So you think RSA is broken? (0, Flamebait)

Anonymous Coward | more than 5 years ago | (#26053093)

Even if your bank is currently using a 1024-bit certificate, your browser and the underlying protocols support more than that. DNSsec doesn't. It's taken decades to get DNS crypto taken seriously, and it makes sense to do it once, instead of over and over again after serious compromises have occurred.

1024-bit RSA is considered deprecated by NIST as of 2010. In a couple weeks, it'll be 2009. That's not a very useful lifespan. Meanwhile, elliptic curve cryptography gets significantly more protection per bit -- not as much as a good symmetric cipher, but about half that. Like a symmetric cipher (and unlike RSA), it scales linearly with the number of bits you give it. NIST considers ~163-bit ECC as secure as 1024-bit RSA; if you give it 256 bits (like DJB's implementation), that's roughly equivalent to 3072-bit RSA. Not to mention, it can be computed more quickly and transmitted in less space.

After all, it's not like DNS servers have to answer thousands of queries a minute, or encode answers into a single packet, or anything like that. Nope. Nothing like that.

Re:So you think RSA is broken? (1)

DrSkwid (118965) | more than 5 years ago | (#26053499)

You've made yourself look like a cock on the internet; won't be the first time or the last (for either of us :)

1024 bit RSA keys are too short and too long (1)

billstewart (78916) | more than 5 years ago | (#26054681)

We've known since the beginning that the security of RSA depends on the key length that computers can factor using the latest algorithms. 10-bit keys were always too short; 512-bit keys were cracked in 1999, and Shamir ("S" in "RSA") published work in 2003 suggesting that 1024-bit keys might be endangerable soon - they're probably fine for one-shot use on any individual message less important than planning the overthrow of a large government, and they're certainly fine for bank transactions, because the cost of breaking them far exceeds the amount of money in your bank account. But for a single long-term hard-to-change widely-used target, like the DNS root or .com, RSA1024 is pretty dubious. It's probably fine for signing www.yourdomain.com unless your domain is a large bank, but the protocol needs to support keys that are long enough for everybody, which means it needs to be 2048 or longer.

Unfortunately, DNS has fairly tight constraints on how many bits you can cram into a transaction, and 2048 bits isn't very practical. ECC uses much shorter keys, and while 160 bits isn't quite long enough, 255 appears to be really good, unless some improvement in the theory changes that.

But yeah, elliptic curve theory is a lot newer than factoring theory, and the risk of a theoretical breakthrough is a lot higher, though Moore's Law isn't much of a threat for a while. On the other hand, DNScurve uses ECC for transactions, not for long-term signatures, so there isn't the same One Big Target effect, since every DNS server is using its own different key.

Re:What an idiot. (0)

ccguy (1116865) | more than 5 years ago | (#26052523)

You might want to google him before calling him names.

I'm aware of his reputation (0, Flamebait)

mmell (832646) | more than 5 years ago | (#26052713)

In this instance, he's just plain wrong.

A personal opinion, that. YMMV.

Re:I'm aware of his reputation (0)

raju1kabir (251972) | more than 5 years ago | (#26052861)

Doesn't seem like a very informed opinion (at least based on what you've written so far).

This change, if actually implemented, will be with us for a while and will place a significant burden on DNS providers. Better to do it right, than to go with the first thing that comes along.

Okay, you take your time with that. (2, Insightful)

mmell (832646) | more than 5 years ago | (#26052957)

Let me know when widespread adoption seems likely.

Let me know when widespread support is available.

This is one of those cases where theory and practice differ. In theory, I'd love to wait until some absolutely uncrackable/fast/compact/available technology makes securing DNS possible. In the interim, this isn't the time to go back to square one and start over.

Of course, since DNScurve will never need a successor, it'll be worth the wait. Obviously, DNSSEC will have a successor, and so we should just not bother and stick with good ol' DNS until DNScurve has wide enough adoption to make migrating work.

Uh, given that DNSSEC has taken nearly a decade to get here, how long will it be for DNScurve?

In theory, there's no difference between theory and practice. In practice . . .

Re:I'm aware of his reputation (0)

Anonymous Coward | more than 5 years ago | (#26052871)

Actually you are just plain wrong.

1024-bit RSA shouldn't be implemented in any new software; even NIST says it isn't enough. 1024-bit is being suggested as a compromise because a more secure bit length or a different algorithm for DNSSEC would defeat one of the purposes of how DNS uses a _single_ UDP packet to convey its information. Additionally, the computational power required to sign those requests would increase significantly.

What's being proposed for DNSSEC is a bandaid at best and a power grab at worst.

Re:What an idiot. (2, Interesting)

CustomDesigned (250089) | more than 5 years ago | (#26053211)

If RSA were not considered computationally secure, I might applaud his intent to provide "a better mousetrap".

Since the 1024-bit RSA used by DNSSEC is *not* considered computationally secure, I'm sure he'll appreciate your applause.

Also, his "hack" of encoding the key in NS records actually simplifies deployment and could also be used by DNSSEC (at the expense of long DNS server names - *really* long in the case of DNSSEC).

DNSSEC is pre-signed, and can be checked by a client even if a DNS cache is compromised. (If you already have non-forged keys from the root.) But this also means you effectively publish your entire zone.

DNSCurve protects transactions, and depends on secure caches. Clients have to run their own caching nameserver if they don't trust the ISP DNS. (Pretty much the case now.) But you can also continue to use secret names in your zones.

Re:What an idiot. (1, Flamebait)

C0vardeAn0nim0 (232451) | more than 5 years ago | (#26054167)

At just a moment when the internet at large needs to standardize on secure mechanisms, he has to gratuitously add another potential standard to the mix, increasing the difficulty of getting anything done.

that's because he's an egomaniac who'll not be happy until the internet becomes DJBnet, all based on DJB/IP, with DJBDNS, DJBML and the like

I have deep respect for DJB (2, Insightful)

Anonymous Coward | more than 5 years ago | (#26052367)

...but he is not seriously attempting to establish a different protocol all by himself, is he? The root server administrators would never switch, and without root support, there is no place to anchor the hierarchy. He might have had a chance earlier in the standardization phase, but now that there are live DNSSEC domains, his chances are practically zero.

Re:I have deep respect for DJB (1, Interesting)

alta (1263) | more than 5 years ago | (#26052443)

I have deep disdain for djb. Every time he finds a problem with the internet, and boy does he find them, his solution is to write his very own version that he maintains control of. Don't like something about HIS version? Screw you, because it hasn't had any security bugs since it was first released. And screw the fact that it hasn't been updated, and therefore hasn't picked up a single new feature, in 10 years. And yes, I completely think he's trying to build a completely new solution to the problem. Have you ever known him to FIX anything? No, he scraps it all and writes something small and feature-poor, but secure due to simplicity. I don't think anyone's found an overflow in Notepad either, djb! He just builds the alternatives to feed his ego. Please, people, don't feed the animals.

Re:I have deep respect for DJB (0)

Anonymous Coward | more than 5 years ago | (#26052571)

I think you will find few people who disagree with you regarding DJB's stance on cooperation. However...

secure due to simplicity

That is also known as the KISS principle, which is believed to be a major reason for the success of Unix. You were saying?

Re:I have deep respect for DJB (1)

cjfs (1253208) | more than 5 years ago | (#26052577)

I don't think anyone's found a overflow in notepad either djb!

I wouldn't doubt it - try typing in "this app can break" (without quotes) or another string in that format. Save the file and reopen.

Re:I have deep respect for DJB (0)

Anonymous Coward | more than 5 years ago | (#26054317)

I can break this app!

Don't throw the baby out with the bathwater (4, Insightful)

EdwinFreed (1084059) | more than 5 years ago | (#26052627)

I'll say this for Dan - he is often quite good at analysis and finding problems. But after watching a huge fight between him and the authors of the delivery status notification format for email, with the result that positions became completely polarized and nobody succeeded in convincing anyone else of the merits of their respective ideas, I decided the best way to deal with him is to listen to his criticisms, evaluate them carefully, and if it makes sense to address them, do so. But attempting to engage in a meaningful discussion with him is a waste of time - he gets angry way too easily and starts throwing all sorts of nasty invective around, and the result is almost always that the interaction spirals straight down the crapper.

Re:I have deep respect for DJB (1)

MichaelSmith (789609) | more than 5 years ago | (#26052653)

The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man.

George Bernard Shaw

Re:I have deep respect for DJB (1)

abigor (540274) | more than 5 years ago | (#26052943)

I think he's released most of his stuff into the public domain, has he not?

Re:I have deep respect for DJB (1)

Ice Station Zebra (18124) | more than 5 years ago | (#26054291)

Do you know djb? If not, why do you judge him so? He has very good ideas and follows through on his promises. The people who complain that they can't modify his software, roll it up and distribute it are the ones who gave us things like broken SSH in Debian.

Re:I have deep respect for DJB (1)

Wdomburg (141264) | more than 5 years ago | (#26052455)

After the smashing success of Internet Mail 2000 [cr.yp.to] he would be a fool not to!

DNSCURVE doesn't work... (2, Informative)

nweaver (113078) | more than 5 years ago | (#26052433)

The argument against DNSSEC is that it's not needed for securing DNS: that the in-path adversary can F@#)* the final app anyway, unless the final app never trusted the DNS name.

However, there is one key adversary which is in-path on the naming but NOT in path on the data: the DNS recursive resolver. We have seen resolver settings changed by malcode, ISPs wildcarding NXDomain errors, and even DNS service providers (like OpenDNS) man-in-the-middle'ing google!

DNSSEC addresses this adversary, because it is a data integrity protocol. DNSCurve does not: it explicitly trusts the recursive resolver and offers NO security guarantees against this very serious adversary.

Fortunately, nobody in the DNS world cares about DNScurve, so it will probably just go away.

In fact, DNScurve really should be restructured to be a competitor for DTLS, a lightweight datagram communication confidentiality & integrity protocol with a much lower key-setup latency.

Re:DNSCURVE doesn't work... (1)

foom (29095) | more than 5 years ago | (#26052635)

DNSSEC addresses this adversary, because it is a data integrity protocol. DNSCurve does not: it explicitly trusts the recursive resolver and offers NO security guarentees against this very serious adversary.

Okay, so where can I find a patch to make glibc's stub resolver verify DNSSEC signatures, so that I can be protected from my recursive resolvers? DNSSEC has been around for nearly a decade: surely someone's implemented this by now?

Re:DNSCURVE doesn't work... (1)

nweaver (113078) | more than 5 years ago | (#26052721)

I think you can use Unbound as a stub resolver.

Re:DNSCURVE doesn't work... (2, Informative)

foom (29095) | more than 5 years ago | (#26053065)

You might be interested in this thread:
https://lists.dns-oarc.net/pipermail/dns-operations/2008-May/002736.html [dns-oarc.net]
where Paul Vixie recommends that nobody should ever deploy a stub resolver that supports DNSSEC, but instead use TSIG to talk to the recursive resolver. Which really makes DNSSEC's security characteristics look very much like DNSCurve. The only difference being that DNSSEC is hugely more complex to use and implement.

Easy (1)

wsanders (114993) | more than 5 years ago | (#26052781)

Run BIND with DNSSEC and point your resolver to localhost. That's actually the way God, or whoever, intended it to be run.

In practice, most organizations will run a local recursive "trusted" BIND server with DNSSEC behind a firewall just like they do now. Eventually substantial numbers of ISPs will do so too. No one does now because something like 0.01% of .com domains are DNSSEC-ified.
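
For the record, the validating-resolver-on-localhost setup is only a few lines on a reasonably recent BIND (a sketch; option names and the trust-anchor mechanism vary by version, so treat this as illustrative rather than authoritative), plus pointing /etc/resolv.conf at 127.0.0.1:

    // named.conf sketch: local caching resolver with DNSSEC validation.
    // "dnssec-validation auto" uses the built-in root trust anchor on newer
    // BIND releases; older releases need an explicit trusted-keys block.
    options {
        listen-on { 127.0.0.1; };
        recursion yes;
        dnssec-validation auto;
    };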

It should be no more difficult than setting up HTTPS was; of course, it only took 10 years or so to get that out of the hands of the security high priests.

I am sure DJB's proposal holds merit. But engineering is also about what can be done. If you are paranoid about the NSA redirecting your domain to a porn site, you probably should be worrying about far worse things instead.

Re:Easy (1)

foom (29095) | more than 5 years ago | (#26052881)

But if I'm running BIND on my local machine, it would be just as secure under the DNSCurve proposal as with DNSSEC.

If my organization has a central, recursive, trusted DNS server, then I'm just as secure under DNSCurve as with DNSSEC.

The only place DNSCurve loses is if I have a stub resolver pointing to an *untrusted* recursive resolver on another host, instead of running my own recursive caching resolver. So maybe I just shouldn't do that...

Some bad ideas won't go away... (2, Interesting)

damn_registrars (1103043) | more than 5 years ago | (#26052857)

We've discussed before just how terrible of an idea it is to start selling gTLDs and let the spammers and con artists start running the entire show.

And there have been more than a few objections [icann.org] on the list about selling gTLDs, as well.

Yet apparently ICANN is set to go ahead with it, anyways.

Funny, most organizations would be opposed to taking action that reduces their own authority (which is one obvious effect of selling gTLDs) - but of course with the prospect of seeing a small, immediate infusion of cash from the process, ICANN is all over it.

Funny, in the name of profit, we are moving towards less regulation, less control, less accountability, and more resemblance to lawlessness.

Unfortunately once they make this mistake there is no going back. We'll have unscrupulous registrars selling to criminals all over the world and we'll have zero control over the domains that turn profit on (counterfeit) drugs, (pirated) software, (counterfeit) fashion goods, (stolen) personal identification and the like.

Re:Some bad ideas won't go away... (1)

nsaneinside (831846) | more than 5 years ago | (#26053323)

Funny, in the name of profit, we are moving towards less regulation, less control, less accountability, and more resemblance to lawlessness.

Hey, it "worked" for the economy until just a few months ago.

Why not just default to TCP for DNS resolving? (1)

Ash-Fox (726320) | more than 5 years ago | (#26053275)

Why not just default to TCP for DNS resolving over UDP?

It solves the problem.

Re:Why not just default to TCP for DNS resolving? (1)

CyberTech (141565) | more than 5 years ago | (#26053559)

Why not just default to TCP for DNS resolving over UDP?

It solves the problem.

Because it increases the packet count for every DNS request by 8 packets minimum. There's a ton of DNS traffic... that adds up. Not to mention the system overhead of connection establishment on the higher-usage (think root) servers.

Re:Why not just default to TCP for DNS resolving? (1)

Just Some Guy (3352) | more than 5 years ago | (#26053725)

There's a ton of DNS traffic... that adds up. Not to mention the system overhead of connection establishment on the higher-usage (think root) servers.

In fairness, you could mitigate that two ways:

  1. Get people to use their upstream resolvers so that caching actually works.
  2. Create a sub-protocol to send multiple queries over the same TCP connection so that the connection setup and teardown can be amortized over many queries (see the sketch below).
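
A sketch of point 2, hedged: DNS over TCP already frames each message with a two-byte length prefix (RFC 1035), so several queries can reuse one connection without inventing a new sub-protocol. This assumes the third-party dnspython package and a resolver at the placeholder address 192.0.2.53 that answers over TCP:

    # Send several DNS queries over one TCP connection, amortizing the
    # connection setup. Uses the standard 2-byte length prefix from RFC 1035.
    import socket, struct
    import dns.message

    names = ["example.com", "example.net", "example.org"]

    with socket.create_connection(("192.0.2.53", 53), timeout=5) as sock:
        for name in names:
            query = dns.message.make_query(name, "A").to_wire()
            sock.sendall(struct.pack("!H", len(query)) + query)

            (length,) = struct.unpack("!H", sock.recv(2))
            wire = b""
            while len(wire) < length:
                wire += sock.recv(length - len(wire))
            print(name, dns.message.from_wire(wire).answer)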

Re:Why not just default to TCP for DNS resolving? (1)

silas_moeckel (234313) | more than 5 years ago | (#26054697)

Unfortunately that's not going to work well.

Many of the current top-level DNS servers are running via anycast, meaning many different servers share the same IP and the internet routes to the nearest one. This works very well since it scales, but it cannot handle TCP: it has to be one packet in, n packets out, otherwise the first and second packets might not reach the same server.

Keeping state for all the upper-level recursive servers would be a nightmare, and who gets to choose who gets to be an upstream? This sounds like Usenet: it worked well until it got big, then it became too much trouble for ISPs to support well.

Small delays in DNS resolution time can cascade into big delays for the end user. Granted, I already see way too many foreign CNAMEs and HTTP 302 redirects, etc., causing delays on too many sites, even before tacking on TCP setup latency.

Many firewalls are improperly configured not to allow DNS over TCP now; changing the response much, or even sending more UDP packets, may cause gear to have fits.

Overall, neither approach is perfect.

What's DNSSec going to cost us? (4, Insightful)

ErikTheRed (162431) | more than 5 years ago | (#26053869)

DNSSec uses hierarchical signature chains (similar to SSL). So, um, they're going to sign our keys out of the goodness of their hearts, right? Oh, they're not? So the real reason that these registrars are running around with giant erections over DNSSec is because it's a whole new revenue stream for them? Makes sense now.

Not that I'm against anyone making a buck, but if there's a decent way to accomplish the same goal without having another set of keys to sign (and having to update ZSKs every freaking month) then I'd be happy to give it a fair shake. It's not like most admins have all sorts of free time to deal with additional overhead.

Another point in favor of DJB - Yes, he's abrasive, but when was the last time tinydns needed to be updated because of a security vulnerability? Now compare with BIND and Windows Server. We can argue his quirks all day long, but dude does have hands down the best record (pun semi-intended) when it comes to DNS security.

Will either DNSSEC or Curve solve this prob? (2, Interesting)

JSBiff (87824) | more than 5 years ago | (#26054215)

I've thought before that it would be useful, if I'm using my laptop on a public WiFi network, to be able to use a pre-designated, trusted DNS Server (so that the public network's DNS Server can't send me to bogus servers).

It would be a nice feature if I could have my computer cache the public key of my ISP's DNS Server (or maybe OpenDNS; the point is, some DNS Server *I* trust, instead of a random DNS server), then, no matter what network I connect to, always use that DNS Server, with the DNS packets being signed by the trusted server, so I know they are really from that server. (I realize I can use OpenDNS pretty much anywhere, but I don't know if there is anything preventing the local network from doing a MITM attack?)

It might also be useful, for this type of system, if my computer can authenticate to the ISP DNS Server (because they might not normally allow DNS requests from outside their own network, but if there were a specified authentication mechanism as part of the standard, they might allow me to roam if I authenticate)?

Maybe the best answer is to just use the VPN capability on my home router to always VPN to that router, which will then use my ISP's DNS. Until DNSSec is implemented widely, that's the best solution for now, anyhow, I think.
