
BIND Still Susceptible To DNS Cache Poisoning

Soulskill posted about 6 years ago | from the still-in-a-bind dept.

Networking

An anonymous reader writes "John Markoff of the NYTimes writes about a Russian hacker, Evgeniy Polyakov, who has successfully poisoned the latest, patched BIND with randomized ports. Originally, the randomized ports were never supposed to completely solve the problem, but just make it harder to do. It was thought that with port randomization, it would take roughly a week to get a hit. Using his own exploit code, two desktop computers and a GigE link, Polyakov reduced the time to 10 hours."


Power DNS Recursor.. (0)

Anonymous Coward | about 6 years ago | (#24536715)

Much better program, and faster recursion..

Where's the problem?

Re:Power DNS Recursor.. (1, Informative)

cortana (588495) | about 6 years ago | (#24536953)

$ apt-cache -n search power dns | wc -l
0

Re:Power DNS Recursor.. (4, Informative)

shallot (172865) | about 6 years ago | (#24537133)

% apt-cache -n search pdns-recursor
pdns-recursor - PowerDNS recursor

Granted, it *is* actually missing on several architectures because of some unimplemented system calls, but that shouldn't bother too many people.

Re:Power DNS Recursor.. (4, Informative)

bconway (63464) | about 6 years ago | (#24537625)

Consider reading the links in the article. Obfuscation isn't a fix.

The article says that DJBDNS does not suffer from this attack. It does. Everyone does. With some tweaks it can take longer than with BIND, but the overall problem is there.

Eat my goatse'd penis! (-1, Troll)

Anonymous Coward | about 6 years ago | (#24536741)

cue the Bud Light voice (-1)

Anonymous Coward | about 6 years ago | (#24536747)

here's to you Mr DNS Cache Poisoner man

Oh no terminology (1, Funny)

Anonymous Coward | about 6 years ago | (#24536757)

Russian hacker, Evgeniy Polyakov

a Russian physicist

Which one which one aaaaaarhhhhh I'm so confused.

Re:Oh no terminology (1)

urcreepyneighbor (1171755) | about 6 years ago | (#24537283)

Aren't all physicists hackers?

Re:Oh no terminology (1, Funny)

Anonymous Coward | about 6 years ago | (#24537357)

No, they're just trying to reverse engineer God's physics engine.

Re:Oh no terminology (1)

madboson (649658) | about 6 years ago | (#24538397)

Actually, by /. language standards physicists are hackers. By the rest of the world's standards, not really. Just read their code and it's obvious :)

Re:Oh no terminology (1)

Nerdfest (867930) | about 6 years ago | (#24537531)

Either one explains why he could generate the bad entries so fast ... he was rushin' ...

Old, obvious, but just so damn necessary

BIND (0)

Anonymous Coward | about 6 years ago | (#24536783)

BOUND

Re:BIND (4, Funny)

MrNaz (730548) | about 6 years ago | (#24537131)

I think you mean B0wnd

IPv6 could solve this! (4, Insightful)

jamesh (87723) | about 6 years ago | (#24536787)

With IPv6, you would have enough source addresses to add that to the 'random pool' too. Another 64K addresses would make it harder to hack.

Does anyone else think that maybe we are approaching this problem the wrong way?

Re:IPv6 could solve this! (1)

Tony Hoyle (11698) | about 6 years ago | (#24536947)

The source addresses would be the same though - there are only a limited number of DNS servers and it's not hard to sniff a link and work out what the common ones are... so you're not adding anything, just creating a situation where the average home user can't actually use your DNS server.

Re:IPv6 could solve this! (3, Insightful)

diegocgteleline.es (653730) | about 6 years ago | (#24537019)

Another 64K addresses would make it harder to hack.

You said it: it'd make it harder, but not impossible, especially with hardware getting faster every year.

Re:IPv6 could solve this! (3, Informative)

Niten (201835) | about 6 years ago | (#24537177)

Does anyone else think that maybe we are approaching this problem the wrong way?

Yes, the wrong way being tacking on extra transaction ID space by means of fragile kludges such as random source port numbers and, possibly, random IPv6 addresses.

It will require a lot more effort, but the right way to solve this problem is by improving the protocol itself. That may mean putting a much larger transaction ID field in the packets, where it cannot be mangled by NAT devices. Or it may mean delegating nameservers by IP address rather than domain name so that resolvers will no longer need to accept potentially-malicious glue records. But preferably, it means moving to a cryptographically-strong domain name system such as DNSSEC.

Re:IPv6 could solve this! (1)

vrt3 (62368) | about 6 years ago | (#24537679)

I haven't studied this issue in detail, but wouldn't it help a lot to use TCP instead of UDP? Then you don't even need transaction IDs; the transaction is simply the TCP connection.

Re:IPv6 could solve this! (1)

Paul Jakma (2677) | about 6 years ago | (#24537711)

Yep.

Re:IPv6 could solve this! (2, Informative)

Paul Jakma (2677) | about 6 years ago | (#24537939)

Or it may mean delegating nameservers by IP address rather than domain name so that resolvers will no longer need to accept potentially-malicious glue records.

Good post. Forgive me for focusing in on this one point and nitpicking it.. ;)

0. Glue used to have a specific meaning: records configured in a parent to help delegate a zone. You (and many people reporting on the current flaws) seem additionally to use it to refer to "additional answers" in DNS replies. While such answers often are glue records, they're not quite the same thing. At least, they didn't use to be. Perhaps the meaning has moved on, but I'll use "glue" in the stricter sense.

1. That's essentially already the case, for in-zone delegations (ie delegating example.com to a name in example.com., like ns.example.com.). The authoritative server must be configured with glue, and it must return that glue in the additional answer section for anyone to be able to query ns.example.com.

(Interestingly, DJB has long had a page arguing that "out-of-bailiwick" delegations are bad [cr.yp.to] , amongst other things).

2. Resolvers mustn't treat glue from the delegator as authoritative anyway. Resolvers are only supposed to believe additional answers that are in-bailiwick (ie you'll only believe an additional record for/in example.com. if it came from a DNS server you know to be authoritative for example.com., e.g. because you just queried it for a name in example.com.)

3. The problem here is not inherent to glue/additional, but in knowing whether a reply is authoritative or not. The only good way to fix this is to secure the DNS chain of trust, either through near-ubiquitous deployment of IPSec+PKI for DNS servers (ha!) or a PKI inside DNS.
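For the curious, here is a minimal sketch of the in-bailiwick rule from point 2, in Python; the helper name and the exact check are illustrative, not lifted from any real resolver:

def in_bailiwick(record_name, zone):
    """Return True if record_name is at or below zone
    (e.g. 'ns.example.com.' is in bailiwick for 'example.com.')."""
    record = record_name.lower().rstrip('.').split('.')
    parent = zone.lower().rstrip('.').split('.')
    return record[-len(parent):] == parent

# A resolver that just queried a server authoritative for example.com.
# should only accept additional records that pass this check:
assert in_bailiwick('ns.example.com.', 'example.com.')
assert not in_bailiwick('www.bankofamerica.com.', 'example.com.')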

Re:IPv6 could solve this! (1)

Znork (31774) | about 6 years ago | (#24538663)

but the right way to solve this problem is by improving the protocol itself.

Which, if one reads various proposals to do just that, appears to be hampered by the group that thinks we should let the old DNS protocol be crap until people adopt, tada, DNSSEC.

But preferably, it means moving to a cryptographically-strong domain name system such as DNSSEC.

I'm fine with DNSSEC. As long as I get to have the root keys, m'kay?

In the end, I think the trust issue is the killer and final showstopper for DNSSEC. Until DNSSEC is reengineered to solve the political issues in ways palatable to all parties (ie, scrap the third party hierarchical trust) I doubt it will get as widely deployed as would be necessary.

Meanwhile, adding a much larger transaction ID would make things safer and leave more time for fixing DNSSEC.

Re:IPv6 could solve this! (2, Informative)

Antibozo (410516) | about 6 years ago | (#24539067)

The third party hierarchical trust you disdain is one of the primary benefits of DNSSEC, because DNSSEC can eventually replace certificates for distribution of public keys. Currently, the only PKI we have is from a third-party non-hierarchical trust—the CAs—who are really not that trustworthy. DNS, however, is already hierarchical, and it makes a lot more sense to use a hierarchical system of trust—the same system in fact—to validate it. Do you really think having hundreds of trust anchors makes more sense than having a single trust chain?

What hampers DNSSEC, more than anything, is all the FUD about it. It's really not difficult to implement, and there are tools for nameserver operators to use. If people would start actually practicing signing domains, even without the trust chain, a lot of their fears would be allayed. What is technically standing in the way is the failure to plan for it on the part of the major registrars, who need to provide a mechanism for signing the DS records. Of course, a lot of those registrars also happen to make a lot of money on the side as CAs, and one of them in particular also operates the .com and .net TLDs. You do the math—who benefits the most both from FUD about DNSSEC and from the insecurity of traditional DNS?

People need to set aside their paranoia about DNSSEC and understand that the worst case, most fantastical, trust violation scenarios for DNSSEC are still better than the status quo. And the best case is that everyone ends up with ubiquitous PKI with no per-unit cost. You could securely publish all your public keys, ssh host keys, personal public keys, etc. for the simple cost of getting a domain. If people can start seeing the possibilities for a single distributed, redundant, hierarchical, secure network database, there will be more community pressure on the registrars to get with the program. ICANN already plans to have a signed root by the end of the year and there are several ccTLDs that are already signing their zones. What excuse do we have for not even having a plan for .com? It's infuriating that we are facing the current situation, after all these years, and people are still questioning whether DNSSEC is a good idea.

Re:IPv6 could solve this! (1)

TubeSteak (669689) | about 6 years ago | (#24537179)

Does anyone else think that maybe we are approaching this problem the wrong way?

Of course 'we' are.

Making something harder to exploit != fixing the exploit.

Re:IPv6 could solve this! (1)

Zocalo (252965) | about 6 years ago | (#24537223)

Does anyone else think that maybe we are approaching this problem the wrong way?

No, although I think that quite a few people may have the wrong end of the stick. I got the distinct impression that while it's still a good idea, using random source ports wasn't intended to be THE fix for the problem. Rather, it was just a generic, vendor-neutral workaround to give people a chance to secure themselves against the immediate threat without revealing enough information to black hats to exploit the issue. A more permanent solution, which might otherwise have entailed revealing critical information about the vulnerability to those wishing to exploit it through a diff and code analysis, can now be worked on and applied in subsequent patches.

To use the horse and barn analogy:

  • Unpatched server: The horse is looking curiously out of the barn, or has already bolted.
  • Patched server (port randomization): The door is closed, but the horse could still potentially kick it open and bolt.
  • Patched server (final fix): The door is closed, and bolted. Until the next time.

Re:IPv6 could solve this! (1)

mikkelm (1000451) | about 6 years ago | (#24537289)

I haven't been too far into the technical aspects of this issue, but from what I gather, it is related to brute-force "predicting" of the source ports used for recursion, and injecting fraudulent responses?

It would generate more traffic, sure, but wouldn't an immediately obvious solution be to demand multiple confirmatory replies to recursion, each request using a different randomisation algorithm for the source port used?

Re:IPv6 could solve this! (1)

cheater512 (783349) | about 6 years ago | (#24537349)

Yeah, just make the transaction ID 64 bits. Fixed.

Or go with the whole dnssec system.

Why do people still use BIND? (1, Insightful)

Mr.Ned (79679) | about 6 years ago | (#24536803)

Why do people still use BIND? It has a track record of security vulnerabilities almost as long as Sendmail's.

Re:Why do people still use BIND? (2, Funny)

PJCRP (1314653) | about 6 years ago | (#24536829)

Because most networkers engage in networking-BDSM as regular practice?

This isn't a BIND problem. (5, Informative)

CustomDesigned (250089) | about 6 years ago | (#24536855)

This has nothing to do with BIND vulnerabilities. djbdns, or whatever you feel is more secure, has exactly the same problem. It is a protocol weakness. The article mentions BIND only because it is the reference implementation for DNS.

The most interesting idea I've seen is to use IPv6 for DNS [slashdot.org] . The oldest idea is to start using DNSSEC.

Re:This isn't a BIND problem. (0)

Anonymous Coward | about 6 years ago | (#24536955)

As far as I understand it, the handling of glue records could be much stricter, which would make this exploit infeasible. There is no reason to accept glue information which you already have and which has not expired. The opportunistic acceptance of glue records is what makes the current attack possible.

Re:This isn't a BIND problem. (1, Interesting)

Anonymous Coward | about 6 years ago | (#24537061)

This has nothing to do with BIND vulnerabilities. djbdns, or whatever you feel is more secure, has exactly the same problem. It is a protocol weakness. The article mentions BIND only because it is the reference implementation for DNS.

The most interesting idea I've seen is to use IPv6 for DNS [slashdot.org] . The oldest idea is to start using DNSSEC.

You sir are wrong. DJBDNS and PowerDNS are both immune to this cache poisoning. Yes, it is in the protocol, but the protocol is a minimum standard to follow. DJB put in features that allow the cache to only come from defined servers, etc. It is immune to this attack.

Re:This isn't a BIND problem. (3, Insightful)

CustomDesigned (250089) | about 6 years ago | (#24537205)

Since the basis of the attack is spoofing server IPs, how does DJBDNS detect spoofed packets? "only come from defined servers" is useless when the packets are spoofed. It helps, of course, to not accept new glue records whenever they appear, but keep existing ones until they expire. But this just makes the attack take a little longer.

Re:This isn't a BIND problem. (5, Informative)

Anonymous Coward | about 6 years ago | (#24537273)

The basis of the attack is to include "extra" information in a forged response to a query for a non-existent host. BIND trusts that extra information, while other DNS servers only pay attention to it if it falls under certain strict rules.

I ask for aaaae3fcg.bankofamerica.com and also send 100,000 responses to that query to that same recursive DNS server, all saying something to the effect of "A record aaaae3fcg.bankofamerica.com = bah; also, look to 666.666.666.666 for anything else related to bankofamerica.com. Oh, and cache this until the sun goes dark."

Nobody asked BIND to believe the part about THE REST OF THE WHOLE BLOODY DOMAIN in the response for a single record in the domain. No other servers cache that information.

That BIND also used non-random ports made it a 5-minute attack over a fast link, instead of a 10-hour attack. That in the past BIND used bad random numbers for the transaction ID made it a 30-packet attack...

Who's the fanboy now?
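To make the mechanism concrete, here's a toy Python sketch of the difference between a cache that believes arbitrary additional records and one that only believes extras for the exact name it asked about; the class names and behavior are deliberately oversimplified illustrations, not BIND's actual logic:

# Toy resolver caches; keys are DNS names, values are IP strings.
class NaiveCache:
    def __init__(self):
        self.cache = {}

    def accept_response(self, qname, answer, additional):
        # Believes the answer AND whatever extra records the sender tacked on.
        self.cache[qname] = answer
        self.cache.update(additional)

class StricterCache(NaiveCache):
    def accept_response(self, qname, answer, additional):
        self.cache[qname] = answer
        for name, ip in additional.items():
            # Only believe extras for the exact name that was asked about,
            # and never let them overwrite an existing entry.
            if name == qname and name not in self.cache:
                self.cache[name] = ip

# Attacker wins the race for a throwaway name and piggybacks a bogus record:
forged_extra = {'www.bankofamerica.com': '203.0.113.66'}
naive, strict = NaiveCache(), StricterCache()
naive.accept_response('aaaae3fcg.bankofamerica.com', '192.0.2.1', forged_extra)
strict.accept_response('aaaae3fcg.bankofamerica.com', '192.0.2.1', forged_extra)
print(naive.cache.get('www.bankofamerica.com'))   # poisoned: 203.0.113.66
print(strict.cache.get('www.bankofamerica.com'))  # None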

Re:This isn't a BIND problem. (2, Interesting)

Anonymous Coward | about 6 years ago | (#24537293)

It does not make the attack take "a little longer". It makes cache poisoning take as long as it took before the new attack method. If you only get a chance to poison the cache once whenever the cache purges the target record, then you have to guess the transaction ID correctly on the first try. The new thing about the current attack is that you get as many tries as you want at guessing transaction IDs and port numbers. That only works because servers allow glue to replace already cached records.

Port randomization makes it less probable that you guess right, but since you can guess hundreds of times per second, you'll eventually succeed at poisoning the cache. If the server only accepted glue records when it doesn't already have the information cached, then you could only guess once per TTL.
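A rough back-of-the-envelope sketch of why unlimited retries matter so much; the flood rate and TTL below are made-up illustrative numbers:

# With ~2^16 transaction IDs and ~2^16 usable source ports, a single forged
# packet matches with probability about 1 / 2^32.
p_per_packet = 1 / 2**32
forged_packets_per_sec = 100_000          # assumed flood rate on a fast link

# Kaminsky-style: the attacker retries continuously with fresh query names.
expected_seconds = 1 / (p_per_packet * forged_packets_per_sec)
print(f"continuous retries: ~{expected_seconds / 3600:.1f} hours on average")

# Pre-Kaminsky: one poisoning window per TTL expiry of the target record.
ttl_seconds = 86_400                      # assumed 1-day TTL
window_packets = 100                      # packets you can land inside one race window
p_per_window = window_packets * p_per_packet
expected_days = (1 / p_per_window) * ttl_seconds / 86_400
print(f"once per TTL: ~{expected_days:,.0f} days on average")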

Re:This isn't a BIND problem. (3, Informative)

Anonymous Coward | about 6 years ago | (#24537585)

Not all dns servers cache the glue records beyond that transaction. Those that don't *cache* the glue are not vulnerable to this attack.

Re:This isn't a BIND problem. (1, Informative)

Anonymous Coward | about 6 years ago | (#24537527)

It doesn't. It simply ignores answers to questions it didn't ask.

That makes a successful forgery extremely unlikely (even in the face of larger and faster computers and network links), and it can be made even less likely by simply increasing your TTL.

Re:This isn't a BIND problem. (0)

Anonymous Coward | about 6 years ago | (#24537649)

Incorrect. Read the links in the article.

Re:Why do people still use BIND? (-1, Flamebait)

Minwee (522556) | about 6 years ago | (#24537075)

You're right. If only people would switch to MacOS they would never have to worry about these kinds of problems.

You Will Never Solve This Problem! (4, Insightful)

segedunum (883035) | about 6 years ago | (#24536817)

I might not have one of the lowest Slashdot IDs around, but I am absolutely astonished at some people's astonishment over this. DNS, by definition, is all about trusting the forwarders you are using, or the other DNS servers you are caching from, and trusting the DNS server you use from there. That's where the problem is, so if people are shouting and screaming about trust now then it's all a bit late.

If your DNS server says that slashdot.org resolves to something other than 216.34.181.45 then that's where you're going to end up. There are also legitimate reasons why someone might want to do something like that, and it is part of the inherent flexibility that has made the internet and its technologies as ubiquitous and as well used as they are. No one said that there weren't downsides. If you locked everything down in the manner that some idiots will inevitably now talk about, shouting and squealing about financial institutions, then I'm willing to bet that you will lose a good portion of the flexibility that makes the 'internet' actually work on a wide scale.

Re:You Will Never Solve This Problem! (1)

Timothy Brownawell (627747) | about 6 years ago | (#24536949)

This isn't about evil servers. It's about impersonating servers by spoofing their address, and about how the passwords built into the question/response packets aren't long enough to prevent this.

Re:You Will Never Solve This Problem! (2, Interesting)

gruntled (107194) | about 6 years ago | (#24537281)

Isn't the real issue here our continued reliance on passwords that can be used more than once? When are we going to move wholeheartedly into a single-use password environment?

Incidentally, when is somebody going to throw the fact that US banks have completely ignored the two-factor authentication requirement (part of the Patriot Act, I believe; maybe we should start sending *bankers* to Gitmo and see if *that* gets their attention) back at the finance industry when they start to squeal?

Re:You Will Never Solve This Problem! (1)

Stellian (673475) | about 6 years ago | (#24537887)

Isn't the real issue here our continued reliance on passwords that can be used more than once? When are we going to move wholeheartedly into a single-use password environment?

No, that's not the real issue. Two-factor authentication does not solve the problem of DNS poisoning: the user will enter the one-time password into the fake site, which in turn will log in to the real site and transfer a million dollars to Nigeria.
SSL does not solve the problem of DNS poisoning in a practical sense: it only works if the user opens an https:// shortcut; the large majority of users who type "paypal.com" in the address bar will not notice that the fake PayPal site they are seeing failed to redirect them to an SSL connection.
The only thing I can think of that really plugs any DNS vulnerability is a smart card / USB token type of device that does its own verification of the remote website's credentials before disclosing login information.

Re:You Will Never Solve This Problem! (1)

houghi (78078) | about 6 years ago | (#24537323)

The flexibility of DNS and the downsides of DNS are changing. The thing is that up till now you could assume that the answer was correct the majority of the time. The HUGE majority of the time.

This made the downsides minimal. The downsides were mostly due to people forgetting a trailing dot or mistyping an IP address. Those are acceptable downsides when compared to the upsides of the flexibility.

However, when the downside becomes that you cannot know whether or not you can trust the outcome, then it becomes a problem.

Think of people using a telephone. This has the upside of flexibility. You can dial any number. The downside is that you can mistype a number, or people change numbers. However, in the majority of cases you trust that you will get the person you intended to dial.

If, through malicious interference, you have no idea where you are calling anymore, then it becomes a problem. What if you dial, get connected to a pay number at 5 USD per minute, which then puts you through to your intended person?

Would you still say 'hey, everybody knew there were downsides', or would you say that we need to change it so that this never happens again?

OK, perhaps a bit too strong: what if any phone company could connect you to their system without you even knowing it? Suddenly you pay more when you call. Yep, that has happened, and the situation has changed now.

Re:You Will Never Solve This Problem! (3, Insightful)

boto (145530) | about 6 years ago | (#24537811)

I wonder why the parent is modded Insightful. You don't seem to have gotten the problem.

The problem is not the servers being able to redirect you to a different address, but the fact that any person (not only the people that control the servers you query) can make your server direct people anywhere.

The problem is not about trust, but not being able to make sure who you are really getting a message from. You can't even have a trust problem if you are not sure who is talking to you.

Re: You Will Never Solve This Problem! (1)

nothings (597917) | about 6 years ago | (#24538167)

You're wrong; the problem is trivially solvable. For example, 128-bit transaction IDs would pretty much solve this particular scenario.

Unfortunately that requires a protocol change, which is a hard social problem. Adding 16 bits of source port randomization didn't require a protocol change, and they thought it was good enough. But maybe it wasn't (this particular demonstration is a little too laboratory-science for me; the flood of wrong responses would probably turn into a visible DoS attack in the real world). If it wasn't, though, no, it's not hard to fix on a technological front.

To a rough approximation, DNS is about trusting the servers the DNS system is configured to talk to. Other servers are only "trusted" about the data they're known to be responsible for. There is no fatal flaw to that design, as long as the DNS servers can't be spoofed. As long as a given DNS server always initiates "conversations" with the trusted servers, and uses a sufficiently large random transaction ID, and that conversation is private, the data it gets back is perfectly reliable. (DNSSEC attempts to tackle the problem from a different direction, by making sure all reports from the trusted servers can be verified as really being from them, which allows you to give up all three of those requirements.)

In other news, water is wet... (0)

Anonymous Coward | about 6 years ago | (#24536831)

The effectiveness of the attack in the presence of port randomization is proportional to the number of tries you can make per second.

On GigE, port randomization has much less effect, simply because you are throwing that many more packets per second at the target server.

GigE (2, Interesting)

AndIWonderIfIWonder (718376) | about 6 years ago | (#24536845)

It seems that this only works so quickly because he had 2 machines connected to the server via GigE, which I would guess means most DNS servers can't be poisoned like this.

Re:GigE (1)

Tony Hoyle (11698) | about 6 years ago | (#24536959)

Given a setup like that you could poison just about any protocol unless it was using SSL... anything that has a two-way conversation expects replies, and you can inject packets into it by getting there 'first'.

TBH though given that setup I'd just respond to ARP requests for the router and intercept the entire traffic flow. DNS poisoning not required.

Re:GigE (1, Interesting)

Anonymous Coward | about 6 years ago | (#24537859)

Most people are failing to realize that with an attack window of 10 hours, the chances of detection, mitigation and prevention of repetition are a LOT higher for a skilled system administrator.

Anybody keeping an eye on their traffic will notice the immense spike in request volume coming from one IP address (or a pair of them), and can then act on it.

This goes back to a more global problem: it's not ONLY the DNS protocol and its implementations; it also still falls on administrators to perform traditional roles or to set up automated tools to handle this for them.

On a side note, it would probably serve the DNS server software authors to add features to detect port randomization attacks and categorize and log them: say, if x requests happen in y timeframe, or pattern z is matched against baseline w.

I'm safe, in my ADSL utopia (1)

caluml (551744) | about 6 years ago | (#24536857)

So, if you have a GigE lan, any trojaned machine can poison your DNS during one night...

People at home are safe though - that's the main thing. People on the local net at home are generally known people, with access to your house (WiFi excepted), and could probably find easier ways to steal your identity, capture keystrokes, etc. And you're safe from Internet people too - at the end of my 8Mb connection, I think I'd notice a Gb of traffic heading my way, to say nothing of it taking 125 times longer anyway.

Re:I'm safe, in my ADSL utopia (1)

AndIWonderIfIWonder (718376) | about 6 years ago | (#24536881)

So, if you have a GigE lan, any trojaned machine can poison your DNS during one night...

People at home are safe though - that's the main thing. People on the local net at home are generally known people, with access to your house (WiFi excepted), and could probably find easier ways to steal your identity, capture keystrokes, etc. And you're safe from Internet people too - at the end of my 8Mb connection, I think I'd notice a Gb of traffic heading my way, to say nothing of it taking 125 times longer anyway.

Unfortunately most people on ADSL don't run their own name server, and instead use their ISP's nameserver. Hopefully not too many people will have GigE access to the ISP's nameserver, so this attack probably won't work anyway.

Re:I'm safe, in my ADSL utopia (1)

Lennie (16154) | about 6 years ago | (#24536923)

A server at a hosting-provider might be a nice place for this exploit. But everyone in the know, already knew this was a possible target.

Re:I'm safe, in my ADSL utopia (0, Troll)

Tony Hoyle (11698) | about 6 years ago | (#24536975)

Compared to ARP spoofing which is much simpler and gains you the entire traffic flow to an IP address? I wouldn't bother with a DNS attack to be honest. Any attack that requires you be on the local network is uninteresting just because there are so many damned ways to do it already.

Re:I'm safe, in my ADSL utopia (2)

Lennie (16154) | about 6 years ago | (#24538579)

It depends. ARP spoofing is confined to the broadcast domain (possibly a VLAN), while a DNS server is probably used by a much broader 'audience'.

Where it could work... (0)

Anonymous Coward | about 6 years ago | (#24536929)

Some universities provide students (and others) on their network extremely fast links. Think of the fun of hijacking your fellow students' DNS queries.

Re:I'm safe, in my ADSL utopia (1)

uku (935510) | about 6 years ago | (#24538863)

Unfortunately most people on ADSL don't run their own name server, and instead use their ISPs nameserver.

I wouldn't be so sure about that. My DSL provider (Qwest) gave me an Actiontec router / firewall. Its internal DHCP server hands out the box's LAN IP 192.168.0.1 to clients as a DNS server. It appears to be a caching forwarder.

So most DSL users with these boxes are running their own DNS server; they just don't know it.

BTW, the Actiontec runs Busybox under the hood. I thought about hacking it but decided it wasn't worth the trouble and instead replaced its DHCP, DNS and WLAN AP functions with a Linksys WRT54GL running DD-WRT, leaving the Actiontec as just a gateway. This solved a major performance problem on the LAN.

Re:I'm safe, in my ADSL utopia (1)

NetCow (117556) | about 6 years ago | (#24536925)

No, people at home are not safe, since their ISP's nameservers don't run at people's homes... DNS servers typically reside on high-bandwidth links.

Re:I'm safe, in my ADSL utopia (0)

Anonymous Coward | about 6 years ago | (#24536971)

Who actually uses their ISP's nameserver? Mine at least sucks hard (Green Mountain Access). Not saying OpenDNS is any safer, but when you have it, why use the one from your ISP?

Re:I'm safe, in my ADSL utopia (0)

Anonymous Coward | about 6 years ago | (#24539035)

In some cases, DNS to non-ISP servers is blocked. This is typically excused with "oh noes think of the children" child porn scaremongering (no, really) - in some jurisdictions, they use the DNS lookup logs as "evidence" (yes, you and I know how easy it would be to frame someone. The authorities like it that way), so all DNS except to the ISP's logged DNS is blocked.

Re:I'm safe, in my ADSL utopia (1)

Firehed (942385) | about 6 years ago | (#24536935)

Your local machine's cache is probably safe, yes (or reasonably so). What about your ISP's, which in all likelihood you're using when you don't have a local cache of the required information? Not only are you vulnerable to that, but so is everyone else using your ISP.

Isn't it a birthday attack? (2, Interesting)

Timothy Brownawell (627747) | about 6 years ago | (#24536883)

Why can't the resolvers make sure to never have multiple outstanding requests that could potentially give the same answer? Check the cache for known zone boundaries and implied non-boundaries (if the server for foo.com also answers requests for x.y.z.foo.com, there's no zone boundary in between), and only send one request crossing a particular potential boundary at a time to a particular server (like a.c.foo.com and b.c.foo.com, we don't know yet that .c.foo.com is answered by the same server as .foo.com, since nothing under that domain is in the cache).
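For reference, a quick sketch of how the forgery success probability grows with the number of equivalent outstanding queries; a plain 16-bit ID space and an independent-trial approximation are assumed for simplicity:

# Probability that at least one of `forged_replies` forged packets matches
# at least one of `outstanding_queries` pending queries, given `n_ids`
# possible IDs (approximation: each pending query holds a distinct ID).
def poison_probability(n_ids, outstanding_queries, forged_replies):
    p_single_miss = 1 - outstanding_queries / n_ids
    return 1 - p_single_miss ** forged_replies

ids = 2**16  # transaction ID only, no port randomization
print(poison_probability(ids, 1, 700))     # one query at a time: ~1%
print(poison_probability(ids, 200, 700))   # 200 parallel equivalent queries: ~88%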

Re:Isn't it a birthday attack? (1)

Tony Hoyle (11698) | about 6 years ago | (#24537037)

They do, mostly. There's a certain amount of caching built in at all levels these days (which is why for example on windows you have to do ipconfig/flushdns sometimes if DHCP changes the address of a machine).

Re:Isn't it a birthday attack? (1)

boto (145530) | about 6 years ago | (#24537907)

If I understood correctly, it is not necessarily a matter of having multiple outstanding requests. You only need to make sure your (the attacker's) reply to a request gets to the caching server before the legitimate reply. The multiple requests for different names are needed only to make sure you can try again if your previous try didn't work.

But maybe your suggestion could make the attack slower, if you reduce the number of queries to the upstream server when you are getting flooded with requests for different names under the same zone.

Re:Isn't it a birthday attack? (0)

Anonymous Coward | about 6 years ago | (#24538545)

I agree with OP.... it seems like the birthday attack part could be mitigated by only having one outstanding request at a time... maybe it's just that this is extra complexity that none of the standard software implements...

Limit the bandwidth, compare notes (3, Insightful)

CustomDesigned (250089) | about 6 years ago | (#24536891)

The exploit depends on a GigE connection to the DNS server. So a caching server behind a T1 is going to take much longer to exploit. So running your own caching server on a T1, DSL, or cable is going to be more resistant than using the ISP DNS with a fat pipe.

If there is actually 1 GigE of DNS traffic at an ISP, they could distribute the requests to 100 bandwidth limited servers. Then the attack would only manage to poison one of the servers in 10 hours. Even more interesting would be if the 100 servers could compare notes to detect the poisoning.

Re:Limit the bandwidth, compare notes (3, Insightful)

Tony Hoyle (11698) | about 6 years ago | (#24536943)

A decent firewall could be trained to recognize an attack like this and take preventive action easily enough - to even get it to work you'd have to saturate the link with packets hoping to get a 'hit'. So you can do it on GigE in 10 hours. You can attack just about any connection-based system using similar methods, but you'd have to saturate the link and it'd get noticed... especially if you did it at GigE bandwidth for 10 hours!

Re:Limit the bandwidth, compare notes (1)

Timothy Brownawell (627747) | about 6 years ago | (#24536965)

What sort of preventative action? This already relies on the packets looking like they come from the real nameserver, so you can't just block them without cutting off large parts of the DNS hierarchy from your customers...

Re:Limit the bandwidth, compare notes (2, Insightful)

Tony Hoyle (11698) | about 6 years ago | (#24537003)

The packets won't look like that though, will they? At that bandwidth they'd have to be on the local network, so they'd be coming from a different source MAC (and that's pretty much the only way to do this attack anyway - any ISP worth the money will drop packets with fake source addresses on the floor before they get routed externally, so it'd have to be an internal attack).

Worst case you shut down the DNS server and everyone drops to the backups until the attacker is traced and shut down.

Re:Limit the bandwidth, compare notes (1)

Timothy Brownawell (627747) | about 6 years ago | (#24537193)

at that bandwidth they'd have to be on the local network

Or be a medium-large botnet.

(and that's pretty much the only way to do this attack anyway - any ISP worth the money will drop any packets with fake source addresses on the floor before they get routed externally, so it'd have to be an internal attack)

So why was the original problem considered to be such a big deal? Any DNS poisoning attack requires that you pretend to be the real DNS server, so if it's only possible from the local network why was that big coordinated patch worth the effort?

Re:Limit the bandwidth, compare notes (1)

geniusj (140174) | about 6 years ago | (#24537479)

There's a surprising number of providers that don't do egress source filtering. I definitely wouldn't rely on other people's security.

Re:Limit the bandwidth, compare notes (1)

GuldKalle (1065310) | about 6 years ago | (#24537041)

I'm no expert, but would asking twice make it ^2 harder to get a hit?

Re:Limit the bandwidth, compare notes (1)

electrostatic (1185487) | about 6 years ago | (#24539975)

I'm less of an expert, but I think you may be correct. The attacker "asks for aaaae3fcg.bankofamerica.com and also sends 100,000 responses to that query to that same recursive DNS server" (copying an AC's example, above). The attacker does not see the DNS server's UDP packet and consequently hopes a match happens with one of the 100,000 responses. Assume the probability of success is p. Under your suggestion where the server must send a second query, the probability of the attacker succeeding twice in a row becomes p^2.

But, if p is close to 1, say 0.9, then p squared is 81%, not too bad.

Being a non-expert about protocol details, a thought is that if the attacker's response is weird (technical term), then the DNS server's query should be repeated more than a few times. What is "weird" here? Expiration time, IP physical location change, ... But this might place too much demand on the server to be practical.

Maybe this is a better idea. DNS servers currently ignore responses with the wrong port number: they toss away all failures until they get a match. Furthermore, they also ignore all invalid responses that arrive AFTER a match. Example -- of the attacker's 100,000 responses 53,999 arrive before the hit and 46,000 after. So I propose that these failures be counted, both before a match and after a match. If either count hits a limit then respond accordingly. In particular, do not update the cache.
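A minimal Python sketch of that counting idea; the threshold and structure are invented for illustration, and this is not how any shipping DNS server behaves:

from collections import defaultdict

MISMATCH_LIMIT = 50   # arbitrary threshold for this sketch

class SuspiciousResponseCounter:
    """Counts forged-looking replies (wrong port or transaction ID) per query
    name, and refuses to cache an answer once the limit is exceeded."""
    def __init__(self):
        self.mismatches = defaultdict(int)

    def record_mismatch(self, qname):
        self.mismatches[qname] += 1

    def safe_to_cache(self, qname):
        return self.mismatches[qname] < MISMATCH_LIMIT

counter = SuspiciousResponseCounter()
for _ in range(53_999):                     # failures seen before the "hit"
    counter.record_mismatch('aaaae3fcg.bankofamerica.com')
print(counter.safe_to_cache('aaaae3fcg.bankofamerica.com'))  # False: don't cache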

Re:Limit the bandwidth, compare notes (0)

Anonymous Coward | about 6 years ago | (#24537173)

Perhaps. Someone I know, who knows the guy that originally discovered this, claims he has firewall rules that would protect an unpatched DNS server. However, that just leads to potential DoS attacks. If you want to shut down a website there are two ways: one is to DoS the server itself, the other is to DoS the biggest group of people that are going to use the server. While the latter is harder because there are more servers out there, if you hit the top 10 ISPs in a given region you would probably affect most users in that region. Spoofing UDP packets is trivial, and if you were to flood those servers, and they detected it and blocked that server, then real responses wouldn't get through either. If you rate limit you can still run into the same problem, although it would take a more constant attack to circumvent.

Firewalls are nice and I think they should be used, but this approach has side effects, when used the way most people will use it, that can cause problems equal to if not greater than the ones it is trying to protect against.

Re:Limit the bandwidth, compare notes (1, Interesting)

Anonymous Coward | about 6 years ago | (#24537051)

Or, when updating your cache, compare with your cached copy, and if it differs, ask again to double-check.

Double check (1)

CustomDesigned (250089) | about 6 years ago | (#24537615)

Or, when updating your cache, compare with your cached copy, and if it differs, ask again to double-check.

That is the best idea I've heard yet.

Re:Limit the bandwidth, compare notes (0)

Anonymous Coward | about 6 years ago | (#24537395)

Distributing the requests across 100 servers kind of defeats the purpose of caching. At that point you might as well have cached entries expire after 10 hours or less...

Re:Limit the bandwidth, compare notes (0)

Anonymous Coward | about 6 years ago | (#24537433)

Double-checking might not be a bad idea, especially at the resolver level-- make sure your upstream DNS is not poisoned by checking against another DNS. This doesn't solve the problem, of course, but it makes an exploit a lot harder. The obvious downside is a doubling of traffic, which will be an issue if you have a DNS machine or line near capacity.

Re:Limit the bandwidth, compare notes (1)

POTSandPANS (781918) | about 6 years ago | (#24539545)

From what I read of the original attack, the poison response has to make it back to the server before the correct response. Unless the correct response takes hours to arrive (instead of milliseconds), it seems unlikely that poisoning could happen.

maybe I'm not understanding the original attack correctly?

Re:Limit the bandwidth, compare notes (0)

Anonymous Coward | about 6 years ago | (#24539683)

And a DNS server behind a T1 or something similar is a *very* easy DDoS target. Maybe for small networks where you trust your users you can get away with something like this, but for any decent-sized network this will not work.

GigE connections are (and will be) fairly common (1)

shanec (130923) | about 6 years ago | (#24539745)

I'd just like to point out that GigE connections are becoming more and more available. Most corporate networks that are being built or upgraded are getting GigE to the desktop. Granted, most ports are set to 100 Mbit, but all it takes is one not-so-tech-savvy manager who demands, er... "requires" GigE be turned on for his desktop, and then gets infected by spyware or some random virus.

The rest of the story will be blamed on the DNS administrator.

Gigabit link? (1)

ivoras (455934) | about 6 years ago | (#24536993)

So, the Internet at large is safe (at least as safe as before) until most computers are connected with gigabit links?

Re:Gigabit link? (1)

Tony Hoyle (11698) | about 6 years ago | (#24537031)

The internet at large is safe until either:

1. Everyone is connected by a gigabit cable to a common nameserver, and the admin of the nameserver is too stupid to realize that their DNS being saturated with bogus packets at GigE speeds for 10 hours is not normal.
2. Both ISPs and routers for some reason decide to stop filtering source addresses, so that such an attack is possible without being directly connected.

Re:Gigabit link? (1)

Antibozo (410516) | about 6 years ago | (#24539243)

The internet at large is safe until either:

1. Everyone is connected by a gigabit cable to a common nameserver, and the admin of the nameserver is too stupid to realize that their DNS being saturated with bogus packets at GigE speeds for 10 hours is not normal.

2. Both ISPs and routers for some reason decide to stop filtering source addresses, so that such an attack is possible without being directly connected.

3. Attackers find a way to remotely deploy and control malware on hundreds of thousands of computers in some sort of "malnet" or "botweb" or something.

Oh, wait...

That's a lot of bandwidth (1)

TheLink (130905) | about 6 years ago | (#24537073)

Good thing my ISP (TM Net/Streamyx) sucks eh? They're not even giving me the 512kbps I paid for.

Let's see 10 hours * 1Gbps / 512kbps = 2.22 years.

If you have a 10Mbps link that makes it 41 days.
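That scaling is just linear in link speed; a tiny Python sketch (the 10-hour GigE baseline is from the article, the link speeds are examples):

baseline_hours, baseline_bps = 10, 1_000_000_000   # ~10 hours at GigE, per the article

def attack_hours(link_bps):
    # Attack time scales inversely with how many forged packets/sec you can push.
    return baseline_hours * baseline_bps / link_bps

print(attack_hours(512_000) / 24 / 365)   # ~2.2 years on a 512 kbps link
print(attack_hours(10_000_000) / 24)      # ~41 days on a 10 Mbps link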

I think I would have made a DNS request and gotten the valid DNS reply into my cache before the 2.2 years are up. Or my connection would have gone down and I'd have gotten a different IP by then. Thanks to TM Net for protecting me from such attacks ;).

Either that or I'll be safe because the site would have DoSed me off the net with the flood of udp packets at 1Gbps...

BTW I'm using djbdns (any bets on whether my ISP would have patched their nameservers already? ;) ). So I'm not sure if the attack might work as well.

set cache timeout significantly less than attack? (1, Informative)

mlksys (93950) | about 6 years ago | (#24537113)

I am not an expert on the problem.

Is it possible that configuring the cache timeout on the DNS server to be significantly less than the non-trivial attack time might avoid the problem?

I assume that once the cache is poisoned, the poison does eventually time out of the cache unless the attack is continuous?

I guess it's time... (1)

Spy der Mann (805235) | about 6 years ago | (#24537115)

for DNSv2.
(whatever that means)

Re:I guess it's time... for Secure DNS (4, Insightful)

mibh (920980) | about 6 years ago | (#24537509)

It's long past time for Secure DNS, which is a combination of TSIG+TKEY, SIG(0), and DNSSEC. End to end crypto authentication. Protects not just against off-path spoofed-source attacks like Kaminsky's, but also on-disk attacks against zone files, and provider-in-the-middle attackers who remap your NXDOMAIN responses into pointers to their advertising servers.

Sadly, it's a year away even if everybody started now, and most people want to be last not first, so very few people have started, and some of those people are saying "why bother, if it's not an instant solution there's no point to it, let's scrap the design and start over." (Had it not taken 12 years to get Secure DNS defined, then the prospect of doubling that time would not daunt me as much as it does.)

So, everybody please start already. NSD and Unbound from NLNetLabs support DNSSEC. So does BIND, obviously. Sign your zones, and if your registrar won't accept keys from you, send them to a DLV registry [isc.org] while you wait for that. Turn on DNSSEC validation in your recursive nameservers. Write a letter to your congresscritter saying "please instruct US-DoC to give ICANN permission to sign the root DNS zone." In the time it would take for this Russian physicist's attack to work over your 512K DSL line (2.2 years, I heard?) we could completely secure the DNS, or at least the parts of DNS whose operators gave a rat's ass about security (which is not the majority, but it certainly includes your server, right?)

Re:I guess it's time... for Secure DNS (0)

Anonymous Coward | about 6 years ago | (#24537723)

Thanks for beating me to the punch. As stated in the CERT KB article at http://www.kb.cert.org/CERT_WEB%5Cservices%5Cvul-notes.nsf/id/800113, "It is important to note that without changes to the DNS protocol, such as those that the DNS Security Extensions (DNSSEC) introduce, these mitigations cannot completely prevent cache poisoning." So DNSSEC is a viable solution.

Additionally, the folks at Emergingthreats.net are re-writing the Snort DNS preprocessor to detect DNS cache poisoning attacks on cryptographically-weak session IDs. http://www.emergingthreats.net/content/view/90/1/

Re:I guess it's time... for Secure DNS (1)

Antibozo (410516) | about 6 years ago | (#24539173)

Sign your zones, and if your registrar won't accept keys from you, send them to a DLV registry while you wait for that.

People who are interested in signing their zones may want to read up on how things work at www.dnssec.net [dnssec.net] and take a look at the Sparta tools [dnssec-tools.org] . It's really not difficult, and there is a lot of information out there.

Not surprised (1)

Todd Knarr (15451) | about 6 years ago | (#24537375)

I'm not surprised. Port randomization doesn't make the attack impossible, just harder. It doesn't eliminate the birthday attack, it just increases the space you have to blanket to generate a collision. The only real fix for the attack is DNSSEC, allowing the software to reject forged responses completely. Short of that, I can only think of two more things that'd help:

  • Ignore additional data in responses, or at least additional data not responsive to the query itself. This goes beyond bailiwick checking. It means, on non-delegation responses, ignoring all additional data that's not for the exact same name as the query was for. On delegation responses, only A records for the names given in NS responses are looked at, nothing more (yes, that still leaves a hole for the attack to work on delegation responses).
  • Implement a request/response queue in the software. When a new request arrives, if it matches a request already in the queue it's attached to that one and no new outgoing request is sent. Requests are split so that partial matches result in new requests only for the portions that don't already have a request pending. When received, responses are attached to their respective requests and the requests are flagged as complete. After a short interval, completed requests have their responses collected and sent back to the querier. If multiple responses arrive for the same request, all responses to that request are dropped and a new query is generated and sent out. The headache here is appropriate safeguards against a DoS attack. (A rough sketch of such a queue follows this comment.)

These don't completely close down the attack, but I can't think of anything else that makes it harder short of having responses cryptographically signed and the signature verified as coming from the expected source. We need to start pounding on domain owners to implement DNSSEC. Yes, that's work. Yes, it's necessary.
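A rough Python sketch of the request/response queue idea from the second bullet; the names and the retry policy are invented for illustration:

class QueryQueue:
    """Coalesces identical outstanding queries and distrusts any query that
    receives more than one answer (a sign someone is racing the real server)."""
    def __init__(self, send_query):
        self.pending = {}            # (qname, qtype) -> list of waiting clients
        self.answers = {}            # (qname, qtype) -> list of responses seen
        self.send_query = send_query

    def ask(self, qname, qtype, client):
        key = (qname, qtype)
        if key in self.pending:
            self.pending[key].append(client)   # piggyback, no new packet
        else:
            self.pending[key] = [client]
            self.answers[key] = []
            self.send_query(qname, qtype)

    def on_response(self, qname, qtype, response):
        key = (qname, qtype)
        if key in self.pending:
            self.answers[key].append(response)

    def flush(self, qname, qtype):
        """Called after a short settling interval."""
        key = (qname, qtype)
        responses = self.answers.pop(key, [])
        clients = self.pending.pop(key, [])
        if len(responses) == 1:
            for deliver in clients:
                deliver(responses[0])          # exactly one answer: deliver it
        else:
            # Zero or multiple answers: drop everything and re-query.
            self.pending[key] = clients
            self.answers[key] = []
            self.send_query(qname, qtype)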

DJB's take . . . (5, Informative)

geniusj (140174) | about 6 years ago | (#24537529)

For those that haven't seen it, djb threw up some information regarding this problem and various options a few years ago.

http://cr.yp.to/djbdns/forgery.html [cr.yp.to]

Re:DJB's take . . . (2, Informative)

vic-traill (1038742) | about 6 years ago | (#24539937)

For those that haven't seen it, djb threw up some information regarding this problem and various options a few years ago.

http://cr.yp.to/djbdns/forgery.html [cr.yp.to]

I went and had a look at the thread (dated from Jul 30 2001) referenced in the excerpt at djb's site (follow the posting link in the URL above). As far as I can tell, Jim Reid was pooh-poohing the usefulness of port randomization, the approach used as an emergency backstop against Kaminsky's attack just over seven years later. To be fair, Reid was doing so in the context of advocating for Secure DNS.

djb drives people crazy (particularly the BIND folks), but he's someone to listen to - is it the case, as I understand from reading through these docs, that in 2001, djb's dnscache performed the port randomization that everyone's been scrambling to deploy over the past several weeks for other implementations, including BIND?

Or am I mis-interpreting here?

32 bit guess vs. 16 bit guess. (1)

Animats (122034) | about 6 years ago | (#24537623)

Right. Before the fix, you had to guess a 16-bit number. After the fix, you have to guess a 32-bit number. About 10 hours on a gigabit Ethernet should let you try the necessary 4 billion packets. This isn't an attack one could run against a client out on a DSL line, but if you were able to take over one machine in a colo, you might be able, over time, to get traffic for other machines directed to yours.

If DNS used a 64-bit or 128 bit number to tie the response to the request, and the DNS client had a crypto-grade random number generator, guessing would be hopeless. An intermediate technical fix would be to define a "DNSv2" message format, with a 128-bit random message ID and a rule that no DNSv2 client will accept an answer to a question it didn't ask. (Some of the attacks depend upon an attacker forcing a query for "1245.example.com", which won't be found, and a phony DNS blindly blasting replies with random IDs for "www.example.com", which is accepted because it's in the same "bailiwick".) If everybody other than desktop clients went to an improved DNS, and desktop clients talked only to an improved server, we'd be reasonably OK.
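Roughly how the average guessing time grows with the identifier width, assuming a fixed forgery rate; the packets-per-second figure is an invented example:

forged_packets_per_second = 120_000   # assumed sustained rate on a GigE link

for bits in (16, 32, 64, 128):
    expected_packets = 2 ** bits / 2          # on average, half the space
    seconds = expected_packets / forged_packets_per_second
    print(f"{bits:3d}-bit ID: ~{seconds / 3600:.2e} hours to guess on average")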

DNSSEC, which has a whole signed-certificate chain like SSL, may be a way out of this, but it's much more complex than the existing DNS.

Re:32 bit guess vs. 16 bit guess. (2, Interesting)

LarsG (31008) | about 6 years ago | (#24537979)

This isn't an attack one could run against a client out on a DSL line, but if you were able to take over one machine in a colo, you might be able, over time, to get traffic for other machines directed to yours.

True. On the other hand, if you are on the same network segment then there are many other options available to you if you want to do evil. Blasting about 4 terabytes (1 Gb/s for 10H) at a DNS server isn't exactly a quiet attack, so if you intend to stay below the radar you're probably a lot better off trying some good old arp spoofing or tcp hijacking instead.

Egress filtering? (0)

Anonymous Coward | about 6 years ago | (#24537867)

Perhaps now is a good time for ISPs everywhere to start doing egress filtering? There is NEVER a good reason for a forged IP address to leave a network.

More sites need to implement DNSSEC, (1)

sega01 (937364) | about 6 years ago | (#24538413)

Especially the TLDs. Very few TLDs have DNSSEC, which would make this attack practically impossible. IPv6 would allow for more source addresses as well, which is discussed in the link below. If you run a recursive resolver, I highly advise using Unbound. It is the most secure resolver I know of and has an incredible amount of thought put into it (without BIND's bloat). It has many provisions for DNSSEC-less zones. See: http://www.unbound.net/documentation/patch_announce102.html [unbound.net]

Options I see (1)

dayton967 (647640) | about 6 years ago | (#24538791)

I see several options with the potential to limit the security risks:

1) DNS moves to TCP only; the downside is that DNS becomes slower.
2) A simple solution might be to modify the resolvers so they resolve from multiple sources and compare the responses. These could be from the same server or from companion servers in the same network (see the sketch below).
3) Add a token to the DNS query. This could be done via a hash of the hostname and a salt.
4) Single-level recursion: each level only requests from an upstream DNS server. This allows for limiting access, but requires a complete overhaul of the DNS infrastructure.
5) DNSSEC.
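A sketch of option 2 using dnspython (assuming dnspython 2.x and its resolve() API; the resolver addresses are just examples, and a real implementation would also need to handle round-robin answers, TTL skew, and legitimate split-horizon differences):

import dns.resolver

def cross_check(qname, servers=('8.8.8.8', '9.9.9.9')):
    """Ask several independent resolvers and only trust an answer they agree on."""
    answers = []
    for server in servers:
        r = dns.resolver.Resolver(configure=False)
        r.nameservers = [server]
        ips = sorted(rr.address for rr in r.resolve(qname, 'A'))
        answers.append(ips)
    if all(a == answers[0] for a in answers):
        return answers[0]
    raise RuntimeError(f"resolvers disagree about {qname}: {answers}")

print(cross_check('example.com'))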

Re:Options I see (1)

essinger (781940) | about 6 years ago | (#24539995)

So how many of those could be implemented without breaking a significant portion of the internet infrastructure that depends on DNS working the old way?

GigE pipe? (1)

POTSandPANS (781918) | about 6 years ago | (#24539293)

If this happened to my DNS servers, I don't think I'd even notice it as poisoning right away. I'd likely just assume it was a DoS attack and deal with it accordingly. I also have my DNS servers on 100 Meg ports so unless I were on vacation, I'd likely notice it long before the cache got poisoned.