
How CDNs and Alternative DNS Services Combine For Higher Latency

Soulskill posted more than 4 years ago | from the taking-the-bad-with-the-good dept.

Networking 187

The_PHP_Jedi writes "Alternative DNS services, such as OpenDNS and Google Public DNS, are used to bypass the sluggishness often associated with local ISP DNS servers. However, as more websites, particularly smaller ones, use content distribution networks via embedded ads, widgets, and other assets, the effectiveness of non-ISP DNS servers may be undermined. Why? Because CDNs rely on the location of a user's DNS server to determine the closest server with the hosted content. Sajal Kayan published a series of test results that demonstrate the difference, and also provided the Python script used, so you can test which DNS service is most effective for your own Internet connection."
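Kayan's script isn't reproduced here, but the idea can be sketched with stdlib Python alone: hand-build a minimal DNS A query, send it to each resolver directly over UDP, and time the round trip. The resolver addresses below are the well-known public ones; the test hostname is arbitrary.

```python
import socket
import struct
import time

def build_query(hostname, qid=0x1234):
    """Build a minimal DNS query packet asking for an A record (RFC 1035)."""
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)  # RD=1, one question
    qname = b"".join(bytes([len(label)]) + label.encode("ascii")
                     for label in hostname.split("."))
    question = qname + b"\x00" + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question

def query_time_ms(resolver_ip, hostname, timeout=2.0):
    """Send the query straight to one resolver and return the round trip in ms."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        start = time.monotonic()
        sock.sendto(build_query(hostname), (resolver_ip, 53))
        sock.recv(512)
        return (time.monotonic() - start) * 1000

if __name__ == "__main__":
    resolvers = {"Google": "8.8.8.8", "OpenDNS": "208.67.222.222"}
    for name, ip in resolvers.items():
        print(f"{name}: {query_time_ms(ip, 'www.example.com'):.1f} ms")
```

Note this measures only resolver latency; as the article stresses, the answer you get back, and hence the distance to the CDN edge it points you at, matters more than the lookup time itself.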


Leave Canada Alone (4, Funny)

ironicsky (569792) | more than 4 years ago | (#32388814)

Why drag us lovely CDNs into this?

Re:Leave Canada Alone (0)

Anonymous Coward | more than 4 years ago | (#32388954)

I find it kind of funny that even Slashdotters suggest using OpenDNS instead of your ISP's DNS server whenever someone complains about NXDOMAIN advertisement serving.

I've seen this in every discussion. Some ISP starts serving advertisements on non-existent domains, and people here jump in to recommend OpenDNS, even though OpenDNS serves the exact same kind of ads by default and is also run for profit. Your ISP is already billing you for the Internet connection, so NXDOMAIN hijacking is just extra business for it; OpenDNS's business is based solely on it.

Just because the name has "Open" in it doesn't mean it's good.


Re:Leave Canada Alone (2, Informative)

PNutts (199112) | more than 4 years ago | (#32389094)

Why wouldn't I use OpenDNS? They may be working for profit, but it is free to individuals. Also, I disagree that they are the "exact same ads" when they consist of a few text links, and I trust them more than Comcast. But more importantly, even assuming you were correct that they are the same ads, the other benefits far outweigh this nit. The ability to whitelist/blacklist domains and block them by category is more than worth the price of admission, which again is free. Then throw in usage reports... To ignore all that because of the "exact same ads" is shortsighted. The company I work for started using this and the incidence of crapware has gone way down. I've set it up on all my family's computers and recommend it to others.

Re:Leave Canada Alone (4, Insightful)

Anonymous Coward | more than 4 years ago | (#32389614)

For one, because they're deliberately abusing the "Open" moniker. They also do not provide an ad-free DNS service, unlike, for example, Google's DNS servers. Furthermore, they redirect Google queries through OpenDNS servers. Last but not least, to change the configuration (e.g. the Google redirection or the NXDOMAIN hijacking), you have to create an account and always log in. For DNS. Are you kidding me?

Re:Leave Canada Alone (3, Informative)

Sillygates (967271) | more than 4 years ago | (#32389976)

This still violates the DNS specification, and there is no way to effectively turn it off. Why is this a problem? Please see: [] .

For this reason I use Internet2's, Level 3's ( -, and now Google's DNS servers.

REWARD IS OFFERED (-1, Offtopic)

Anonymous Coward | more than 4 years ago | (#32388858)

Have You Seen This Pair? Call 800-555-1212 REWARD []

Parent is NSFW (1, Informative)

Anonymous Coward | more than 4 years ago | (#32388870)

"Pair" in question is a pair of nipples, apparently.

Re:Parent is NSFW (0)

Anonymous Coward | more than 4 years ago | (#32388896)

Biggest pair I've ever seen. Being a slashdotter I have seen plenty.


Anonymous Coward | more than 4 years ago | (#32389032)

I would love to fuck her sweet pussy.


Anonymous Coward | more than 4 years ago | (#32389156)

i bet she actually has a penis


Anonymous Coward | more than 4 years ago | (#32389332)

i hope she does


Anonymous Coward | more than 4 years ago | (#32389478)

I'm glad that between 'tits or gtfo' she made the right choice.

Poor application design (3, Insightful)

Mondo1287 (622491) | more than 4 years ago | (#32388888)

Maybe I'm missing something here, but shouldn't it be the application's responsibility to provide a geographically correct host name to the client, not the responsibility of DNS? It seems like poor application design to rely on DNS for this. Your app should determine the host based on the IP of the client, not give the client an arbitrary host name and then rely on DNS to provide your geographically correct server.

Re:Poor application design (3, Informative)

jcinnamond (463196) | more than 4 years ago | (#32388926)

I think you're missing the point. Geographically aware DNS is used to send you to your nearest deployment of an application. Deciding after you've arrived is too late.

Re:Poor application design (2, Interesting)

Zerth (26112) | more than 4 years ago | (#32389400)

Like you couldn't redirect on GET instead of serving up the app?

Re:Poor application design (1)

mother_reincarnated (1099781) | more than 4 years ago | (#32389640)

Sure you can! If you don't mind effing up the URL bar and possibly generating certificate warnings.

It's neither a clean nor a transparent way to do it.

Re:Poor application design (2, Interesting)

Trepidity (597) | more than 4 years ago | (#32389542)

There are various tricks you can use to decide later, if you have significant content other than the raw HTML page itself, though they do require some server processing. The initial HTML request will be routed based on DNS, but once the user has hit your servers, you have their IP, so you can rewrite the URLs of embedded content / AJAX requests / whatever, so that they hit a geographically nearby server.
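A sketch of that URL-rewriting trick. The network-to-host mapping here is entirely made up for illustration; a real deployment would consult a GeoIP database rather than hard-coded prefixes.

```python
import ipaddress

# Hypothetical mapping from client networks to their nearest asset host.
EDGE_HOSTS = {
    ipaddress.ip_network("203.0.113.0/24"): "static-ap.example-cdn.net",
    ipaddress.ip_network("198.51.100.0/24"): "static-eu.example-cdn.net",
}
DEFAULT_HOST = "static-us.example-cdn.net"

def asset_url(client_ip, path):
    """Rewrite an embedded-asset URL to target the edge nearest the client."""
    addr = ipaddress.ip_address(client_ip)
    for net, host in EDGE_HOSTS.items():
        if addr in net:
            return f"https://{host}{path}"
    return f"https://{DEFAULT_HOST}{path}"
```

The initial page is still served from wherever DNS sent the browser, but every image, script, and AJAX URL baked into it now points at a nearby edge.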

Re:Poor application design (0)

Anonymous Coward | more than 4 years ago | (#32389234)

Yes, you are. You'd be surprised what can be achieved by clever use of DNS. For example, you can also use it to reduce spam.

Re:Poor application design (0)

Anonymous Coward | more than 4 years ago | (#32389350)

There is a better way to do all of this anyway: Have OpenDNS use anycast and only provide a couple of IP addresses for their DNS servers. Anycast ensures that you get the nearest one, and then CDNs can do the same trick with OpenDNS as they do with your ISP.

Block? (0, Flamebait)

betterunixthanunix (980855) | more than 4 years ago | (#32388894)

Perhaps it is time to block these CDN "services?"

Do you even know what a CDN is? (2, Insightful)

Anonymous Coward | more than 4 years ago | (#32388932)

Yeah, go ahead and block them. Try it. Do you know what happens? Most of the web sites you use just won't fucking work. This is especially true with so many web sites these days serving up their images, JavaScript scripts and stylesheets via a CDN.

Re:Do you even know what a CDN is? (1)

betterunixthanunix (980855) | more than 4 years ago | (#32388974)

Sounds like the pages will load even faster? I already block Javascript, Flash, and other trendy web technologies, and to be honest, if a webpage looks less "pretty" because I also blocked images and stylesheets, I can deal with it. If the images are really that important, then I can just load them as needed.

Re:Do you even know what a CDN is? (0)

Anonymous Coward | more than 4 years ago | (#32389044)

So normally you browse with automatic image loading turned off, do you? Even better, use Links! No, no! Much better: don't use the Internet at all!

Re:Do you even know what a CDN is? (2, Interesting)

betterunixthanunix (980855) | more than 4 years ago | (#32389070)

  1. The Web is not the be-all and end-all of the Internet
  2. Browsing without autoloading images is not nearly as bad as you make it out to be
  3. Most of what I go on the web for is news (where the text is usually more important) and journal articles (which are distributed as PDFs)

As a case in point, Slashdot is perfectly fine without images or Javascript (as long as you request Javascript-free pages, which are readily delivered).

You're a man after my own heart! apk (-1, Troll)

Anonymous Coward | more than 4 years ago | (#32389154)

"The Web is not the be-all and end-all of the Internet
Browsing without autoloading images is not nearly as bad as you make it out to be
Most of what I go on the web for is news (where the text is usually more important) and journal articles (which are distributed as PDFs)

As a case in point, Slashdot is perfectly fine without images or Javascript (as long as you request Javascript-free pages, which are readily delivered)." - by betterunixthanunix (980855) on Saturday May 29, @11:23AM (#32389070)

Additionally, per my subject above: based on what you stated, you probably rarely, IF EVER, see malware (and that's while running Windows, too, the most attacked OS there is)!

(I know I don't, and haven't for easily more than a decade++, and neither do the users I have "turned on" to the very points you enumerated above.)

Most of the malware infestations out there nowadays get onto users' systems from bogusly malscripted websites and/or bogusly scripted HTML emails (as well as from users themselves, more on that below, and what they download too).

This guide covers what you speak of & I expound upon here, and implements what's largely been called the concept of "layered security" for users of modern Windows NT-based Operating Systems (2000/XP/Server 2003, & even VISTA/Server 2008/Windows 7 too):


HOW TO SECURE Windows 2000/XP/Server 2003, & even VISTA/Windows 7 (+ make it "fun-to-do" via CIS Tool Guidance & beyond): []


It works, & is based on the concept of what many computer security folks the past few years have been calling "LAYERED SECURITY"...


---- []

"the use of the hosts file has worked for me in many ways. for one it stops ad banners, it helps speed up your computer as well. if you need more proof i am writing to you on a 400 hertz computer and i run with ease. i do not get 200++ viruses and spy ware a month as i use to. now i am lucky if i get 1 or 2 viruses a month. if you want my opinion if you stick to what APK says in his article about securing your computer then you will be safe and should not get any viruses or spy ware, but if you do get hit with viruses and spy ware then it will your own fault. keep up the good fight APK." - Kings Joker, user of my guide @ THE PLANET

AND []

"I recently, months ago when you finally got this guide done, had authorization to try this on simple work station for kids. My client, who paid me an ungodly amount of money to do this, has been PROBLEM FREE FOR MONTHS! I haven't even had a follow up call which is unusual." - THRONKA, user of my guide @ XTremePcCentral


"APK, thanks for such a great guide. This would, and should, be an inspiration to such security measures. Also, the pc that has "tweaks": IS STILL GOING! NO PROBLEMS!" - THRONKA, user of my guide @ XTremePcCentral

AND []

"Its 2009 - still trouble free! I was told last week by a co worker who does active directory administration, and he said I was doing overkill. I told him yes, but I just eliminated the half life in windows that you usually get. He said good point. So from 2008 till 2009. No speed decreases, its been to a lan party, moved around in a move, and it still NEVER has had the OS reinstalled besides the fact I imaged the drive over in 2008. Great stuff! My client STILL Hasn't called me back in regards to that one machine to get it locked down for the kid. I am glad it worked and I am sure her wallet is appreciated too now that it works. Speaking of which, I need to call her to see if I can get some leads. APK - I will say it again, the guide is FANTASTIC! Its made my PC experience much easier. Sandboxing was great. Getting my host file updated, setting services to system service, rather than system local. (except AVG updater, needed system local)" - THRONKA, user of my guide @ XTremePcCentral


(Those results are only a SMALL SAMPLING TOO, mind you - I can produce more such results, upon request, from other users & sites online)


Human beings are not 'disciplined' about the indiscriminate use of javascript (the main "harbinger of doom" out there today online, and anyone can verify that much simply by visiting a security-oriented website such as SECUNIA.COM or SECURITYFOCUS.COM and seeing their stats, with 90% or better of infections caused by javascript misuse), OR about what they download, for example... Kings Joker above tends to "2nd that motion" (& there is NOTHING I can do about that! Per Dr. Manhattan of "The Watchmen", ala -> "I can change almost anything, but I can't change human nature").


P.S.=> One thing I can guarantee folks, per Kings Joker's testimonial above alone: faster speeds online by blocking out adbanners & scripts served up from CDNs. This is certain, I can assure you, as is less malware infestation, because adbanners have been found many times over the past 5++ yrs. online with malware code in them (my guide has a few concrete & easily verified examples thereof in its content in that regard)... apk

Re:You're a man after my own heart! apk (2, Interesting)

LordLimecat (1103839) | more than 4 years ago | (#32389558)

Neither infected PDFs nor Java rely on javascript. An ad in a DIV will infect you just fine.

My guide covers your points, & questions... ap (0)

Anonymous Coward | more than 4 years ago | (#32390038)

"Neither infected PDFs nor Java rely on javascript." - by LordLimecat (1103839)
on Saturday May 29, @12:38PM (#32389558)

For years, using javascript inside Adobe Reader could get you infested with malware via the malscripting possible in it (but my guide has covered that for years, suggesting that IF you have to use Adobe Reader, you turn off its javascript to avoid this)...

So, are there other forms of attack in a PDF? Yes, so I have heard (only recently); however, I have yet to hear of one "in the wild" that is more than a proof of concept (can you show me differently?)...

I also cover JAVA in that guide too, iirc, and don't recommend using it on the public Internet, especially from sites you don't trust or, worse, don't even know (even though I am a JAVA coder myself, it has its 'downsides' too).


"An ad in a DIV will infect you just fine." - by LordLimecat (1103839)
on Saturday May 29, @12:38PM (#32389558)

How can it do so, if I cut off any possible use of Java, javascript, or other forms of browser scripting (VBScript etc.) and/or browser addons/plugins AND IFrames/Frames (Opera lets you do this easily)?

The DIV tag can be used with things like invisible IFrames, as noted above, and it also has script events (mouse click, mouse over, and keyboard). However, if you use a custom cascading style sheet that limits which tags are usable (Opera, IE, and FireFox let you do this, and such .css files are available online for this, along with PAC files), those routes are cut off too!

Those measures SHOULD stop that from being effective as well, in combination with a browser that lets you turn off IFrames plus scripting!

(Alongside not leaving scripting active in your browser on every site there is. Yes, use it where you have no choice and the page needs it to work, but then? Well, you take your chances, is all.)


P.S.=> Can you show me otherwise, as to the points in my closing paragraph above? This is a new take on the use of that tag to me, & the ONLY "dangerous points" of its use I know of I covered above (IFrames plus mouse/keyboard/click events), so please respond when you get a chance here... thanks! apk

Modded down, with no reasons why? LMAO... apk (-1, Troll)

Anonymous Coward | more than 4 years ago | (#32389776)

To whom it may concern:

You might do your "hit & run" with no reason why EFFETE mod down, but, it doesn't stand up very well vs. the testimonials to the effectiveness of the guide I put up (which uses a good deal of the ideas that betterunixthanunix basically also follows to a large extent).

That guide is based on the concept of what folks in security have been calling "layered security" the past few years now, and yes: IT WORKS!

(I've (and others) been using its points here and on the job for more than a decade++ online successfully with years of solid uninfested uptime on my OS setups (in addition to the small VERY PARTIAL ONLY list of testimonials to its effectiveness others are enjoying also after they applied its points to their own setups, those of their customers, and their friends & family also)).

POINT-BLANK/BOTTOM-LINE: In doing your "hit & run" down-mod, all you've done is show that you're the REAL troll here, and one that's quite obviously afraid to back up his own b.s. in his down-moderation of my post, versus what I could do in response to disprove whatever "so called points" you might have made... I've been there and done that too many times on these forums (especially this one).


P.S.=> And, you modded me down as "troll"? I wonder who the troll is here:

Myself showing others a way that has taken others to "the land of no more malware infestations", per their quoted testimonials of success using the techniques noted in my security-guide which from 2008 onwards it's:

1.) Gone well over 350,000 views worldwide

2.) It's consistently been rated 5/5 stars at 15/20 forums it's featured on

3.) It's usually made a "pinned/sticky thread" or "essential guide" on those technical forums also

4.) It's also usually in the "top most viewed" post (top 1-10 most times in fact) on the forums its featured on

5.) As well as it getting me paid by winning PCPitstop's monthly contest for best most useful postings they have (and years before it too, because I wrote the first & oldest such guide for Windows NT-based OS users from as far back as 1997 online (@ as their original "Article #1" 1998-2003, which started as a speedup guide, and grew into a security-oriented one also, to its present incarnation in the URL above))...


Or the blatant coward who modded me down without offering even a SINGLE SOLID REASON why, and ran (like a damned coward, because I'll debate AND DISPROVE any point you had to make and I'll end up on top, every time, with facts vs. your cowardly mod-downs)... apk

Re:Block? (1)

jon3k (691256) | more than 4 years ago | (#32389058)

Why is "services" in quotes?

Re:Block? (0)

Anonymous Coward | more than 4 years ago | (#32389120)


Re:Block? (1)

hedwards (940851) | more than 4 years ago | (#32389686)

Because he's using it non-literally. The CDNs don't provide a service to the people who have to put up with them, in most cases. It personally pisses me off to have to loosen up NoScript so thoroughly just for a website to be able to figure out whether I want to see the content. Worse, few sites, if any, actually disclose which third-party sites they allow to connect in that fashion, meaning you don't necessarily know whether a particular site is meant to be loading content. It's just an easy way for them to lose your information and then not be responsible for the consequences.

Re:Block? (1)

jon3k (691256) | more than 4 years ago | (#32389768)

"The CDNs don't provide a service to the people that have to put up with them in most cases."

I'm going to go out on a limb and say that in "most cases" CDNs do in fact provide a service of providing faster access to content. There are problems, like the one this story points out, but they definitely do provide a useful service.

Is this a problem? (1)

pjt33 (739471) | more than 4 years ago | (#32388902)

How many of the resources hosted by CDNs are things which we're already stopping with various ad blocking techniques, and how many are content we actually care about?

Re:Is this a problem? (4, Informative)

Burdell (228580) | more than 4 years ago | (#32389024)

It isn't just ads. For example, Microsoft, Apple, Symantec, and Red Hat use CDNs for distributing software updates (that's just a few companies I know of off the top of my head). Basically, CDNs keep the Internet working, saving server load at the source and bandwidth across the Internet and at the providers.

Re:Is this a problem? (1)

betterunixthanunix (980855) | more than 4 years ago | (#32389038)

Sounds like a system of mirrors to me. Now, the more relevant question: why is it using DNS to try to determine my location?

Re:Is this a problem? (1)

maxume (22995) | more than 4 years ago | (#32389102)

To transparently reduce latency and hops.

Re:Is this a problem? (3, Interesting)

betterunixthanunix (980855) | more than 4 years ago | (#32389118)

As opposed to using the client IP address?

Re:Is this a problem? (1)

maxume (22995) | more than 4 years ago | (#32389190)

That wouldn't be transparent to the application server.

Re:Is this a problem? (1)

Dahamma (304068) | more than 4 years ago | (#32389956)

It's not as much a matter of "using DNS" to determine your location, it's using the IP of the DNS server asking for the CDN's IP to determine your location - with the possibly faulty assumption that your DNS server is near you geographically.

The problem is the CDN's DNS only sees your DNS IP, not your computer's IP.

There are actually a couple solutions to this, though. The most efficient one is Google's proposed extension to DNS that basically forwards the IP of the host originally making the request.
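That proposal was later standardized as the EDNS Client Subnet option (RFC 7871). Structurally it is just an extra OPT pseudo-record in the additional section of the query; a stdlib-only sketch of building one (IPv4 only, for brevity):

```python
import struct

def ecs_opt_record(client_ip, source_prefix=24, udp_payload=4096):
    """Build an EDNS0 OPT pseudo-record carrying the client-subnet option
    (option code 8), with the address truncated to the advertised prefix."""
    octets = bytes(int(o) for o in client_ip.split("."))
    addr = octets[: (source_prefix + 7) // 8]           # only prefix bytes are sent
    opt_data = struct.pack(">HBB", 1, source_prefix, 0) + addr  # family 1 = IPv4
    rdata = struct.pack(">HH", 8, len(opt_data)) + opt_data     # code 8 = ECS
    # OPT RR: root name, TYPE=41, CLASS=UDP payload size, TTL=extended flags (0)
    return b"\x00" + struct.pack(">HHIH", 41, udp_payload, 0, len(rdata)) + rdata
```

Appended to a query (with ARCOUNT bumped to 1), this lets a forwarding resolver tell the CDN's authoritative server roughly where the real client sits, so the answer can be chosen for the user rather than for the resolver.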

Another solution is for the application server to notice that your IP is not the best for the CDN, and do an HTTP 302 redirect to the correct one. That adds a bit more latency to the original request, so it is best for larger files (like streaming video). Plus, it only works for HTTP.
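A minimal sketch of that redirect approach, with made-up edge host names and IP prefixes; a real origin would consult a GeoIP or routing database instead of the string check:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def nearest_edge(client_ip):
    """Pick an edge host for this client (placeholder logic for illustration)."""
    return ("edge-eu.example-cdn.net" if client_ip.startswith("198.51.")
            else "edge-us.example-cdn.net")

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The origin sees the client's real IP, which DNS-based steering never does.
        edge = nearest_edge(self.client_address[0])
        self.send_response(302)
        self.send_header("Location", f"http://{edge}{self.path}")
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), RedirectHandler).serve_forever()
```

The cost is one extra round trip to the origin before the transfer starts, which is why it pays off mainly for large objects.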

Re:Is this a problem? (0)

Anonymous Coward | more than 4 years ago | (#32390018)

If you mean waiting until you make a connection, then it's too late. First off, you'll already have made the first request to a non-local server, which becomes a huge bottleneck during surges of new connections.

Then, actually redirecting means you can't just change what '' points to; you'll have to redirect to or whatever localized DNS name you give it, which means your '' SSL cert has to become a more expensive and less desirable * cert, or else all of SSL breaks.

It's not impossible; it's just that doing it via DNS gives a much better end result. I doubt OpenDNS or Google's DNS has a large enough userbase for any CDN to really worry too much about, especially since content will still work for those users.

I also wouldn't be too surprised if Google started selling a geolocation service to big CDNs through their DNS, especially if Google can convince some smaller ISPs to use them instead of running their own. (Don't laugh: I had an ISP tell me to use the public DNS, which I had to set manually since they didn't run DHCPd.)

Re:Is this a problem? (4, Informative)

b4k3d b34nz (900066) | more than 4 years ago | (#32389122)

The whole point of a CDN (the middle initial) is distribution, theoretically to a broad area.

For example, without a CDN, you have 3 servers, all located in San Francisco. The guy who lives in Florida (or Russia, or South America) who requests content from your server will receive it much more slowly than the guy who lives in Vegas.

With a CDN, there will be servers all over the nation (and preferably around the world, if you serve internationally) which will be physically closer to the requestor that can serve with a lower latency. The servers within the CDN farm utilize reverse DNS lookup to balance and serve traffic from the correct place.

Re:Is this a problem? (1, Interesting)

Anonymous Coward | more than 4 years ago | (#32389492)

The servers within the CDN farm utilize reverse DNS lookup to balance and serve traffic from the correct place.

No, they don't. The way this works is that there is a separate domain name for the content which is to be served by the CDN. The response DNS resource record is dynamically chosen to point to the CDN server closest (in routing terms, not geographically) to the source of the request. The reverse lookup (i.e. the canonical name of the IP address) does not play a part in this, only the routing paths to the resolver's IP address.

The request usually comes from a recursive resolver which doesn't run on the user's computer. Most often it is located in one of the end-user ISP networks though. If the user chooses to use a resolver which is located across the world from him, then the CDN will also point his browser to download from a CDN server close to the resolver, not close to the user.

It's not a good idea anyway to use a resolver on the other side of the planet: DNS requests are frequent and invariably must finish before anything useful can be done, so high-latency DNS hurts. For that reason, this problem is not as big as it looks. For example, you can use Google DNS ( and and the resolver will not be far away from you, because those are IP anycast addresses. There are many resolvers across the world which all respond to these same addresses, and BGP (the Border Gateway Protocol) routes your packets to the closest one.

Re:Is this a problem? (0)

Anonymous Coward | more than 4 years ago | (#32389958)

I meant to say reverse IP in my post not DNS. I just didn't go into as much detail as you and I had a typo. Thanks for the correction.

Re:Is this a problem? (4, Interesting)

Professor_UNIX (867045) | more than 4 years ago | (#32389110)

This is exactly the problem. Most people have probably not heard about a little company called Akamai, but chances are if you're downloading content from a large site, you're using Akamai's content delivery network. Go view a trailer on Apple's site for instance and you'll see the host is actually served off (which is Akamai). They use a distributed system of caching mirror servers to serve up content to a server closest to you geographically.

The one reason I use an open DNS server instead of my cable provider's (Cox Cable) servers is that they have an Akamai server for Cox and it was horribly overloaded. I was getting 512 Kbps any time I tried to download something from Apple. I switched my DNS to a combination of Level3's and Cisco's open DNS servers, started hitting another Akamai server outside Cox, and started getting 15 Mbps. It was night and day: going from barely being able to watch a standard-definition movie trailer on Apple's site while it buffered, played, buffered, played, buffered, etc., to watching a 1080p HDTV stream with the buffer way ahead of my realtime viewing.

Re:Is this a problem? (0)

Anonymous Coward | more than 4 years ago | (#32389772)

I currently block * and * because it seems that all the images from there are just advertisements. I wonder what, if anything, I'm missing.

Re:Is this a problem? (2, Insightful)

pjt33 (739471) | more than 4 years ago | (#32389174)

Ok, saving network capacity I can buy as a benefit. I'm not sure that latency - the focus of TFS - is a real issue when downloading software updates, though.

Re:Is this a problem? (4, Interesting)

michael_cain (66650) | more than 4 years ago | (#32389088)

Seven or so years ago, before I retired from one of the large cable companies, CDNs were hosting the relatively static parts for a surprisingly large number of broadly popular sites. I had an opportunity to see the list when we were approached by the then-largest CDN, who wanted to place servers in many of our head-end locations for the obvious performance benefit. I was the one who pointed out that all of our internal DNS requests were routed to one of two data centers, one on the East Coast and one on the West, creating exactly the situation described in the OP: the CDN would have no idea where the original request came from, so would be unable to direct the end user to the appropriate server.

I was one of the few engineers who argued for less centralization in our network. I wanted broader distribution for reliability purposes: at that time, the massive centralized mail servers had a tendency to fail at the drop of a hat. But it would also have given us the ability to work with companies like the CDNs in order to provide better service.

Re:Is this a problem? (1)

hedwards (940851) | more than 4 years ago | (#32389712)

It's a problem of implementation. Few sites disclose that they want to do it, and I had no idea until I installed NoScript and had to enable a huge number of domains just to view what should be relatively simple sites. On top of which, each one can have vulnerabilities. I'm not suggesting that you were wrong; there are definite reasons why centralization is asking for trouble. But by the same token, I don't think the companies engaging in that process are as transparent, honest, and responsible about it as they need to be.

Re:Is this a problem? (1)

UnderCoverPenguin (1001627) | more than 4 years ago | (#32390120)

Supposedly, my ISP of 2 years ago consolidated its servers into 2 data centers about 3 years ago. About 2.5 years ago, their DNS servers in one center went offline and the servers in the other center were overloaded, so they went down too. Fortunately, my local head end still had a machine they could use for DNS, and did so (allowing only local subscribers, of course). By the next major DNS outage they had, the local head end no longer had that resource, and their routers were forcing all DNS requests to their servers, so we suffered the outage along with the rest of their subscribers. I was lucky in that I was able to get the IP address of my (then current) client's VPN server, so I was still able to work (but no Slashdot that day).

Google Public DNS (3, Informative)

The MAZZTer (911996) | more than 4 years ago | (#32388928)

Automatically routes your DNS request to a Google server close to you. So there's no problem here.

Re:Google Public DNS (-1, Redundant)

Anonymous Coward | more than 4 years ago | (#32389098)

Come on, was it so hard to use a subject of "Google DNS uses closest server"? Repeat after me: the subject line isn't merely the beginning of the body.

Re:Google Public DNS (1, Informative)

Anonymous Coward | more than 4 years ago | (#32389148)

Right. However, the problem arises with non-Google-owned services, like Akamai CDNs.

Re:Google Public DNS (2, Informative)

arkhan_jg (618674) | more than 4 years ago | (#32389326)

But if you look at TFA, that doesn't actually work in practise. Looking at, for example, the Swedish EC2 host pinging:
using local DNS gives a ping of 36.3 ms, OpenDNS 40 ms, and Google DNS 189 ms!
The local-DNS-resolved IP pings at 13.2 ms, OpenDNS at 51.7 ms, and Google DNS at 36 ms.

In both cases, using local DNS gives a substantially faster responding server with both CDN networks tested, presumably one that is physically closer to the testing machine. Using google DNS and open DNS both result in getting less optimal servers for the actual content; so any saving in DNS resolution itself is lost due to the CDN giving you the actual website content from a sub-optimal location; especially if you're pulling down lots of different bits of content.

It's an interesting enough result that I'm going to reinvestigate using my ISP DNS for my dnsmasq local cache server (or at least one hosted in my own country), and compare total page rendering time for the sites I visit often, rather than just DNS resolution times, given how many large sites use akamai and the like these days.
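One stdlib-only way to run that whole-page comparison is to pin the hostname to a specific address (the one each DNS service actually returned) and time the full transfer. The addresses and URL below are placeholders; substitute the real A records for the site under test.

```python
import socket
import time
import urllib.request
from contextlib import contextmanager

@contextmanager
def pinned_resolution(ip):
    """Temporarily force every hostname to resolve to `ip`, so the same page
    can be timed against the server each DNS service handed out."""
    real = socket.getaddrinfo
    socket.getaddrinfo = lambda host, *args, **kwargs: real(ip, *args, **kwargs)
    try:
        yield
    finally:
        socket.getaddrinfo = real  # always restore the real resolver

def fetch_seconds(url):
    """Seconds to download the full body of `url`."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=15) as resp:
        resp.read()
    return time.monotonic() - start

if __name__ == "__main__":
    candidates = [("ISP-resolved", "192.0.2.1"), ("GoogleDNS-resolved", "192.0.2.2")]
    for label, ip in candidates:
        with pinned_resolution(ip):
            print(label, fetch_seconds("http://www.example.com/"))
```

This captures the effect the article is about: the resolver that answers fastest is not necessarily the one whose answer downloads fastest.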

and? (0)

Anonymous Coward | more than 4 years ago | (#32388944)

Great. But what the fuck is a CDN (apart from a sometime canadian currency designator) and why should I care?

Re:and? (0)

Anonymous Coward | more than 4 years ago | (#32388980)

off you go, digg spawn!

We've discussed this before (1, Insightful)

Anonymous Coward | more than 4 years ago | (#32388972)

Previous Discussion []

DNS is not and should not be a good indicator of client location. The proper solution for routing to a closer server is IP anycast.

Re:We've discussed this before (1)

mother_reincarnated (1099781) | more than 4 years ago | (#32389658)

Yeah, as long as your entire transaction consists of a single packet being sent to the server. It's not reliable after that.

no big deal, really (1)

youn (1516637) | more than 4 years ago | (#32388994)

most people don't actually care about DNS... they use the DHCP-provided DNS server from their ISP and don't even know how to fiddle with it... heck, a lot don't even know what DNS is and will say, "DNS yourself, stop cursing :)"

let's assume for a minute that ads are less relevant... not really a big deal... because those are more likely tech-savvy people (or friends of tech-savvy people) who are more likely to install extensions such as Adblock and get rid of ads altogether.

plus there is the obvious for advertisers... if it is not really reliable, well don't use it, find other ways to geolocate your guy :)

Re:no big deal, really (2, Informative)

bjourne (1034822) | more than 4 years ago | (#32389066)

CDNs (Content Delivery Networks, for those who don't know; I can't believe these things have to be spelled out on /. Quality really has gone downhill) are not only used for advertising. They are used to serve all kinds of resource-intensive content. Such as porn. And streaming video. Let's say you're an Aussie with your website hosted on the Australian mainland. Then you'll pay a premium for international bandwidth because Australia has crappy connections to the rest of the world. So you want international visitors to get their images and other static content you don't change regularly from a CDN, to reduce your bandwidth costs.

What? (1)

Seth Kriticos (1227934) | more than 4 years ago | (#32388996)

I don't really know what benefits CDN could give me.

Anyway, I solved the sluggish ISP DNS problem by simply installing bind9 and being done with it. Setting up a DNS server on a modern system is really child's play; no need for the OpenDNS stuff.

(install bind9; remove DNS IP. Done - around 1 minute)

Re:What? (1)

Shakrai (717556) | more than 4 years ago | (#32389254)

Anyway, I solved the sluggish ISP DNS problem with simply installing bind9 and be done with it

Having my own DNS server is one of the reasons why I haven't abandoned my full fledged Linux box for a DD-WRT flashed router or similar solution. Running BIND locally gives me a local DNS cache, remains compatible with CDNs (since the requests come from my own IP) and avoids any DNS tracking on the part of my ISP. Since I have a fixed IP address I've also made my server available to friends who share my ISP -- no reason they should have to use its crappy DNS servers when mine is a hop or two away, is there?

Re:What? (1)

value_added (719364) | more than 4 years ago | (#32389894)

I solved the sluggish ISP DNS problem with simply installing bind9

So instead of one problem, you now have two? ;-)

Most of the complaints I read on Slashdot invariably seem to be related to a loss of control. Seems to me that if you object to how others do things, taking charge and doing it yourself when possible is the only logical solution. For the technically inclined, that typically amounts to a few extra bucks per month along with, as you pointed out, some minimal work.

Re:What? (1)

John Hasler (414242) | more than 4 years ago | (#32390126)

> ...that typically amounts to a few extra bucks per month...

For what?

Slashdot uses Akamai (2, Informative)

sajalkayan (1213718) | more than 4 years ago | (#32389016)

Slashdot is serving static assets from a hostname that is served via the Akamai CDN. I count 19 requests to it [] on a single pageload of the homepage. These static files are currently served by a server within my ISP's network rather than one on the other side of the globe... Akamai uses DNS routing.

Re:Slashdot uses Akamai (0)

Anonymous Coward | more than 4 years ago | (#32389666)

thanks captain obvious. to the obviousmobile!

Re:Slashdot uses Akamai (0)

Anonymous Coward | more than 4 years ago | (#32389948)

You would think that this would make the pages load more quickly. Maybe it even does most of the time. But every time I notice Slashdot taking a long time to load, it is always one of those links that it is "stuck" or sluggish on (as you see them in the status bar). I wonder why that is? You'd think these large CDNs would be scaled to handle the load.

Most CDNs don't do this.. (3, Insightful)

poptix_work (79063) | more than 4 years ago | (#32389034)

While some shoddy CDN companies may reroute you at the DNS level, many are actually smarter about it. Smart systems will redirect you to a 'closer' system via a different URL for media files, or utilize anycast BGP routing so that you always take the shortest path to one of their nodes.

As for 'who serves stuff on CDNs that I want to see anyway' -- everyone. From porn sites to Google to Youtube, they're all one type or another of CDN.

Re:Most CDNs don't do this.. (2, Informative)

mother_reincarnated (1099781) | more than 4 years ago | (#32389746)

Ok so by "shoddy CDN companies" you mean every CDN anyone here has ever heard of? And the vast majority of enterprises that have hot/hot (public) datacenters?

Using anycast for serving content is a guarantee of fail. Great for DNS, less than ideal for HTTP. How serious a failure depends on how important a reliable and consistent end-user experience is. Using geolocation based on the actual source address for content within the pages is a very intelligent thing to do in addition to doing it at the LDNS level initially.

On the innertubes anycast is good for things for which UDP is appropriate (even if they use other transports), and it can be acceptable for HA between a hot and a warm datacenter, but it's just not robust enough for a "CDN".

Questionable testing method (1)

Florian Weimer (88405) | more than 4 years ago | (#32389048)

Two things make those numbers fairly irrelevant: CDNs are optimized for delivering content to end users, not datacenters (where most machines are non-Windows anyway, so you don't even need AV updates). And what matters in the end aren't ping times, but actual request latency.

Uptime (2, Insightful)

h4rr4r (612664) | more than 4 years ago | (#32389114)

Considering TWC can't keep their DNS servers up reliably, using them is not even an option.

Re:Uptime (1)

Shakrai (717556) | more than 4 years ago | (#32389272)

That's another reason not to use them. When I experimented with DD-WRT to save electricity (my other router is a full fledged Linux box that uses ~50 watts) I had it set to use TWC's DNS servers. Had forgotten how often they went down until I found myself using them again.

Pathetic. How hard is it to keep BIND running?

All the more reason to block. (1)

Jane Q. Public (1010737) | more than 4 years ago | (#32389124)

Use NoScript and / or RequestPolicy, which let you allow the CDNs you want, block those you don't. And have the additional side benefit of blocking tracking cookies and other such nastiness from companies you don't like (DoubleClick, Google Analytics, etc.)

This is not accurate (5, Informative)

davidu (18) | more than 4 years ago | (#32389146)

I'm the founder of OpenDNS (and long-time slashdot reader).

This article is not very accurate for a number of reasons. First, both my service (OpenDNS) and Google's are co-located in POPs similar to all of the major CDNs', which largely avoids this problem. The author of the blog post used a tiny sample size and tested mainly from EC2 instances, neither of which helps his cause.

1) EC2 instances are BY DESIGN not co-located in the same place as major peering infrastructure because that real estate costs more. They are one or two hops away. People use EC2 for compute power, not for routing performance. So he needs to use something like Keynote or Gomez to test from home connections. If he had, he'd see it doesn't impact anything, and often improves performance, especially in the US. We don't have POPs in Asia yet, though they are coming this year, and when we do, we'll improve things for him.

2) Akamai is the only CDN where this will ever be perceptible because their deployments are so dense. They have 3000+ pops which means they will also be able to target more precisely. But this is being worked on RIGHT NOW in the IETF -- []

Anyways, this is really not the issue the author makes it out to be, and for the edge cases, they are being worked on.


Re:This is not accurate (2, Interesting)

funfail (970288) | more than 4 years ago | (#32389336)

Hi David. Isn't it possible for you to just cooperate with Akamai and resolve according to the client location based on IP address?

Re:This is not accurate (1)

cynyr (703126) | more than 4 years ago | (#32389508)

Not to flame you, but why cooperate with online advertisers? I really wish that I could block ads pre-render instead of post-render in Chrome; it would make webpages nice and snappy and save me some bandwidth.

Re:This is not accurate (1)

GIL_Dude (850471) | more than 4 years ago | (#32390000)

Pre-render blocking does indeed save you some bandwidth and can make pages render more quickly. That's the main reason I use FF. However, some folks may WANT to do post-render. The reason (a bit on the shady side, though) is that downloading the ad and just not seeing it will:

1) Give the site you are visiting revenue
2) Cost the advertiser in bandwidth even though you don't have to see the ad.

Some folks have gone to Chrome preferentially so that they can block post render and allow their favorite sites to get revenue without having the bother of the ads blemishing the page.

Re:This is not accurate (4, Informative)

davidu (18) | more than 4 years ago | (#32389528)

Yep! This is the exact goal of the IETF draft I linked. Unfortunately the old guard of DNS (Vixie, et al.) are not supporting it because they fear it raises insurmountable privacy concerns. Most disagree, since the ultimate authority will see the client's IP eventually, but that's the current hold-up. Not sure if it can be resolved to everyone's satisfaction. :-(

Re:This is not accurate (0)

Anonymous Coward | more than 4 years ago | (#32389360)

"and long-time slashdot reader" says the guy with a 2 digits id :)

seriously thanks for the insight, that's the kind of post that made me hang on to /. for a long time too (even as an AC)

Re:This is not accurate (0)

Anonymous Coward | more than 4 years ago | (#32389396)

I'm the founder of OpenDNS (and long-time slashdot reader).

Wow, a two-digit Slashdot id? And a low one, even? Yeah, you certainly aren't joking about being a long-time reader!

Re:This is not accurate (0)

Anonymous Coward | more than 4 years ago | (#32389482)

"(and long-time slashdot reader)."

18? long time is a bit of an understatement, you're practically a slashdot founder with that id. I've never seen one that low.

Re:This is not accurate (2, Interesting)

Anonymous Coward | more than 4 years ago | (#32389530)

Awesome! Thank you for your reply. BUT wouldn't giving the client IP away in the DNS request reduce privacy?

Re:This is not accurate (2, Interesting)

davidu (18) | more than 4 years ago | (#32389566)

That's the argument opponents make. I don't buy it for a variety of reasons. Hard to write it on my iPhone but will blog about it soon.

Re:This is not accurate (1)

mother_reincarnated (1099781) | more than 4 years ago | (#32389936)

Because the organization that runs the authoritative DNS isn't going to see your source IP in a fraction of a second when you make the connection to their (in this case) web server?

Re:This is not accurate (5, Informative)

arkhan_jg (618674) | more than 4 years ago | (#32389726)

You know, I thought I'd actually try it out for myself with a rough and ready test. I have an ISP that gives me multiple real IP addresses, so I stuck my PC on the DMZ with a real IP and tested each of the DNS servers as the sole DNS server in Windows, without using either my local dnsmasq cache or the one on my router. Obviously, I flushed Windows' own DNS cache between each ping test. The results are below; make of them what you will.

I also tested all DNS providers with both primary and secondary servers; since the secondary servers always gave me the same IP address as the primary, they're not included. Ping times are a simple 0DP average of two sets of 10 pings (and there were no odd spikes, with my connection otherwise idle).

First though, the response times of the DNS servers themselves, average uncached - tested using GRC's DNSBench.
aaisp is my own ISP, BT is a large ISP in my country, is one which I'm using at the moment, having previously tested it as fastest.

google ( 156 ms
opendns ( 176 ms
aaisp ( 115 ms
BT ( 71 ms
level 3 ( 95 ms

Then, testing which CDN server each DNS server sends me to, and the ping times of those servers - I used the same CDN DNS names as the article;

First, (internap):

google resolves as, ping 167 ms
opendns resolves as, ping 15 ms (!)
aaisp resolves as, ping 82 ms
BT resolves as, ping 81 ms
level 3 resolves as, ping 81 ms

Then (akamai):

google resolves as, ping 22 ms
opendns resolves as, ping 15 ms
aaisp resolves as, ping 13 ms
BT resolves as, ping 14 ms
level 3 resolves as, ping 15 ms

However you slice it, google's public DNS is a bad choice for me. Longer to resolve addresses, and it sends me to non-optimal CDN servers. OpenDNS is a mixed bag; slower resolution than the rest, but sends me to easily the most optimal server (shame about the redirected NXDOMAIN problem). Yet BT are the fastest DNS resolver of all, and still return decent results. Go figure; I thought they'd be overloaded and well, crap.

I'm definitely going to have to do further testing for my own personal use, using whole-page rendering on my favourite sites to see what is actually the best option for me personally, as DNS resolver speed clearly isn't the whole story in this CDN world.

OpenNIC published server locations... (3, Informative)

pongo000 (97357) | more than 4 years ago | (#32389194)

...so those in the know can select the nameserver(s) closest to them [] without having to depend upon a third party to determine (sometimes erroneously) what servers are closest.

It's a race... (1)

pongo000 (97357) | more than 4 years ago | (#32389242)

...to see who has the balls to announce to the /. world that they don't know what CDN stands for!

I win! []

Re:It's a race... (1)

0racle (667029) | more than 4 years ago | (#32389448)

Canadians slow everything down.

Teapots and tempests (0)

Anonymous Coward | more than 4 years ago | (#32389354)

At the risk of being flamed by CDN advocates (if they exist): just who gives a rat's patoot anyhow?

First off, the test and what it means. I like stuff like this because it is a detailed analysis and shows something about what is going on at a very basic level. The primary question, though, is "how fast am I getting my content?", not "how fast is DNS responding?" or "am I going to the server they want?". I know from experience that DNS ping time is not proportional to web page load time. In my experience, specific circumstances excepted, the CDN effect on the user experience is almost always less than you would hope for.

Second, the onus is on the content deliverer to ensure that the user experience is acceptable no matter what. CDNs don't enhance DNS: they violate conventions and rules. When setting up a CDN this has to be kept in mind; if you ignore it then you will lose traffic. If the user goes to a valid DNS that is abiding by the RFCs and conventions and doesn't get the content then it is the fault of the CDN designer.

Third, the reason for CDNs is not always primarily to speed up the user experience, even though that may be lauded as the number 1 goal. CDNs are used to cut costs. They shape traffic so that peak load handling is lower at the network and server level. This is big $$$. For a user in Orillia, ON it doesn't really make a difference if the server is in New York or Vancouver or San Diego, but it probably matters to the server side where he/she goes. The wider afield you go, the more performance is a potential issue, but again, whether the server my browser connects to is in London or Tokyo, my experience is likely to be similar.

This is or should be a non-issue from a user perspective.

CDN designers and web cache implementers do have to be aware.

already fixed (1)

uolamer (957159) | more than 4 years ago | (#32389372)

"However, as more websites, particularly smaller ones, use content distribution networks via embedded ads, widgets, and other assets..."

Like many people reading this site, I block most of the crap mentioned here at a level where the DNS is never resolved.

Latency (1)

Peach Rings (1782482) | more than 4 years ago | (#32389388)

You want lower latency, not higher latency. Thanks soulskill.

Re:Latency (1)

grahammm (9083) | more than 4 years ago | (#32389526)

For serving bulk (i.e. large) or streaming content, which is what CDNs are used for, latency does not really matter. As long as the TCP window is large enough and there are not too many dropped packets (resulting in re-transmission), a high-latency link can deliver high throughput. For streaming you also want low jitter (i.e. a reasonably constant RTT), but this is not related to latency. It is only for interactive connections, such as VoIP, ssh, and gaming, for which CDNs are not used, that you need low latency.
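
The window-vs-latency trade-off described here is the bandwidth-delay product; a quick back-of-the-envelope sketch (the window sizes and RTTs are illustrative numbers, not measurements):

```python
def max_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """TCP's ceiling for a single flow: at most one full window
    of unacknowledged data in flight per round trip."""
    return window_bytes * 8 / (rtt_ms / 1000.0) / 1e6

# A classic 64 KiB window caps a 200 ms intercontinental path at
# about 2.6 Mbit/s, no matter how fat the link is...
print(round(max_throughput_mbps(64 * 1024, 200), 2))    # 2.62
# ...while window scaling up to 1 MiB lifts the same path to ~42 Mbit/s,
# which is why bulk transfer tolerates latency given a big enough window.
print(round(max_throughput_mbps(1024 * 1024, 200), 2))  # 41.94
```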

Re:Latency (1)

socsoc (1116769) | more than 4 years ago | (#32389734)

Thus the point being that the two combined are not as beneficial as one would hope. You misread the headline.


going wrong (1)

DaveGod (703167) | more than 4 years ago | (#32389680)

I'm using OpenDNS, and since yesterday Google keeps offering to translate everything into Dutch (I'm in the UK).

This is old news! Here's the solution (0)

Anonymous Coward | more than 4 years ago | (#32389896)

"...used to bypass the sluggishness..."? (1)

oDDmON oUT (231200) | more than 4 years ago | (#32389946)

I use Google DNS to bypass the interstitial ad results page my ISP pops up with any "incompletely typed" (i.e. I didn't type .com/.net/etc.) or mistyped URL.

Since I rarely if ever click on widgets, ads or other assets, I doubt that any lag time in response would make a material difference to me (nor, I suspect, would it to many others).

If they're upset by this, the resolution's easy! (0)

Anonymous Coward | more than 4 years ago | (#32390046)

They should run their own open DNS resolvers.

They've got the motivation and the infrastructure.

Geographically nearby is BS outside America (1)

DNS-and-BIND (461968) | more than 4 years ago | (#32390082)

I love how "geographically aware" applications will happily direct me to Japan or Taiwan when the link from America is far faster. Why the hell should something route me to Japan when I start from Thailand? Or route to Taiwan from China? WTF? I suppose in some people's tiny minds, this makes sense, but in reality the USA link is usually much faster.