
Google and OpenDNS Work On Global Internet Speedup

samzenpus posted more than 2 years ago | from the greased-lightning dept.

The Internet 151

Many users have written in with news of Google and OpenDNS working together on The Global Internet Speedup Initiative. They've reworked their DNS servers so that they forward the first three octets of your IP address to the target web service. The service then uses your geolocation data to make sure that the resource you’ve requested is delivered by a local cache. From the article: "In the case of Google and other big CDNs, there can be dozens of these local caches all around the world, and using a local cache can improve latency and throughput by a huge margin. If you have a 10 or 20Mbps connection, and yet a download is crawling along at just a few hundred kilobytes, this is generally because you are downloading from an international source (downloading software or drivers from a Taiwanese site is a good example). Using a local cache reduces the strain on international connections, but it also makes better use of national networks which are both lower-latency and higher-capacity."
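
For readers curious what this looks like on the wire: the mechanism being described (the extension now known as EDNS Client Subnet) has the resolver attach a truncated client prefix, such as a /24, to the query it forwards toward the authoritative server, which can then answer with a nearby cache. Below is a minimal sketch of such a query using the third-party dnspython library; the prefix, domain, and choice of 8.8.8.8 are placeholders, and whether any given resolver honors a client-supplied prefix varies.

    # Send a DNS query carrying a truncated client prefix (EDNS Client
    # Subnet) so the authoritative side can pick a nearby cache.
    # Requires the third-party dnspython package.
    import dns.edns
    import dns.message
    import dns.query

    # Only the first three octets (a /24) of the client address are included.
    ecs = dns.edns.ECSOption("203.0.113.0", srclen=24)

    query = dns.message.make_query("www.example.com", "A",
                                   use_edns=0, options=[ecs])

    # 8.8.8.8 is Google Public DNS; any ECS-aware resolver would do.
    response = dns.query.udp(query, "8.8.8.8", timeout=5)
    for rrset in response.answer:
        print(rrset)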

151 comments


only a few 100 KBs, WTF?!? (0)

Anonymous Coward | more than 2 years ago | (#37255292)

I never get my downloads that slow when downloading from Taiwan...

Oh ya, that's right, I am in Taiwan...

Akamai (1)

Anonymous Coward | more than 2 years ago | (#37255296)

Akamai has been doing this the proper way for years now.
I'd prefer this be implemented by the content provider, not by the ISP.

Re:Akamai (1)

PPH (736903) | more than 2 years ago | (#37255446)

If I understand TFS correctly (and sometimes I don't), it is the target web service that is selecting the local cache for you based on your IP address. So that shouldn't be a problem. Now, if OpenDNS selects it, there could be a trust problem (someone could hijack OpenDNS or they could turn evil). But that's not how I read the description.

What I'm not understanding is the speedup part. When I open an HTTP connection to some web service, they should already have 'my' IP address (issues of NAT, tunneling, TOR exits, etc. notwithstanding). So, why the extra step?

Network-topology-aware round-robin DNS (2)

tepples (727027) | more than 2 years ago | (#37255514)

When I open an HTTP connection to some web service, they should already have 'my' IP address

By the time you make an HTTP connection, you've already chosen which mirror of the web service to use. According to the article, this spec would allow DNS servers, such as an ISP's DNS server resolving on behalf of the ISP's customers, to use the prefix of each user to determine which mirror to recommend. It's like a network-topology-aware version of round-robin DNS.
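
For illustration, the server-side decision described above can be thought of as: given the /24 relayed by the resolver, pick the mirror that looks topologically closest. A toy sketch follows; the mirror table and the "longest shared prefix" metric are made up, and real CDNs use geolocation and latency maps instead.

    # Toy "topology-aware" mirror selection keyed on the client /24.
    import ipaddress

    MIRRORS = {
        "us-east": ipaddress.ip_network("198.51.100.0/24"),
        "eu-west": ipaddress.ip_network("192.0.2.0/24"),
    }
    MIRROR_ADDRESSES = {"us-east": "198.51.100.10", "eu-west": "192.0.2.10"}

    def pick_mirror(client_prefix: str) -> str:
        """Crude heuristic: the mirror sharing the longest address prefix wins."""
        client = ipaddress.ip_network(client_prefix)

        def shared_bits(net):
            a = int(client.network_address)
            b = int(net.network_address)
            return 32 - (a ^ b).bit_length()

        best = max(MIRRORS, key=lambda name: shared_bits(MIRRORS[name]))
        return MIRROR_ADDRESSES[best]

    print(pick_mirror("198.51.100.0/24"))   # -> 198.51.100.10 (us-east)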

Re:Network-topology-aware round-robin DNS (1)

petteyg359 (1847514) | more than 2 years ago | (#37255790)

By the time you make an HTTP connection, you've already chosen which mirror of the web service to use.

Wrong. There are many requests and responses made in any transaction, and HTTP has support for this strange thing called "redirect". Even if your first connection is to a server in Djibouti, you may be redirected to a server in Canada, and then that one may again redirect you to a server in Sweden, where you'll finally be given the resource you requested.
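
For the curious, here is a minimal sketch of following such a redirect chain by hand, so each hop is visible. The URL is a placeholder, and a fuller version would also handle HTTPS and relative Location headers.

    # Follow an HTTP redirect chain manually, printing each hop.
    import http.client
    from urllib.parse import urlsplit

    def trace(url: str, max_hops: int = 5) -> None:
        for _ in range(max_hops):
            parts = urlsplit(url)
            conn = http.client.HTTPConnection(parts.netloc, timeout=10)
            conn.request("GET", parts.path or "/")
            resp = conn.getresponse()
            print(resp.status, url)
            if resp.status not in (301, 302, 303, 307, 308):
                return
            url = resp.getheader("Location")
            conn.close()

    trace("http://example.com/some/resource")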

Re:Network-topology-aware round-robin DNS (1)

NatasRevol (731260) | more than 2 years ago | (#37255954)

Which means EVERY HTTP server has to be set up like that.

Here, Google DNS and OpenDNS are set up like that, and they serve millions of users. Which is slightly more effective AND efficient.

Re:Network-topology-aware round-robin DNS (3, Funny)

Score Whore (32328) | more than 2 years ago | (#37256408)

You aren't understanding what is going on here. All Google and OpenDNS are doing is providing the authoritative DNS server with the IP address of the client. Google/OpenDNS know nothing about any possible caching or local servers. They are just making it possible for the final DNS server to possibly, assuming that whoever owns the domain you are resolving supports some kind of CDN, send you to a nearby server.

What likely really happened here is this:

Akamai: Hey Google! Your compulsion to violate everyone's privacy is fucking up the Internet by breaking all of the CDN/geo-based services.
Google: But... but... we must know everything about everyone otherwise we'll not be able to sell them to our customers.
Akamai: Whatever dude, you're causing 50% congestion on the backbone links because your shit hides the actual address of the client.
Google: Well, we've got sackfuls of PhDs. We'll find a solution that allows us to keep spying and selling and still allows the CDNs to work.
Akamai: Look you jackasses, the system that exists today works. Your crap is just causing problems and labor for everyone else.
Google: Na-na-na-na-na-na-na-na I can't hear you.
Akamai: Seriously. You're being a dick.
Google: No. We've just invented this awesome thing that is going to Speed-Up! The InterNets!
Akamai: Jesus Christ! The system that was there before you broke it works. Why the fuck do you have to keep breaking shit?
Google: Because we can and people are stupid. And we are rich. So STFU.
Akamai: Fuck. There's no reasoning with these clowns.

Re:Network-topology-aware round-robin DNS (2)

afidel (530433) | more than 2 years ago | (#37258058)

I hope this is made into an RFC and MS adopts it. We run our own primary DNS servers because AT&T has been WAY too slow to respond to security issues with their DNS resolvers and AFAIK they still don't properly handle DNSSEC requests. We tried using Google as forwarders, but at least at the time they were much slower than running our own primaries (especially once the cache warmed). The one drawback has been the fact that we don't always get directed to the most local server by content delivery networks (YouTube is especially bad; we often can't watch an HD stream despite sitting on a mostly unused DS3). The fact that CDNs don't work correctly unless you use your ISP's DNS resolvers is not a problem of Google's making, but rather of the way CDNs have chosen to design their solutions.

Re:Network-topology-aware round-robin DNS (2)

Runaway1956 (1322357) | more than 2 years ago | (#37256508)

I refuse most redirects. Unless and until I examine them, redirects are pretty much dead-ends. I've not yet found such an addon for Chromium, but Firefox has had it for a long time now. Call me paranoid, but I really don't like content being downloaded to my machine from places that I've never heard of. I want all my content to come from the server on which I landed when I clicked the link. Cross site scripting is also blocked on my machine. FFS, do you have any idea what those two vectors are capable of causing? People with a clue should block all that nonsense, as part of their layered defense against malware and other exploits.

To save two TCP setups and teardowns (4, Informative)

tepples (727027) | more than 2 years ago | (#37256522)

Even if your first connection is to a server in Djibouti, you may be redirected

Which costs a TCP setup and teardown to Djibouti.

to a server in Canada, and then that one may again redirect you to a server in Sweden

Which costs a TCP setup and teardown to Canada.
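
Back-of-the-envelope, each of those detours costs at least a TCP handshake plus the request/response that only yields the redirect, i.e. roughly two round trips to a far-away server before the client even learns where to go next. The RTT figures below are illustrative, not measurements.

    # Rough latency wasted on redirects before the real download starts.
    RTT_MS = {"djibouti": 350, "canada": 90}

    def redirect_cost_ms(hop: str) -> int:
        # 1 RTT for the TCP handshake + 1 RTT for the request that only
        # returns a redirect (TLS would add more round trips).
        return 2 * RTT_MS[hop]

    wasted = redirect_cost_ms("djibouti") + redirect_cost_ms("canada")
    print(f"latency spent just finding the right server: {wasted} ms")  # 880 ms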

Re:Network-topology-aware round-robin DNS (2)

DavidTC (10147) | more than 2 years ago | (#37256140)

No, this isn't for ISP DNS, which baffled me at first also. ISP DNS servers almost always use the same 'outgoing path' as your traffic, and hence get handed the same mirror that you yourself would have been handed.

In fact, this often gives better results than 'correct' geolocation, as DNS servers are usually closer to the network boundary. If the geolocation shows I'm in Tennessee, they might direct me to a mirror in Nashville...but what if my ISP actually connects to the internet via Atlanta, and now instead I'm going from Tennessee to Atlanta and back to Nashville? However, the ISP's DNS servers are likely to be in Atlanta, as they are usually right at the internet connection, so if they ask for the DNS, they get the Atlanta mirror.

That's not what this article is talking about. This article is talking about the opposite problem, for things like OpenDNS or Google DNS, where the DNS server will be somewhere else entirely from the user. Google has DNS servers...and they're in one location, not at your ISP. Everyone who currently uses them gets sent to the closest mirror to that DNS server, which is clearly stupid.

That said, as almost everyone uses their ISP's DNS, calling this a 'global internet speedup' is pretty silly. It's a speedup for the few nerds who use OpenDNS or Google DNS.

Re:Network-topology-aware round-robin DNS (1)

MikeURL (890801) | more than 2 years ago | (#37257124)

Well that clears things up. I was wondering how the heck an external service could be more location aware than my ISP who knows precisely where I am.

In terms of the whole internet it might speed things up a bit, even if it is only a small number of people using Google's and OpenDNS's resolvers.

Re:Network-topology-aware round-robin DNS (1)

teftin (948153) | more than 2 years ago | (#37257198)

ISPs quite often give their resolvers' IP addresses to CDN providers for better balancing.

P2P (1)

anonymousNR (1254032) | more than 2 years ago | (#37255302)

How is this different from P2P ?

Re:P2P (0)

Anonymous Coward | more than 2 years ago | (#37255414)

Well, P2P is a network of clients that act on equal footing (hence, "peer"), that transfer data amongst each other (hence the "2"). This is a bunch of servers, clients, and intermediating DNS servers routing clients to more local hosts. Basically everything is different from P2P.

Re:P2P (2)

alfredos (1694270) | more than 2 years ago | (#37255468)

Very. What this is about is real-time, latency-optimized content delivery. P2P, using residential low-speed links and desktop computers, is simply not suited for the task.

What it is not is new. Distributed content caches were all the rage at the end of the last millennium - everybody remembers Squid, I guess? - as were DNS geo load balancing (including fancy boxes with large price tags) and all that stuff. Ever-fatter pipes have always reduced the need for this sort of solution, and my guess is that this will continue to be the case.

Multicast is another example of a technology that was created to improve content delivery, basically for audio and video. Almost nobody uses it. Instead, CDNs distribute unicast feeds globally. It was also a good idea, but it required a ton of resources and different thinking from traditional routing.

Re:P2P (2)

Aqualung812 (959532) | more than 2 years ago | (#37255746)

Ever-fatter pipes have always reduced the need for this sort of solution, and my guess is that this will continue to be the case.

That would be correct if ISPs were upgrading their big lines at the same rate they're upgrading their customer facing lines.

Unfortunately, they're letting their peering lines get oversubscribed, but selling space to CDNs.

Re:P2P (1)

EdZ (755139) | more than 2 years ago | (#37256038)

Just like the flip-flopping between THIN CLIENTS ARE THE SOLUTION! and ALL POWER TO THE WORKSTATIONS! every few years (at the moment we're in a thin-client + remote-server phase, 'the cloud'), there's a general flip-flopping between localised caching whenever the content served grows larger than the available bandwidth (the current case with streaming video) and re-centralised content servers once more bandwidth comes online.

Such is the circle of life!

Re:P2P (1)

m50d (797211) | more than 2 years ago | (#37256114)

IIRC the main problem with multicast was that it required all the routers in between to support it, and why would you upgrade something that was already working? Maybe with IPv6 we'll get working multicast, since that requires replacing/upgrading all your routers anyway.

Re:P2P (2)

Aqualung812 (959532) | more than 2 years ago | (#37256542)

No, the problem with multicast is that the big providers wanted more money based on bandwidth & multicast would severely cut bandwidth need.

As you can see with AT&T and Comcast, they already use multicast internally, as do almost all video over IP providers. They just don't want to share it.

Re:P2P (2)

nschubach (922175) | more than 2 years ago | (#37257084)

I always thought multicast meant that you'd have to be downloading the same bits as everyone else at the exact same time, so you'd have to sit around and wait for a multicast session to open up before you could start receiving your data. It would be great for broadcast television, where everyone tunes in to a specific channel to get what they want, but it would be fairly useless for someone who connected 5 minutes after the multicast started, unless the stream supported picking up 5 minutes in. You'd still not have the first 5 minutes of the broadcast, though.

It's great for local networks where you need to transfer large amounts of data to large numbers of PCs, but they all have to coordinate to get the entire package.

Re:P2P (1)

alfredos (1694270) | more than 2 years ago | (#37257348)

Your first statement is true. As for IPv6, I don't see IPv6 as making a case for multicast globally, perhaps unfortunately. Even the contrary may be true: Since deploying IPv6 costs large sums of money, multicast will have to wait. Also, multicast and IPv6 have little in common that can create synergies; it's not as if IPv6 brought by design a multicast proposal that was radically more convenient than with IPv4.

Re:P2P (0)

Anonymous Coward | more than 2 years ago | (#37256492)

If you have a 20Mbps connection or better, you'll frequently run into this if you don't live in the same timezone as the server you access. On the west coast, the best you get from NYC or Montreal is about 1Mbit. The reason is latency and the TCP window: a single connection can't move data faster than the window size divided by the round-trip time. California to New York is 75ms at the minimum, so with a default 64KB window you end up with maybe 7Mbps, even if you have 100Mbps FiOS.

You can't do anything about the speed of light, but you can make the channel wider: 1500-byte-MTU Fast Ethernet vs. 9KB-MTU (jumbo frame) GigE.
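
That window/RTT ceiling is easy to sanity-check; here is a quick sketch, assuming the classic 64 KiB default window and the 75 ms coast-to-coast RTT mentioned above.

    # Throughput ceiling of a single TCP connection: window size / RTT.
    window_bytes = 64 * 1024      # default receive window without window scaling
    rtt_seconds = 0.075           # ~California <-> New York round trip

    throughput_bps = window_bytes * 8 / rtt_seconds
    print(f"{throughput_bps / 1e6:.1f} Mbit/s")   # ~7.0 Mbit/s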

P2P mitigates the problem for fixed assets as long as there are peers in your immediate area; otherwise your P2P client will pull the data from any and all available peers, so it's not latency-aware, and this behavior is the entire reason ISPs have to throttle it. If a properly seeded file had 1 peer in every major city, it would still pull from all the neighboring cities' peers. The reason you can max out the bandwidth is because you're connecting to enough peers. If all the peers were on the east coast and you're on the west coast, you need 10 times as many peers.

CDNs work from the other side of the equation: there are machines located in data centers in major transit hubs (San Francisco, New York, Tokyo, London, etc.) or even colocated at the ISPs. So instead of trying to connect to 20 machines 1000 miles away to pull 100Mbps, you only need to connect to 1 machine that is less than 5 miles away. CDNs, however, generally aren't configured to deliver files in pieces unless it's streaming video. So you typically end up stuck having to download the file from the beginning, since you didn't connect to the same CDN server you did last time.

A hybrid approach requires some kind of HTTP-TorrentCDN system where (similar to napster) there are servers that have the most requested content and cycle through backends and files users already have (peers) that are in the same geolocation sector. This solves numerous problems.
- No more MPAA/RIAA/ESA/etc piracy issues, since the CDN servers only mirror content they've been told to mirror, files can be made to disappear by simply having the source host purge the file.
- No more privacy invasion, since you might download a file from CDN-Montreal, but the content's home machine is located in NYC, home ISP will only see the CDN's IP address.
- No more "plz seed" issue since the host and CDN will always have the file available
- No more "speed sux" since any file that is too popular for the CDN to deliver will make use of peers with incomplete parts of the file to fill in the pieces for each other.

The trade-offs are also fairly dire:
- High latency on small files. This works for large files (movies, music, flash cartoons, etc.), but most sites consist of dozens of tiny files that would be delayed significantly. The current practice with CDN use is to direct small images to the same host (so that the connection isn't torn down) and larger files to separate machines.
- MITM attacks
- Advertising revenue based sites are harmed by having peers connect without seeing the ads. And this is why it will never take off.

Little brother (2)

lucm (889690) | more than 2 years ago | (#37255354)

> The service then uses your geolocation data to make sure that the resource you’ve requested is delivered by a local cache

This will make censorship much easier. No more corrupt foreign data in [your favorite oppressive country].

Re:Little brother (1)

Bucky24 (1943328) | more than 2 years ago | (#37255408)

Heh, before I even got to the comments I thought "I know someone is gonna talk about how this makes it easier for privacy violations". Not to say that you're wrong though....

Re:Little brother (1)

vlm (69642) | more than 2 years ago | (#37255868)

> The service then uses your geolocation data to make sure that the resource you’ve requested is delivered by a local cache

This will make censorship much easier. No more corrupt foreign data in [your favorite oppressive country].

If my assumption is correct that they're basically using CNAMEs to generate geo-locatable new host names, you could just distribute the knowledge that in Afghanistan / GB / GER, you simply need to visit 4.4.4.youtube.com to gain "usa" access to YouTube.

Re:Little brother (1)

Catnaps (2044938) | more than 2 years ago | (#37256994)

What? It makes it easier to avoid censorship. Remind yourself of what Google did with China and the .hk redirect.

Re:Little brother (2)

MatthiasF (1853064) | more than 2 years ago | (#37258244)

Makes it almost impossible to remain anonymous by proxy or Tor if you use the service, or if DNS servers start to enforce the behavior Google is suggesting in their IETF brief.

Seems like when you do a DNS lookup, your octets get sent by you or the proxy, and then Google and friends see a request for that domain in Analytics or whatnot from the proxy. It's not going to take much to attach one piece of information to the other, or to pull together patterns from Analytics and the DNS responses for the proxy to guess which requests belong to particular guesstimated individuals. Add on AdSense and you're even more screwed.

Anyway, I'm dropping OpenDNS at home because of this. I don't agree that it's necessary, and it can be more harmful than helpful. I'm not worried about a download completing faster so much as someone having a dossier on me hidden away on some server.

OpenDNS was nice to use as a layer of avoiding malware, but its usefulness disappears when it starts doing stuff I don't appreciate.

Re:Little brother (0)

Anonymous Coward | more than 2 years ago | (#37258456)

Why not just drop analytics and adsense instead? You have a /etc/hosts file, so use it.

Ehh.... this is ok, but .... (2, Interesting)

King_TJ (85913) | more than 2 years ago | (#37255456)

Isn't this little more than an expensive band-aid for the underlying bandwidth problem? Delivering content from strategically located caches is an OLD concept, and it's always been trouble-prone, with some sites not receiving updated content in a timely manner and others getting corrupted.

Quite frankly, I wish some of the big players with vested commercial interests in a good-performing internet (like Google, Amazon, or Microsoft) would pitch in on some investment funding to upgrade the infrastructure itself. I know Google has experimented with it on a small scale, running fiber to the door in a few U.S. cities. But I'm talking about thinking MUCH bigger. Fund a non-profit initiative that installs trans-Atlantic cables and maintains them, perhaps? If a nation wants to censor/control things, perhaps they'd reject such a thing coming to their country, but that's ok.... their loss. Done properly, I can see it guaranteeing a more open and accessible internet for all the participants (since presumably, use of such circuits, funded by a non-profit, would include stipulations that the connections would NOT get shut off or tampered with by government).

Re:Ehh.... this is ok, but .... (0)

Anonymous Coward | more than 2 years ago | (#37255604)

If a nation wants to censor/control things, perhaps they'd reject such a thing coming to their country, but that's ok.... their loss.

You mean like, EVERY FUCKING NATION ON THE PLANET?

Re:Ehh.... this is ok, but .... (1)

MozeeToby (1163751) | more than 2 years ago | (#37255646)

Quite frankly, I wish some of the big players with vested commercial interests in a good-performing internet (like Google, Amazon, or Microsoft) would pitch in on some investment funding to upgrade the infrastructure itself.

You'd have several hundred lawsuits from several dozen companies that have a vested interest in keeping control over the existing infrastructure. You'd have antitrust investigations being called for, contract lawsuits against the cities that promised them monopoly access, and several billion dollars poured into lobbying to make sure that it cannot and will not happen.

And keep in mind, on the low tiers the vast majority of laid fiber is dark, just waiting for someone to actually plug into it and use it. And I'd be shocked if that didn't include the transatlantic lines as well. On the higher tiers, many (most) cities don't have proper conduit installed, which means installing fiber to the front door is going to be an expensive, messy, time consuming process even if they got through the legal and bureaucratic nightmare.

Re:Ehh.... this is ok, but .... (1)

vlm (69642) | more than 2 years ago | (#37255950)

And I'd be shocked if that didn't include the transatlantic lines as well.

They're all lit all the time. Maybe just as second string ("if we lose a strand, your strand becomes their protection circuit"), but they'll be lit. I used to be in the telecom business.

Where you find dark fiber is on shorter local hops where there simply isn't the current demand, at any reasonable cost.

You'll never completely satisfy demand between stateside US-HI. You can supersaturate demand to the point of dark fiber between little-city and cow-village.

Re:Ehh.... this is ok, but .... (2)

afidel (530433) | more than 2 years ago | (#37258116)

I don't think that's true at all, many transoceanic links have dark pairs. I know the Google-cable was going to be only half lit at installation. The existence of dark fiber on transoceanic links is driven by many of the same economics as dark fiber on land, only magnified since the cables are so much more expensive, the installation makes trenching look cheap, and the lead times are measured in quarters instead of weeks.

Re:Ehh.... this is ok, but .... (0)

Dishevel (1105119) | more than 2 years ago | (#37255706)

Getting more bandwidth to more places is great.
Using the bandwidth you have efficiently is not a bad thing.
Not sure how efficient use of resources is a bad thing.

Re:Ehh.... this is ok, but .... (4, Interesting)

slamb (119285) | more than 2 years ago | (#37256028)

Isn't this little more than an expensive band-aid for the underlying bandwidth problem?

Keep in mind that Google, Amazon, Akamai, etc. had already created geographically distributed networks to reduce latency and bandwidth. Improving the accuracy of geolocated DNS responses through a protocol extension is basically free and makes these techniques even more effective.

Also, Google cares a lot about latency. A major component of that is backbone transit latency, and once you have enough bandwidth to avoid excessive queueing delay or packet loss, I can imagine only four ways to significantly reduce it: invent faster-than-light communications, find a material with a lower refractive index than the optical fibers in use today, wait for fewer round trips, or reduce the distance travelled per trip. This helps with the last. Building more fiber wouldn't help with any of those and would also be a lot more expensive.

Full disclosure: I work for Google (but not on this).

Re:Ehh.... this is ok, but .... (0)

Anonymous Coward | more than 2 years ago | (#37256800)

No, they're trying to turn it into cable TV. One-way delivery, only "approved" content providers get distributed, and everything that's not on the official local CDN gets throttled way down. Easier censorship is just a side benefit.

That's network neutrality for you (1)

Anonymous Coward | more than 2 years ago | (#37255462)

A network where only big players can afford fast delivery is not neutral. CDNs starve the actual network in favor of local caches. Money that would have to go to bandwidth improvements now goes to the CDNs, which in turn are only used by global players. This leaves us with an anemic network that can deliver Youtube clips quickly but chokes on broadband communication between individuals on different continents. And if that weren't bad enough, they abuse DNS to do it.

If I type google.com into my browser, I actually want to get an English web page from a computer in the USA, not a redirect to a page in the language of my country, and not a page served by a computer in my country either.

Cache poisoning in 3.. 2.. 1.. (1)

kheldan (1460303) | more than 2 years ago | (#37255466)

Sorry, I may just have woken up on the wrong side of the bed this morning.. but wouldn't doing as TFA suggests just open the door to another MITM attack method?

Re:Cache poisoning in 3.. 2.. 1.. (1)

Em Adespoton (792954) | more than 2 years ago | (#37256106)

Not really... but it would provide another Man-on-the-end attack method, where the owner of the DNS gets much more information and control regarding the endpoint. Someone who has a malicious DNS host would have much more control over how to direct traffic based on geolocation before you even reach their server. People from a specific location could be routed via a middleman to sniff/poison the data via this method.

A man in the middle already knows your entire IP, so they gain pretty much nothing here that they didn't already have.

So, a possible attack: someone registers, say, www.facebook.co and runs the DNS. When people from Iran attempt to connect, the IP returned is for a proxy used to sniff/poison the connection, and the person is apparently redirected to www.facebook.com. When anyone else attempts to connect, the DNS passes the request to Facebook's DNS for a lookup on www.facebook.com. As a result, anyone outside of Iran investigating this fishy domain will get an apparently standard connection to Facebook, with the only warning bell being that facebook.co is registered to another entity.

Re:Cache poisoning in 3.. 2.. 1.. (1)

Em Adespoton (792954) | more than 2 years ago | (#37256144)

An added bonus for DNS providers is that they get a bit more telemetry on where lookups for domains are coming from.

Nice euphemism (0)

Anonymous Coward | more than 2 years ago | (#37255482)

"downloading [drivers] from an international source"

So that's what you kids are calling it these days. Heh.

Yes we are saving (1)

onepoint (301486) | more than 2 years ago | (#37255490)

What I like about this entire thing is that we can save on bandwidth, which should also lead to some power savings overall. Not a lot, but just another drop, and those drops could add up and become something useful in the future.

For IPv4? (1)

jfengel (409917) | more than 2 years ago | (#37255494)

I realize that IPv4 is going to be with us for quite some time, but is this going to be worth the effort? It requires a bit of jiggery-pokery to repoint your DNS, the kind of thing that appeals to the Slashdot crowd but which your grandma will never, ever pull off. ISPs could help, but will they do so before IPv6 makes it irrelevant?

Re:For IPv4? (1)

vlm (69642) | more than 2 years ago | (#37255708)

I realize that IPv4 is going to be with us for quite some time, but is this going to be worth the effort? It requires a bit of jiggery-pokery to repoint your DNS, the kind of thing that appeals to the Slashdot crowd but which your grandma will never, ever pull off. ISPs could help, but will they do so before IPv6 makes it irrelevant?

It's done on their side. I'm reading between the lines and trying to unfilter the dumbed-down journalist-speak, but I think I could implement the same thing by configuring about 16 million BIND "views", one for each a.b.c in the source IP address range a.b.c."whatever". BIND then gives a different CNAME for the domain inside each view: if my source address is 1.2.3.4, their BIND server barfs out a CNAME of 3.2.1.www.whatever.com whenever someone asks for www.whatever.com, and some other sorta thingy decides where each of the 16 million c.b.a.www.whatever.com domains resolves to geographically, perhaps even somewhat automatically.
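
A rough sketch of that per-prefix CNAME trick is below; all names and addresses are hypothetical, and a real deployment would drive this from a geolocation database rather than 16 million views.

    # Encode the client's /24 into a CNAME, then map that name to a
    # regional cache. Purely illustrative.
    def prefix_cname(client_ip: str, domain: str = "www.whatever.com") -> str:
        a, b, c, _ = client_ip.split(".")
        return f"{c}.{b}.{a}.{domain}"

    # Stand-in for the "some other sorta thingy" that decides where each
    # reversed-prefix name resolves geographically.
    REGIONAL_CACHES = {
        "3.2.1.www.whatever.com": "203.0.113.50",   # e.g. a US cache
    }

    name = prefix_cname("1.2.3.4")
    print(name, "->", REGIONAL_CACHES.get(name, "default cache"))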

With IPv6 it's somewhat simpler: if you (the ISP) get a /40 or /32 or whatever, that's probably all you'll ever get. So you don't need to map out individual end-user /64s, at least hopefully/probably not.

The problem with caching is that random-access drive speed etc. has not been increasing as fast as bandwidth use. Where a slightly tuned-up desktop made a decent, tolerable cache around 2000, by 2010 making the cache better than direct access required monstrous gear and complicated setups. It rapidly becomes cheaper to buy more bandwidth.

The standard /. car analogy: much like the simplest, cheapest, and most reliable way to get 1000 horsepower is to buy a 1000 HP engine, not to fancy up a stock 100 HP Civic engine. Or a better analogy might be: if you wanna drive your car 400 miles across the desert, it's probably a hell of a lot better engineering to install a 400-mile-equivalent aux fuel cell/tank than to deploy multiple "just-in-time" fuel caches along the route using autonomously guided, GPS-equipped Saturn V rockets, although either would technically work, and the Saturn V deployment would be very exciting and photogenic.

Re:For IPv4? (1)

LDAPMAN (930041) | more than 2 years ago | (#37257014)

"The problem with caching is random access drive speed etc has not been increasing as fast as bandwidth used. So where a slightly tuned up desktop made a decent tolerable usable cache around 2000, around 2010 to make the cache better than directly access, you need monsterous gear and complicated setups. Rapidly it becomes cheaper to buy more bandwidth."

Modern caching systems primarily return data from memory, not from disk. Even if you were to pull the data from disk, the disk is normally an order of magnitude faster than the internet connection.

Akamai? (2)

vlm (69642) | more than 2 years ago | (#37255504)

Description seems a little simplified. Sounds like an Akamai presentation from over a decade ago.

So is this a commercial competitor to Akamai, or a non-commercial competitor, or a freeware / public competitor, or is it something somewhat different, like a Squid proxy set up for transparent caching from 2002 or so?

Speaking of Squid, it's 2011; is Squid ever gonna support IPv6? There's not much software out there that doesn't support v6, and Squid is probably the most famous holdout.

Re:Akamai? (4, Informative)

Binary Boy (2407) | more than 2 years ago | (#37255678)

Speaking of Squid, it's 2011; is Squid ever gonna support IPv6? There's not much software out there that doesn't support v6, and Squid is probably the most famous holdout.

http://wiki.squid-cache.org/Features/IPv6 [squid-cache.org]

Re:Akamai? (-1, Flamebait)

vlm (69642) | more than 2 years ago | (#37255770)

Speaking of Squid, it's 2011; is Squid ever gonna support IPv6? There's not much software out there that doesn't support v6, and Squid is probably the most famous holdout.

http://wiki.squid-cache.org/Features/IPv6 [squid-cache.org]

Thank you for researching that.

Hurray for squid-dy... I'll mosey on over to http://packages.debian.org/search?keywords=squid [debian.org] and install it... Oh... I see.

Apparently version 3.1 will probably be released with the next version of Debian, but it's not yet available, not even in unstable. Till then it's 2.7.

http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=299706 [debian.org]

Re:Akamai? (1)

Anonymous Coward | more than 2 years ago | (#37255920)

Yeah.... http://packages.debian.org/search?suite=default&section=all&arch=any&searchon=names&keywords=squid3

How's that thing called google getting along for you?

Re:Akamai? (0)

Anonymous Coward | more than 2 years ago | (#37256058)

Oh... I see

Actually, you don't seem to see much. Squid3 3.1 has only been available since the end of March 2010

Re:Akamai? (0)

Anonymous Coward | more than 2 years ago | (#37257138)

http://lmgtfy.com/?q=how+to+compile+squid

Squid is one of the easier ones to compile and install...

Re:Akamai? (0)

Anonymous Coward | more than 2 years ago | (#37257292)

perhaps you missed this:
http://packages.debian.org/squeeze/squid3

Re:Akamai? (0)

Anonymous Coward | more than 2 years ago | (#37257368)

It's Squid's fault Debian hasn't included the latest and greatest?
Mosey on over here to http://www.squid-cache.org/Versions/v3/3.1/ instead.

Re:Akamai? (3, Informative)

lurp (124527) | more than 2 years ago | (#37255756)

None of the above. It's a scheme to pass your IP address to CDNs such as Akamai so that they can select an edge server that's closer to you. Absent this, CDNs select the edge server closest to your DNS provider; that's fine if you're using your ISP's DNS, but in the case of OpenDNS or Google Public DNS, that's likely a poor choice.

Re:Akamai? (1)

x3dfxjunkie (1170551) | more than 2 years ago | (#37255926)

Speaking of Squid, it's 2011; is Squid ever gonna support IPv6? There's not much software out there that doesn't support v6, and Squid is probably the most famous holdout.

Hell, I'd be happy if it used HTTP 1.1! It's only been a standard since June 1999 [ietf.org].


conditional (2)

shentino (1139071) | more than 2 years ago | (#37255732)

I'm all for this if and only if all protocols are fully complied with.

HTTP gives plenty of leeway and in fact was designed with caching in mind. So long as the involved parties do not violate the protocol, I'm ok with it. Cache control directives must be honored, for example. No silent injection of random crap.

The DNS protocol must also be honored to the extent that deviations from same have not been expressly authorized. OpenDNS offers typo correction and filtering services on an opt-in basis. NXDOMAIN hijacking and whatnot is foul play.

Just don't fuck with the protocols, and you can do whatever you want.

Better idea (1)

Lawrence_Bird (67278) | more than 2 years ago | (#37255876)

How about we get rid of all the linkages to crap sites like Facebook, Digg, MySpace, etc.? These are the links that are almost always responsible for slow page loads for me. I rarely have any issue at all with downloads or videos, local or international. It's interesting, too, that when using NoScript, some webpages literally twitch when something like Facebook is blocked, as they try over and over to reload and reconnect. And as others have said, these local caches are just another security risk waiting to be exploited.

Alternately... (3, Insightful)

Score Whore (32328) | more than 2 years ago | (#37255884)

Google could just not provide a service that inserts itself into the DNS path. The problem isn't "the internet" or DNS, it's that Google's DNS servers have no relationship to the client systems. If people were using DNS servers that had some relationship to their network -- such as the one provided by their ISP -- then this wouldn't be an issue.

Plus not using Google's DNS gives you a little more privacy. Privacy of course being defined as not having every activity you do on the internet being logged by one of Google's many methods of invading your space (DNS, analytics, search, advertising, blogger, etc.)

Re:Alternately... (1)

mehrotra.akash (1539473) | more than 2 years ago | (#37256110)

When your ISP's DNS randomly redirects you to a webpage advertising their products, or typing www.google.com in the address bar leads to an ISP page with ads and a small link to www.google.com, 8.8.8.8 does save you a lot of hassle.

Online advertisement is to blame for a slow web. (1)

Anonymous Coward | more than 2 years ago | (#37255972)

While this is a commendable effort, the biggest offender when it comes to horrible load times is embedded advertising content. Most of these ad providers -- I'm looking at you, Google -- deliver their content at a snail's pace, delaying the delivery of the actual content.

Case in point: I used Chrome's network developer tools to analyze load times for the various elements on http://www.slashdot.org

The top 5 longest durations go to *.doubleclick.net and range from 242 to 439ms, with the total page load time at a little under 4 seconds.

Happy to answer questions (2)

davidu (18) | more than 2 years ago | (#37256064)

Happy to answer questions about this work.

Re:Happy to answer questions (1)

glassware (195317) | more than 2 years ago | (#37257328)

What's the process to go about using this if I'm currently using round robin for, say, 10 servers? Do I need to switch to a DNS server that can obey your extra tag and select the correct closest IP?

Stupid workaround for stupid server code (1)

pslam (97660) | more than 2 years ago | (#37256080)

Messing with DNS is doing it the Wrong Way. All of these CDN services are based on HTTP. When you're using them, that's an HTTP server you're talking to. It's perfectly capable of geolocating you by IP, and it can either hand you back links to a local CDN, or redirect you to another server.

Why the hell must we mess with DNS to do this? This is a solution which only works if you use Google DNS, OpenDNS, or sometimes your local ISP's DNS. What if you're just running BIND for your local net vs. the root servers? Bzzt. Doesn't work.

The most insane thing about this is that it's Google we're talking about here. They damn well know how to implement this entirely in their servers without resorting to DNS hacks. Why are they promoting this net-neutrality-breaking, layering-violating botch? We need fewer people to use this, not more.

Akamai? (0)

Anonymous Coward | more than 2 years ago | (#37256266)

Just to reuse the title from above, for more effect. I am guessing Akamai owns the easy and obvious ideas in pretty tight patents. That is one of the reasons they have limited competitors.

Re:Stupid workaround for stupid server code (1)

KingMotley (944240) | more than 2 years ago | (#37256458)

Because it works, and there is more out there than just HTTP. This same approach will work for any protocol that uses DNS to resolve domain names. It also doesn't require a ton of server-side hacks that would need to be implemented by every vendor. It's an easy fix, quick to deploy, and requires no end-user code or system changes. Seems like a no-brainer to me.

Re:Stupid workaround for stupid server code (1)

pslam (97660) | more than 2 years ago | (#37256678)

Because it works, and there is more out there than just HTTP. This same approach will work for any protocol that uses DNS to resolve domain names.

Except that this is only used for HTTP. I do not know of any non-HTTP examples.

Re:Stupid workaround for stupid server code (0)

Anonymous Coward | more than 2 years ago | (#37257572)

FTP? SMTP? Distributed file shares? SQL server mirrors? NNTP? NTP? I could probably list a dozen more.

Re:Stupid workaround for stupid server code (1)

pslam (97660) | more than 2 years ago | (#37257616)

FTP? SMTP? Distributed file shares? SQL server mirrors? NNTP? NTP? I could probably list a dozen more.

Yes, that's a list of common internet protocols. I do not know of any example of anyone actually using geolocated DNS to CDN them. Do you have a concrete example?

Re:Stupid workaround for stupid server code (1)

slamb (119285) | more than 2 years ago | (#37258348)

Yes: Gmail (IMAP and SMTP), Google Chat (XMPP), etc. Try it out...do the DNS lookups yourself from different places.

Re:Stupid workaround for stupid server code (1)

slamb (119285) | more than 2 years ago | (#37256490)

All of these CDN services are based on HTTP. When you're using them, that's an HTTP server you're talking to. It's perfectly capable of geolocating you by IP, and it can either hand you back links to a local CDN, or redirect you to another server.

Then it's not possible to geolocate that first HTTP request.

What if you're just running BIND for your local net vs. the root servers? Bzzt. Doesn't work.

It should work, although it may not be necessary. I see six possibilities:

  • Your local bind is configured to send queries to the authoritative servers for the domain.
    • You're using this extension: it sends along the web browser's IP address so it gets back a geolocated response for that address.
    • You're not using this extension: it doesn't send the web browser's IP address so it gets back a geolocated response for its own address. The two addresses are likely the same (or nearly so) so this distinction is irrelevant.
  • Your local bind is configured to send queries on to some other recursive DNS server. (You take advantage of their cache to reduce your DNS latency.)
    • You're using this extension and the other server relays the extra data: you get a geolocated response for your web browser.
    • You're using this extension and the other server drops the extra data: you get a geolocated response for the other server.
    • You're not using this extension and the other server adds the extra data: you get a geolocated response for your DNS server.
    • You're not using this extension and the other server doesn't add the extra data: you get a geolocated response for the other server.

In all cases, the geolocated DNS response is at least as good as before, and in some it's an improvement, depending on how far the DNS servers are from you. If there's a latency degradation from this change, it'd be in the other recursive DNS server being slower to respond because the only cached responses it has are for different subnets. You can imagine techniques to mitigate that effect, though I haven't checked if the described work does so.

Full disclosure: I work for Google, but not on this.

Re:Stupid workaround for stupid server code (1)

anom (809433) | more than 2 years ago | (#37256514)

If you're running BIND for your local net, then you don't need this, as your DNS resolver is already located close to you. The problem arises when DNS resolvers are used that are not "close" to the clients they serve, and therefore CDNs will often end up picking a CDN replica close to your resolver rather than close to you.

Obviously this problem grows with the distance between you and your resolver -- if you're using a huge resolving service like Google DNS or OpenDNS, then you are much more likely to be far from your resolver. If you're using your ISP's resolver, then it could be just a few hops up the network path, or it could be across the country (as some ISPs will just use a "bank" or two of resolvers).

This stuff is done in DNS for a variety of reasons. If you use intelligence at the HTTP layer, you:

1. Obviously have a non-optimized initial server choice, as once you're communicating over HTTP you're already talking to a specific replica. This will likely apply for each and every new CDN-ized domain you use.

2. You require the client to add significant intelligence to their website in order to create all the internal links to point to a "good" server. Obviously, it's going to be harder to sell your services if the client has to rewrite a bunch of code and can't just repoint their main domain at your IPs.

3. And IMO most importantly, this removes the server selection choice from being under the sole control of the CDN provider. If this stuff is logic'd through the main HTTP page of the website, the CDN must expose its server selection strategy to the client, which is likely proprietary business knowledge. Furthermore, the server selection map is dynamic, changing rapidly based upon internet link congestion and server load, and obviously this data would have to be pushed to the client website as well. Also, if you're thinking you could just point the initial IP to a CDN-hosted HTTP server and issue HTTP redirects from there, then you've just eaten up two whole RTTs -- not a good way to speed up webpages.

Also, to those that say this aids censorship, I'd have to call BS. A country wishing to censor its own users can easily implement a "use our dns resolvers only" policy using a simple firewall rule and watch all the traffic/rewrite dns responses anyways.

Re:Stupid workaround for stupid server code (1)

pslam (97660) | more than 2 years ago | (#37256650)

And here we have the real reason why this is being promoted:

3. And IMO most importantly, this removes the server selection choice from being under the sole control of the CDN provider. If this stuff is logic'd through the main HTTP page of the website, the CDN must expose its server selection strategy to the client, which is likely proprietary business knowledge.

It breaks DNS. It certainly breaks my local DNS installation, for starters. It also means that *everyone* must use this DNS hack because service will be degraded unless you do.

Re:Stupid workaround for stupid server code (1)

anom (809433) | more than 2 years ago | (#37256784)

"It breaks DNS" seems like a pretty strong comment to me and I'm not following how exactly it's going to do this. If you have a local DNS installation (I assume you're talking about dns /resolvers/ here?) that local machines use, there is absolutely no need for you to implement this, as any CDN basing a server selection choice on your local DNS installation will be well-guided. Your resolver won't send the applicable EDNS option, and the authoritative DNS server won't care that it's not there -- it'll just base it's choice on the resolver's IP as has happened for years.

If you're running an authoritative DNS server, then you're not going to get the EDNS field from google/opendns because they're not going to send it to you, and if you did get it, it would only be a problem if you had a backwards DNS server that pukes on EDNS.

How is it breaking things for you?

Re:Stupid workaround for stupid server code (0)

Anonymous Coward | more than 2 years ago | (#37256544)

Yes. Can anyone explain why DNS tricks are better than HTTP redirects? Is this just for the cosmetic nicety of retaining the same host/URL in the browser address bar regardless of which physical server is sending the content?

Local DNS cache (1)

Wowsers (1151731) | more than 2 years ago | (#37256138)

Maybe it's about time the Windows operating system was given the ability to cache DNS queries locally BY DEFAULT. It would speed up lookups for the sites the user visits most, and take load off the internet from repeated requests for sites the user keeps visiting.

It already does (1)

_0xd0ad (1974778) | more than 2 years ago | (#37256252)

If you went out of your way to Disable Client-Side DNS Caching [microsoft.com], that's hardly something you can blame Windows for...

ONLY PROBLEM IS, it "chokes" (0)

Anonymous Coward | more than 2 years ago | (#37258490)

w/ large HOSTS files (Windows LOCAL DNS clientside cache service).

YES - Windows LOCAL DNS CLIENTSIDE CACHE SERVICE "chokes" with larger HOSTS files & lags you. I noted this to MS' Senior mgt. in their "Windows Client Performance Division" in fact (Foredecker/Mr. Richard Russell who used to post here quite a lot in fact) here on this very website years ago:

http://slashdot.org/comments.pl?sid=1467692&cid=30384918 [slashdot.org]

Along with the fact that a SMALLER & FASTER/MORE EFFICIENT BLOCKING "IP ADDRESS" CAN STILL BE USED IN Windows 2000/XP/Server 2003!

(Where it could be used in VISTA (& onwards in Win7/Srv2k8) up until MS "Patch Tuesday" on 12/09/2008)

That FASTER/BETTER one, is 0 - Used as the blocking address (vs. the largest/slowest in the 127.0.0.1 loopback adapter address, OR the slightly better & just as compatible 0.0.0.0 blocking address (no loopback hit is incurred there either mind you)).

He never got back to me, though he said he would here, AND in email... I didn't like THAT very much!

Still It's nice to see that GOOGLE & OPENDNS are doing this though! (both of whom provide DNS services, OpenDNS also filters vs. phishing/spamming & possibly malware (Norton DNS does for sure on the latter though, so I use it alongside OpenDNS + ScrubIT DNS in my routers &/or IP stack settings)).

APK

P.S.=> HOWEVER/"Bottom-Line" here is this: nothing GOOGLE &/or OpenDNS can do will resolve host/domain names to IP addresses as fast as locally hardcoded favorites in the HOSTS file either... that's not all I use it for, though: mostly it's for blocking out 1,580,313++ KNOWN bad sites/servers/hosts-domains, botnet C&C servers, and sites KNOWN to serve up malicious script content or malware in general (for security), AND adbanners (for more of the bandwidth/speed that I pay for), which lends not only to speed online but also to security (since adbanners HAVE been found to house maliciously scripted content as well)...

... apk

Re:Local DNS cache (1)

sgt scrub (869860) | more than 2 years ago | (#37256374)

Maybe it's about time the Windows operating system was given the ability to cache DNS queries locally BY DEFAULT and open Windows back up to DNS cache poisoning, which was the reason it was disabled BY DEFAULT.

FTFY

How is this new again? (1)

Vrtigo1 (1303147) | more than 2 years ago | (#37256298)

So... the summary states "they've reworked their DNS servers so that they forward the first three octets of your IP address to the target web service". Uh, doesn't my browser send my WHOLE IP address to the web service when I make an HTTP request anyway? How is this different/better?

If what they meant to say is that the resolver sends the first three octets of my IP address to the destination's name servers when doing a recursive lookup, then how is this any better than using any old DNS? In other words, the big advantage of Google DNS, for example, is that it's free and fast. If their DNS server now has to ignore any cached records and do a recursive lookup for EVERY request, doesn't that negate the speed advantage? Obviously, once someone in my /24 requests www.itunesdownload.akamai.net, that specific IP should be cached for all requests from the same /24 for the TTL specified by the site operator. But for all other sites, which probably won't have been hit recently and thus won't be cached, I only see this adding more latency.

They should just do it in the web stack instead of trying to push it down into DNS. The user navigates to the download page, the webserver has your full IP, geolocates you, and redirects you to a download from a local server.
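A minimal sketch of that approach using only Python's standard library -- geo_lookup() and the mirror hostnames are invented placeholders, not any real CDN's machinery:

<ecode>
# Do the server selection in the web stack: the origin sees the client's
# full IP, maps it to a region, and 302-redirects to a nearby mirror.
from http.server import BaseHTTPRequestHandler, HTTPServer

MIRRORS = {
    "EU": "http://eu.mirror.example.com",
    "US": "http://us.mirror.example.com",
}

def geo_lookup(ip):
    """Placeholder: map the client's full IP to a region however you like."""
    return "EU" if ip.startswith("203.0.113.") else "US"

class DownloadRedirector(BaseHTTPRequestHandler):
    def do_GET(self):
        region = geo_lookup(self.client_address[0])   # full client IP available here
        self.send_response(302)
        self.send_header("Location", MIRRORS[region] + self.path)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), DownloadRedirector).serve_forever()
</ecode>

The trade-off, as raised earlier in the thread, is that this only kicks in after the client has already connected to some origin server, and it exposes the selection logic to the client.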

How I envy you... (1)

juancn (596002) | more than 2 years ago | (#37256310)

If you have a 10 or 20Mbps connection, and yet a download is crawling along at just a few hundred kilobytes

I would love to have a connection that "crawls along at just a few hundred kilobytes (a second)". Most of the time, when mine crawls, it does so at a few tens of kilobytes a second (sometimes even less than that).

I love the posted example (0)

Anonymous Coward | more than 2 years ago | (#37256342)

>(downloading software or drivers from a Taiwanese site is a good example)
not if you live in Taiwan

I saw the article title and it read.. (1, Insightful)

SuperCharlie (1068072) | more than 2 years ago | (#37256386)

Evil and OpenDNS Work On Global Internet Speedup.

I think from now on simply replacing the word Google with Evil should be an auto-correct feature.

Re:I saw the article title and it read.. (2)

Kozz (7764) | more than 2 years ago | (#37256882)

Evil and OpenDNS Work On Global Internet Speedup.

I think from now on simply replacing the word Google with Evil should be an auto-correct feature.

And who, exactly, is forcing you to use OpenDNS?

Latency vs. bandwidth (1)

aharth (412459) | more than 2 years ago | (#37256538)

There are two factors that affect the performance of web (HTTP) lookups: latency and bandwidth. Latency depends on the distance between client and server; you won't be able to send data faster than the speed of light. Bringing the data closer to the client helps reduce latency, especially for small lookups. Bandwidth becomes the limiting factor when you transfer (large amounts of) data over under-dimensioned pipes. In general, I'd be a much happier person if people used HTTP caching headers (Expires and such) more often, as then a Squid proxy can bring substantial performance gains.
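For example, response headers along these lines (the lifetimes and dates are purely illustrative) are what let a shared Squid cache answer repeat requests without going back to the origin:

<ecode>
HTTP/1.1 200 OK
Content-Type: text/css
Cache-Control: public, max-age=86400
Expires: Thu, 01 Sep 2011 12:00:00 GMT
Last-Modified: Wed, 31 Aug 2011 12:00:00 GMT
</ecode>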

Not an Internet Speedup (1)

GameboyRMH (1153867) | more than 2 years ago | (#37256958)

An Internet speedup would involve adding the ability to carry more bytes per second, analogous to changing your delivery vehicles from donkey carts to vans. This is just improving the logistics of the donkey cart-based delivery service.

What we need are shorter web pages (2)

Animats (122034) | more than 2 years ago | (#37257128)

What we need is less junk in web pages. The amount of dreck in web pages has gotten completely out of control. The home page of Slashdot has 3424 lines of HTML, not counting included files. This is not unusual. I'm seeing pages over 4000 lines long for newspaper stories. Pages with three to five different tracking systems. Hidden content for popups that's sent even when the popup isn't being loaded. Loading of ten files of CSS and Javascript for a routine page.

CSS was supposed to make web pages smaller, but instead, it's made them much, much bigger. Being able to include CSS from common files was supposed to improve caching, but instead, many content management systems create unique CSS files for each page.

And you get to pay for downloading all this junk to your smartphone. It doesn't matter what route it takes through the backbone; eventually all that junk crosses the pay-per-bit bandwidth-throttled air link to the phone.

Where bandwidth really matters is video. There, the video player already negotiates the stream setup. That's the place to handle choosing which source to use. Not DNS.

Looks OK, but what about anycast? (1)

fa2k (881632) | more than 2 years ago | (#37257410)

I was preparing to hate this (I don't trust Google that much & OpenDNS does some questionable things), but I can't find anything wrong with it from a privacy or openness perspective. I think there have already been huge DNS pools that return a random set of records, so the idea of DNS being universal and reproducible is gone, if that idea ever existed (and I see no reason why that would be a problem).

Isn't this what anycast is supposed to solve? For example, when using 6to4, one can specify a single, globally known address, but it takes you to a local 6to4 gateway (if it happens to work at a given ISP...). Would anycast support commercial entities setting up CDNs with a few anycast addresses that route to the nearest server farm? I don't know if I prefer this, since it requires putting some "intelligence" in the routers. Going by the adoption rate of IPv6, this wouldn't be supported by ISPs before 2030 anyway...

Sorry for not researching this further, but if anycast is better supported in IPv6, then that could be why the explanation page lacks any mention of IPv6. Sites would have to supply a few normal addresses in addition to the anycast address in DNS for redundancy, but browsers could try the anycast one first for speed. I think this would be a good solution, but I don't know if it's better than Google and OpenDNS's suggestion.

Re:Looks OK, but what about anycast? (0)

Anonymous Coward | more than 2 years ago | (#37257812)

Anycast isn't a good solution for TCP, since routing topology changes in the backbone can change which anycast endpoint your packets end up at, which breaks your TCP connection.

How would caching work? (1)

fa2k (881632) | more than 2 years ago | (#37257562)

(not related to my previous post:) How does this work with DNS caches? Say Ebay implemented this and gave back A records depending on the client IP address. If user A was using Google DNS in Norway and requested DNS records for Ebay, then Google could
  • a) Cache the reply and return those for all other users (B,C,D...). This would mean that all users of that Google DNS server, maybe over all of Europe, would get the A records for Ebay's Norway server (I don't think Ebay has a server in Norway, but never mind that)
  • b) Perform a new request to Ebay for each user of its DNS service, but this would slow down DNS requests!

So which one is it? Does Google have a separate cache for each /16? I'm interested to know how they deal with this. I guess it's not a dealbreaker, but all options involve some trade-offs.
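For what it's worth, the client-subnet draft tries to split the difference: the authoritative server returns a "scope" prefix length along with its answer, and the resolver may reuse that answer for any other client whose address falls inside that scope. A rough sketch of the idea (the data structures and helper names are invented for illustration; this is not Google's actual resolver code):

<ecode>
# Rough sketch of caching answers per (name, client subnet) as the
# client-subnet draft describes; invented for illustration only.
import ipaddress
import time

cache = {}  # (qname, network) -> (answer, expires_at)

def cache_put(qname, client_ip, scope_prefix, answer, ttl):
    # The authoritative server says its answer is valid for this whole
    # scope (e.g. a /20), so every client in that network shares the entry.
    network = ipaddress.ip_network(f"{client_ip}/{scope_prefix}", strict=False)
    cache[(qname, network)] = (answer, time.time() + ttl)

def cache_get(qname, client_ip):
    addr = ipaddress.ip_address(client_ip)
    for (name, network), (answer, expires_at) in cache.items():
        if name == qname and addr in network and time.time() < expires_at:
            return answer
    return None  # miss: forward the query upstream with the client's subnet attached

# A reply scoped to a /20 gets reused for a nearby client in the same range.
cache_put("cdn.example.com", "198.51.100.23", 20, ["203.0.113.10"], ttl=300)
print(cache_get("cdn.example.com", "198.51.100.200"))   # hit: same /20
print(cache_get("cdn.example.com", "192.0.2.1"))        # miss: different network
</ecode>

So in practice it's closer to (a) than (b), but with the reuse granularity chosen by the authoritative server rather than one flat cache per /16.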
