DNS Complexity

kdawson posted more than 7 years ago | from the loosely-specified dept.

The Internet

ChelleChelle writes "Paul Vixie of Internet Systems Consortium guides us on a journey into the sublime details of the domain name system. Although it rests on just a few simple rules, DNS has grown into a system of enormous complexity. This article explores the supposed and true definitions of DNS, and shows some of the tension between the two through the lens of the philosophy of Internet protocol development."

93 comments

If you liked this, you might also like.. (-1, Offtopic)

QuantumG (50515) | more than 7 years ago | (#19317825)

All About Mold: a quick look at our furry friends.

or

Stamp Collecting: it's not just for sexually repressed teens and old men anymore.

Today's Wikipedia treat (-1, Offtopic)

Anonymous Coward | more than 7 years ago | (#19317877)

Conan Christopher O'Brien, 44, is the comedian and the host of the Tonight Show with Jay Leno. He is Scottish, as were his parents, as well as his two brothers and three siblings. He has no relation to CNN anchor Soledad O'Brien.

O'Brien, who is 43, is commonly thought by television audiences to be of diminutive stature, though some journalists and alternative biographers dispute this claim.

As of 2007, O'Brien has been confirmed dead of tuberculosis. Efforts are being made to establishify a Pokémon character in his honour. He was 45.

Taking a risk (5, Insightful)

Anonymous Coward | more than 7 years ago | (#19317863)

I'm going to risk sounding like an idiot and say that I think it's inhuman that somebody could write an article explaining how DNS works without having at least one diagram in it. I mean, c'mon, I can wade through piles of opaque text with the best of them, but just throw me a bone here, alright?

That's the IETF Way (4, Informative)

Kadin2048 (468275) | more than 7 years ago | (#19317993)

Well, it was written by Paul Vixie, better known for writing a whole bunch of RFCs ... they're not known for being exactly graphics-heavy, either.

(Although some of them do have some neat ASCII art.)

Re:Taking a risk (1)

MikeBabcock (65886) | more than 7 years ago | (#19321737)


He's the guy who inadequately describes complex systems and then gets into arguments about how they ought to be implemented, because his implementations and his protocol descriptions don't always overlap.

That said, he's a fairly important player in the Internet world, despite needing a bit of a rethink on how he writes RFCs.

Re: We owe him a lot (1)

Douglas Goodall (992917) | more than 7 years ago | (#19334141)

While the DNS definition lives in RFCs, Paul's BIND implementation, along with the various commercially hardened versions he has worked on over the years, is part of the glue we depend on to hold the Internet together. His code runs on more Unix/Linux systems than you can imagine today. I am a real Vixie fanboi. The DNS paradigm sits on top of the address-resolution paradigm and has added the flexibility we have needed to grow the Internet over the last several decades. My hat is off to Paul.

Okay, I'll say it: Paul Vixie is a terrible writer (1)

Futurepower(R) (558542) | more than 7 years ago | (#19361617)

Judging from that one article, Paul Vixie is a terrible writer. His thoughts are disorganized. And, like many people with little understanding of how to write well, he is obviously not aware of the need for an editor.

James Michener, the famous writer (Tales of the South Pacific), was very intense about having his writing edited before he would present it to readers. Of one of his books he said that he and an editor read every word five times together.

This is NOT a comment about his achievements and contributions to the internet we all know and love. It is only a comment about his ability to express himself in writing.

The inability to communicate well limits achievements and the recognition of achievements.

Re:Okay, I'll say it: Paul Vixie is a terrible wri (1)

svanstrom (734343) | more than 7 years ago | (#19378445)

Writing style is often a habit formed by the understanding, or lack thereof, of the people you are writing for; when questioning a person's style you must ask yourself a lot of quite hard (to answer) questions about yourself, your understanding of the text and its style, and the writer.

So, is he really "a terrible writer" limited by his "inability to communicate," or was it just a case of an article written in a style not preferred by, or suitable for, you (and/or the general readership of wherever the article was made available)?

Wow. A real slashdot story (5, Funny)

m0nkyman (7101) | more than 7 years ago | (#19317919)

Been a while since I've seen one of these.

Re:Wow. A real slashdot story (-1, Flamebait)

Anonymous Coward | more than 7 years ago | (#19317937)

s/slashdot/boring/

Re:Wow. A real slashdot story (0)

Anonymous Coward | more than 7 years ago | (#19318393)

Old geezers like us keep whining about /. turning into a gossip show, but when they actually post a story that's "news for nerds, stuff that matters," look how many comments it gets.

Re:Wow. A real slashdot story (0)

Anonymous Coward | more than 7 years ago | (#19320629)

I thought this was a "real" geek story... Strangely enough, it's probably only us 20-30 year olds who are at all interested in this crap.

http://slashdot.org/article.pl?sid=07/05/30/000259&from=rss [slashdot.org]

Re:Wow. A real slashdot story (1)

m0nkyman (7101) | more than 7 years ago | (#19321375)

That story was posted after this one. I'm not quite nerdy enough to have invented a time machine to read slashdot stories... :P

Re:Wow. A real slashdot story (0, Troll)

ThreeSpace (1108453) | more than 7 years ago | (#19318959)

Perhaps I'm a snob, but I hardly consider this a real slashdot story. It's an overview of DNS, which any computer geek worth his/her salt should already be quite familiar with.

Re:Wow. A real slashdot story (1, Insightful)

Anonymous Coward | more than 7 years ago | (#19319307)

Maybe any networking geek worth his salt should know this. But any computer geek? I disagree.

As a numerical modelling and computer graphics geek I have to say that I know very little about DNS & network architectures in general, and that I learned something today.

Re:Wow. A real slashdot story (1)

pairo (519657) | more than 7 years ago | (#19319783)

Actually, even though I knew all that, I like it. Only... It's not really news. :-)

DNS DNS DNS DNS (3, Insightful)

mcrbids (148650) | more than 7 years ago | (#19317929)

While technically well written and clear, this is one of the most uninspiring pieces of work imaginable describing the values of DNS. It's so bad that I'd rather gouge my eyes out with a spoon. Highly technical and detailed while still being abstract, it's 100% accurate while still managing to be utterly devoid of any usefulness whatsoever.

Oh yeah, this is DNS we're talking about. Implementing it IS uninspiring and so abstract that it makes you want to gouge your eyes out with a rusty spoon.

But what DNS does is extremely exciting, and forms the foundation of what makes the Internet actually WORK for people. Think about it - when's the last time there was any major DNS failure? Never? Me too. Damned reliable, damned powerful, and damned easy to get you hooked up to the geek blogs, tunes, IRC, and whatever else we all crave.

Read this if:

A) You work with DNS regularly and want to know if you know enough for it to make some sense to you. (That's me)

B) You are thinking about implementing a DNS server.

Otherwise, move along, find something that might interest you, but take just a moment to reflect how difficult Internet life would be if DNS wasn't so well designed and crafted.

Here's the Cliffs' Notes version (5, Interesting)

Kadin2048 (468275) | more than 7 years ago | (#19318409)

Basically, Vixie's point in the whole article really isn't to rehash how DNS works (although he does basically do that), but to make a rather interesting point about complex systems.

His point is that large systems can become unimaginably complex, even when they begin with a very simple set of rules. Particularly when those rules are vague.

Although he doesn't say it explicitly, I think there are probably some similarities between neural networks and DNS -- both begin with very simple rules, and then the complexity comes out of the sheer number of connections when you scale it up. Likewise, with DNS, you can have a very simple implementation (say, for a home office) that's quite easy to understand and use. Everything makes sense. It's basically understandable. But then, take that same protocol, even some of the same software, and scale it up to a few billion nodes or whatever DNS has these days, and suddenly the whole thing is so complex that nobody can even begin to really understand it in its entirety. You can't even predict, exactly, how it's going to react to any change -- it's very much like a complex organic system at that point. You can perform experiments on it and make hypotheses, but even though it's an entirely deterministic system (or ought to be), it acts mysteriously.

Re:Here's the Cliffs' Notes version (0)

Anonymous Coward | more than 7 years ago | (#19318537)

A neural net - one day, when the DNS spans multiple star systems - will it become sentient?

Re:Here's the Cliffs' Notes version (2, Interesting)

apathy maybe (922212) | more than 7 years ago | (#19318763)

So, sort of like the Game of Life then? I've made the point before (not here, though) that the Game of Life is a good example of simple rules creating a (potentially very) complex situation. I then used it to draw an analogy with the universe we live in.

Take gliders, for example: you could observe them and work out how they work, but each type of glider works differently, so you would have a different set of rules for each. Unless you noticed that the same (fourish) rules worked all the time. And if you make the system much, much bigger (really have an "infinite" game board), then your constructs get much bigger, and it becomes much harder to see the simple rules underlying the whole thing.
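
A minimal Python sketch of that point: the complete B3/S23 rule set of the Game of Life fits in a few lines, yet a glider emerges and propagates from it.

    # Conway's Game of Life in a few lines: live cells are a set of (x, y) pairs.
    from itertools import product

    def step(live):
        """One generation of the B3/S23 rules."""
        counts = {}
        for (x, y) in live:
            for dx, dy in product((-1, 0, 1), repeat=2):
                if (dx, dy) != (0, 0):
                    cell = (x + dx, y + dy)
                    counts[cell] = counts.get(cell, 0) + 1
        return {cell for cell, n in counts.items()
                if n == 3 or (n == 2 and cell in live)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):
        glider = step(glider)
    print(sorted(glider))  # the same five-cell shape, shifted one cell diagonally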

Re:Here's the Cliffs' Notes version (1)

amorsen (7485) | more than 7 years ago | (#19319291)

I've made the point before (not here though), that The Game of Life is a good example of simple rules creating a (potentially, very) complex situation.

Game of Life is Turing Complete. (Proved the boring way, by simply implementing logic gates.)

BECAUSE of simple rules (4, Insightful)

CarpetShark (865376) | more than 7 years ago | (#19319805)

His point is that large systems can become unimaginably complex, even when they begin with a very simple set of rules. Particularly when those rules are vague.


It might be more accurate to say that systems can become unimaginably complex BECAUSE they have simple rules. The more rules, the more limitations.

Re:DNS DNS DNS DNS (4, Insightful)

isaac (2852) | more than 7 years ago | (#19318503)

Read this if:

A) You work with DNS regularly and want to know if you know enough for it to make some sense to you. (That's me)

B) You are thinking about implementing a DNS server.

Otherwise, move along, find something that might interest you, but take just a moment to reflect how difficult Internet life would be if DNS wasn't so well designed and crafted.


I admire Paul Vixie a real whole lot (from afar; when the day comes that I have something interesting to say to him directly I'll be sure to mention it, but until then, I'm sure he gets enough email). That said, this article isn't really interesting to someone who really does work intensively with DNS implementations, and for whom intermediate caching nameserver and client resolver behaviour on the wild-and-woolly Internet is a matter of near-daily concern.

It's actually rather depressing insofar as it only confirms what those of us in this position have come to discover: that a system loosely defined has become an ecosystem incapable of complete definition. FTA: "Most of it is not written down anywhere, and some of it would still be considered arguable if you got two or three DNS implementers in a room to talk about it." Ain't that the truth.

No, this article should be read by smart technical users and managers who don't have much experience with DNS and who intuitively believe that the way DNS works in the real world is well defined and handed down from on high on stone tablets by some standards-making body - the sort of well-meaning people who haven't yet realized what "RFC" stands for, if you will. For these people, this article could be a useful eye-opener.

-Isaac

Re:DNS DNS DNS DNS (2, Insightful)

isaac (2852) | more than 7 years ago | (#19318573)

To reply to myself...

It's actually rather depressing insofar as it only confirms what those of us in this position have come to discover: that a system loosely defined has become an ecosystem incapable of complete definition.


"Depressing" is the wrong word here - though it can certainly be frustrating to continually confront problems that wouldn't be problems if DNS weren't such a losely-defined protocol. When the scales truly fall from one's eyes, though, one realizes that it's not coincidental that the widely-adopted protocols of the internet are all simple and, mostly, loosely defined and easy to implement. Natural selection, of a sort, has led to the success of DNS (and TCP/IP, and HTTP, et cetera). Maybe a major change in the ecosystem will cause it to disappear (or be challenged in its niche) because it's simply not flexible enough to respond.

More probably, DNS is sufficiently simple and ubiquitous that it will continue to evolve as necessary in mostly minor ways while remaining as recognizable to us dinosaurs of the Internet era as the cockroach would be to the dinosaurs of the dinosaur era.

-Isaac

Re:DNS DNS DNS DNS (1)

rthille (8526) | more than 7 years ago | (#19452081)

Reminds me of the essay about C vs. Lisp and the Berkeley UNIX hackers vs. the Harvard OS implementors...

but I'm too lazy to find a link.

Re:DNS DNS DNS DNS (1)

Richard W.M. Jones (591125) | more than 7 years ago | (#19319117)

Think about it - when's the last time there was any major DNS failure? Never? Me too.

I remember when Network Solutions forgot to load about half the .com zone file one night [interesting-people.org] , and most of the internet disappeared. That was only 10 years ago.

Rich.

Re:DNS DNS DNS DNS (2, Interesting)

pairo (519657) | more than 7 years ago | (#19319797)

For me, it happens every other month or so, with the .ro registrar screwing things up on a regular basis. Last time, everything newer than 2002ish went AWOL for almost a full day.

Weakness of DNS (2, Interesting)

Anonymous Coward | more than 7 years ago | (#19319295)

I'd rather say that DNS is damned weak. It's probably the weakest point in the Internet infrastructure as a whole, and that's saying a lot. DNS was chosen by the SANS Institute as one of the top 20 Internet vulnerabilities in 2006:

http://www.sans.org/top20/ [sans.org]

Last time there was a major DNS failure? The DNS system relies on 13 root servers. In 2002, nine of them went down due to a DDoS attack, and the whole Internet was very slow or unreachable for an hour. This year in February almost three of the servers crashed due to another DDoS, which moved the Department of Defense to say that next time they will counterattack and even bomb the source of the DDoS, so you can guess how important it is.

By the way, remember that Paul Vixie's BIND is just one implementation and it's considered to be flawed by some wise people:

http://cr.yp.to/djbdns/blurb/unbind.html [cr.yp.to]

Re:Weakness of DNS (1)

ConceptJunkie (24823) | more than 7 years ago | (#19319729)

This year in February almost three of the servers crashed due to another DDoS

"Almost three"? Would that be "two"? Just funnin' ya, I assume you meant "almost crashed."

DoD thinking about bombing the source of the DDoS is pretty crazy. I mean, what if it's a script kiddie in my neighborhood? ;-)

Seriously though, I'm sure that was just "tough talk". I can't imagine any scenario where military action wouldn't backfire in an unprecedented way, and that's even given the context that this administration has rewritten the book on military action backfiring.

Re:Weakness of DNS (1)

morcego (260031) | more than 7 years ago | (#19322855)

By the way, remember that Paul Vixie's BIND is just one implementation and it's considered to be flawed by some wise people:

http://cr.yp.to/djbdns/blurb/unbind.html [cr.yp.to]


Just because he is wise doesn't mean he doesn't have an agenda of his own (he does).

I get fairly pissed at DJB when he says BIND is flawed and parades his djbdns in front of us without mentioning that djbdns only implements a tiny part of what BIND does. Well, guess what? djbdns is not an option for 90% of the DNS servers (at least). His solution for complex problems that are prone to have security implications is simply not implementing them. Well, doh!

And yes, Paul Vixie is known for providing buggy, security-flawed software. I particularly remember some serious issues with his cron daemon. I just don't think DJB is a good judge of any software he provides an "alternative" to.

Re:Weakness of DNS (1)

johnwarburton (918763) | more than 7 years ago | (#19331973)

The DNS system relies on 13 servers

Ahhh, maybe not. It would be better to say that the "root servers" are each made up of quite a number of machines implementing some level of high availability, which usually requires more than one server.

For example, the F root server [isc.org], operated by Mr Vixie's ISC, is 40 distributed servers around the world accessed by a hierarchical anycast technique.

John

Re:DNS DNS DNS DNS (1)

bestalexguy (959961) | more than 7 years ago | (#19323799)

While technically well written and clear

Well, is it? From the article:

"The DNS namespace has a tree structure, where every node has a parent except the root node, which is its own parent."

This isn't correct. The root node is no exception and DOES have a parent, which is named in the last clause of the quoted sentence.
Sure, this sounds overly meticulous, as any good formal definition should be. Just rewrite:

"The DNS namespace has a tree structure, where every node has a parent, with the root node being its own parent."

Public DNS is corrupt, but Private DNS is sublime. (4, Interesting)

Zombie Ryushu (803103) | more than 7 years ago | (#19318057)

The Public DNS System has become corrupted. It used to be edu, com, org, net, and country codes.

Then the bribes started; now we have .info, .tv, and God knows what else.

Internally, I use DNS and I would never replace it. Just secure it. All my internal updates for my home DNS system work like this: using the LDAPDNS system, my reverse lookup zones become distinguished containers, like

relativeDomainName=1+zoneName=0.168.192.in-addr.arpa,dc=0,dc=168,dc=192,dc=in-addr,dc=arpa

(I'm the guy who wrote this.)

http://slashdot.org/comments.pl?sid=235321&cid=19190073 [slashdot.org]

That. My zone updates are then wrapped up in SSL and replicated to my other Domain Controller. I would suggest that DNS return to its roots, restore the old Domain hierarchy and discontinue all these other TLDs, but they won't. There is too much money to be illegitimately made off the corruption of DNS.
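
For readers unfamiliar with the notation, here is a minimal sketch of how such a DN can be derived from a private IPv4 address; the attribute names simply mirror the example above, and the exact LDAPDNS schema should be treated as an assumption.

    def reverse_dn(ip):
        """Build a reverse-zone DN in the shape shown above from a dotted-quad IP."""
        o1, o2, o3, o4 = ip.split(".")
        zone = f"{o3}.{o2}.{o1}.in-addr.arpa"
        rdn = f"relativeDomainName={o4}+zoneName={zone}"
        base = ",".join(f"dc={label}" for label in zone.split("."))
        return f"{rdn},{base}"

    print(reverse_dn("192.168.0.1"))
    # relativeDomainName=1+zoneName=0.168.192.in-addr.arpa,dc=0,dc=168,dc=192,dc=in-addr,dc=arpa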

Re:Public DNS is corrupt, but Private DNS is subli (1, Funny)

Anonymous Coward | more than 7 years ago | (#19318109)

You lost me at "distinguished containers."

Re:Public DNS is corrupt, but Private DNS is subli (2, Informative)

Rob_Warwick (789939) | more than 7 years ago | (#19318131)

.tv is the country code for Tuvalu.

Re:Public DNS is corrupt, but Private DNS is subli (2, Insightful)

Zombie Ryushu (803103) | more than 7 years ago | (#19318303)

Oh... well my point is still valid. DNS Should not be a tool for politicians.

Re:Public DNS is corrupt, but Private DNS is subli (3, Informative)

bursch-X (458146) | more than 7 years ago | (#19319227)

Tuvalu's main motivation for selling .tv domains was to get the money together to become a member of the UN, so they could officially have a voice concerning their country (their islands) basically sinking into the ocean due to global warming and rising sea levels.

So sometimes politics and DNS might be for a good cause.

Re:Public DNS is corrupt, but Private DNS is subli (1)

bentcd (690786) | more than 7 years ago | (#19319347)

DNS Should not be a tool for politicians.

So you're basically saying there shouldn't be country codes?

Re:Public DNS is corrupt, but Private DNS is subli (2, Interesting)

grasshoppa (657393) | more than 7 years ago | (#19318369)

I have a better idea: let's open the process for creating a new TLD to everyone, with a minor cost to cover the administrative overhead of setting one up, and that's it. True, we cheapen existing TLDs considerably, but they're artificially overpriced anyway.

It's not like it's a technical issue. The DNS system doesn't care how many TLDs there are; it's irrelevant to the immediate search.

Re:Public DNS is corrupt, but Private DNS is subli (1)

Tony Hoyle (11698) | more than 7 years ago | (#19319179)

It is technical actually - the TLD server has to respond all of the time, every time, even when millions of people want it... caching reduces the load but doesn't eliminate it by any means.

If a domain goes down it affects one company. If a TLD goes down it affects thousands, perhaps millions (if .com failed for example).

You could argue that one server == one TLD is a bad model, and I wouldn't disagree... there's no reason one of the TLD companies couldn't run a couple of hundred of the things - but then, can you imagine what the likes of Verisign would do with that? Would it get cheaper? Hell no. They'd charge through the nose for it.

Re:Public DNS is corrupt, but Private DNS is subli (1)

jesboat (64736) | more than 7 years ago | (#19319533)

You... don't really sound like you know what you're talking about. (Sorry to be blunt.)

One TLD != one server; on the contrary, TLDs tend to have many, many servers.

The likes of Verisign, for example, run no fewer than 13 servers (a.gtld-servers.net through m.gtld-servers.net) for com and net, and, in reality, they almost certainly run many more, since each of those names is probably a cluster of actual machines.

The other gTLDs are managed similarly, and I'd be surprised if any other TLDs have fewer than 6 obviously distinct servers.

Even second-level domains are often redundant; many (all?) registrars in com/net/org require 2-3 nameservers per domain.
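
An easy way to check the size of that NS set yourself (a sketch assuming the dnspython package and a working recursive resolver):

    import dns.resolver  # pip install dnspython

    # List the NS records for "com." -- the a..m.gtld-servers.net set referred to above.
    answer = dns.resolver.resolve("com.", "NS")
    for name in sorted(str(rr) for rr in answer):
        print(name)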

Re:Public DNS is corrupt, but Private DNS is subli (0)

Anonymous Coward | more than 7 years ago | (#19329353)

.org is run by 1 linux machine (a 486/66 at that!) using djbdns and postgresql.

Re:Public DNS is corrupt, but Private DNS is subli (2, Insightful)

grasshoppa (657393) | more than 7 years ago | (#19325139)

As has already been pointed out, you can have a single TLD spread across several servers. You can also have multiple TLDs on a single server. More likely, you end up with a combination of these things: Multiple TLDs on a geographically disperse cluster of systems.

Re:Public DNS is corrupt, but Private DNS is subli (2, Interesting)

bvankuik (203077) | more than 7 years ago | (#19318475)

Who cares? Is something technically not right about the new TLDs? Or are you afraid someone else is making money off of it?

Re:Public DNS is corrupt, but Private DNS is subli (3, Interesting)

Professor_UNIX (867045) | more than 7 years ago | (#19319327)

Eliminate the domain squatters and you'll eliminate the push for alternative TLDs. I'm sure more than half the domain names in existence are typo-squatting domain hoarders. There's no legitimate reason we need to allow them to keep those domains. Get a posse together of people with a clue and start going through domains. When you come across one that is obviously a domain squatter, delete it and then put more emphasis on analyzing that guy's other domains and delete those if necessary too until you've cleaned up the system. It's not property, you're just leasing a label from the collective community and we can choose to take it back if you're being an asshat.

Re:Public DNS is corrupt, but Private DNS is subli (3, Insightful)

MT628496 (959515) | more than 7 years ago | (#19319641)

The problem is that depending on who does these reviews, there will be entirely different results. I don't think that we can legally take the names back, anyway. It sure would be nice though if the /. community got to decide on it. Actually, that would be terrible. We'd spend the whole time fighting amongst ourselves.

Re:Public DNS is corrupt, but Private DNS is subli (1)

Mattintosh (758112) | more than 7 years ago | (#19322045)

I don't think that we can legally take the names back, anyway.

I'm pretty sure that ICANN. All puns aside, think about what that acronym means. Internet Corporation for Assigned Names and Numbers. They get to assign the names and numbers, and therefore they also have the authority to un-assign those names and numbers. ICANN giveth, ICANN taketh away. Ugh. That one wasn't intended. I'll stop now, but hopefully you get my point.

Re:Public DNS is corrupt, but Private DNS is subli (0)

Anonymous Coward | more than 7 years ago | (#19320115)

Not just the ones you listed, but also .mil and .gov too. .tv IS a country code (ccTLD) - for Tuvalu (http://en.wikipedia.org/wiki/Tuvalu).

Re:Public DNS is corrupt, but Private DNS is subli (1)

Dogtanian (588974) | more than 7 years ago | (#19321431)

The Public DNS System has become corrupted. It used to be edu, com, org, net, and country codes. Then the bribes started, now we have .info, .tv, and god knows what else. Internally, I use DNS and I would never replace it. Just secure it.
The problem here is that you're mixing two, if not three, distinct issues: the DNS specification (and its implementations) and the choice of top-level domains. That the latter may be badly chosen is not an inherent flaw of DNS itself. DNS may have flaws, but the poor choice of names is not one of them, any more than poor-quality TV programming reflects a problem with the chosen transmission system or the TV sets.

Re:Public DNS is corrupt, but Private DNS is subli (3, Funny)

mcrbids (148650) | more than 7 years ago | (#19322757)


Internally, I use DNS and I would never replace it. Just secure it. All my Internal Updates for my home DNS System work like this. Using the LDAPDNS system, my reverse lookup zones become distinguished containers, like

relativeDomainName=1+zoneName=0.168.192.in-addr.ar pa,dc=0,dc=168,dc=192,dc=in-addr,dc=arpa


You set this up for your freakin' home network!?!?!? Brother, there's this wild and wonderful thing out there called the world and you really, REALLY need to get a taste of it!

Some of the highlights that you'd do well to consider:

First, there's the Woman [google.com] . Life with a good woman is a life with greater extremes. Good moments are way better, bad moments are way worse.

Another good thing to try while roaming the wild, real world: Beer! This can be a good way to land a woman, if only for a night. [google.com]

Put the two together under the right circumstances, and you just might be able to experience perhaps the greatest pleasure of them all: SEX! Many would argue that this is the point of having a woman. [google.com] I'd argue instead that basic physiology has the point belonging to the man, but I digress...

Seriously, implementing an LDAP backend to DNS for a home network is about like using a jet engine for a ceiling fan. I'd love to know all the details of your implementation, since it would likely make a good candidate for submission to another good website. [thedailywtf.com]

Lastly, to do "secure" DNS updates is pretty simple. I keep the DNS zone files on my laptop. All my DNS nameservers are configured identically, as master servers. I use a script to SCP the files to the nameservers when I do a DNS update. Stupid simple, excellent security a la SSH.
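
A minimal sketch of that push-to-identical-masters setup; the hostnames, paths, and the use of rndc reload are illustrative assumptions rather than details from the comment.

    import subprocess

    NAMESERVERS = ["ns1.example.net", "ns2.example.net"]  # placeholder hosts
    ZONE_FILES = ["db.example.com", "db.example.org"]     # local copies on the laptop

    for ns in NAMESERVERS:
        # Copy the zone files over SSH, then ask the remote BIND to pick up the changes.
        subprocess.run(["scp", *ZONE_FILES, f"{ns}:/etc/bind/zones/"], check=True)
        subprocess.run(["ssh", ns, "rndc", "reload"], check=True)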

Re:Public DNS is corrupt, but Private DNS is subli (0)

Anonymous Coward | more than 7 years ago | (#19366143)

> Life with a good woman is a life with greater extremes. Good moments are way better, bad moments are way worse.
Yeah, the crying fits when she treats me like crap are way worse than my other bad moments, but masturbating is still the same :(

> Another good thing to try [...]: Beer!
The women I'm around when I'm drinking beer typically just point at me and laugh >:(

> And you just might be able to experience perhaps the greatest pleasure of them all: SEX!
If by whiskey^Wsex, you mean two of the first three images, then I'm not that impressed. I've watched elephants in the zoo, and the woman I alluded to previously has parents who have several cats living in their house. Some of them fornicated when I was sitting next to them. It's not what it's made out to be >>:(

And while I'm posting anonymously, I might as well go whole hog: I guess I should get some relationship advice from the slashcrowd! >>>:(

Re:Public DNS is corrupt, but Private DNS is subli (0)

Anonymous Coward | more than 7 years ago | (#19334319)

I just read the post you linked to, and that sounds pretty much like home-network heaven. Do you have any good links as starting points for setting up DNS, and the authentication, the way you have it?

Dynamic DNS (3, Interesting)

iminplaya (723125) | more than 7 years ago | (#19318167)

If more ISPs provided this, would it make traffic unbearable? How many dynamic domain name servers could we tolerate? Could we finally make the registrar problem go away?

Re:Dynamic DNS (1, Informative)

Tony Hoyle (11698) | more than 7 years ago | (#19319189)

It quite possibly would - dynamic DNS domains have expiry times in the minute or even second range, making caching them impractical. A regular domain's records typically expire in 24-36 hours.

TBH I'd rather dynamic dns went away.. it's a hack from the dialup days when people got dynamic IP addresses. Everyone's 'always on' now so dynamic IP is pointless.

Re:Dynamic DNS (3, Informative)

NickFitz (5849) | more than 7 years ago | (#19319839)

Everyone's 'always on' now so dynamic IP is pointless.

I don't know who this "everyone" is of whom you speak, but I'm with one of the biggest ISPs in the UK and they use DHCP; I have no guarantee of what my IP address will be from one day to the next. I could probably pay extra for a static IP, but it's not worth the money.

If you mean "everybody leaves their home network running at all times so they never lose the IP address they got via DHCP when they first turned their cable modem on", then you're ignoring the effect of network outages, power failure, and the fact that if I'm going away I turn all electrical kit off, as I don't want an electrical fire destroying my home and endangering the lives and property of the other people who live in this building.

There aren't enough IPv4 addresses for every Internet user; I reckon that, in terms of individual users, it's a small minority worldwide who have a static IP address.

Re:Dynamic DNS (1)

bill_mcgonigle (4333) | more than 7 years ago | (#19325373)

If you mean "everybody leaves their home network running at all times so they never lose the IP address they got via DHCP when they first turned their cable modem on", then you're ignoring the effect of network outages, power failure, and the fact that if I'm going away I turn all electrical kit off

Or that in crowded Verizon territory you're going to get a 15 minute lease and your next IP won't be the same. Ugh, yes, I switched my folks to a cable modem because of it.

as I don't want an electrical fire destroying my home and endangering the lives and property of the other people who live in this building.

Oh, don't worry about that - your receptacles are rated for a number of insertions too - just as you unplug one of your items to make sure it doesn't catch fire is the time that the spring metal finally weakens enough so that once you're halfway to the airport it finally develops a stress fracture and shorts out the wiring.

Re:Dynamic DNS (1)

Phantom Gremlin (161961) | more than 7 years ago | (#19331705)

Oh, don't worry about that - your receptacles are rated for a number of insertions too - just as you unplug one of your items to make sure it doesn't catch fire is the time that the spring metal finally weakens enough so that once you're halfway to the airport it finally develops a stress fracture and shorts out the wiring.

About 25 years ago in the UK, it was common for wall outlets to have a switch built in to them. Don't know if they still do that.

Also, TV networks (like the BBC) would sign off (at perhaps 9:30 PM) with something like "please switch off your TV and, if at all possible, unplug it from the wall".

So the UK has a long history of encouraging that sort of behavior. Or should I say behaviour?

Re:Dynamic DNS (1)

@madeus (24818) | more than 7 years ago | (#19333043)

About 25 years ago in the UK, it was common for wall outlets to have a switch built in to them. Don't know if they still do that.
Yep, we still have that. Over-engineered monstrosities, solid 3 (rectangular, not round) pins and an on/off switch at every socket (but not typically on 4/8/12 bar extensions you buy at retail - some do, some don't) - with the exception of bathrooms which may only have the universal 12 volt 2 pin adapter in them.

Earth pins on all of them, except for certain devices (the exact criteria for which I'm not sure of; it typically applies to lightweight things like Christmas tree lights), which often just have a plastic pin where earth would be - AFAIK this is required to push back some gubbins inside the socket, without which you won't get any power, I gather (i.e. even if you are daft enough to stick a fork in a socket AND turn it on, it won't kill you, although I've never confirmed that).

They look like this [semanticweb.org] (that one is apparently Irish, same thing though). You can find them on trains too, switches and all (so you can plug your gadget stuff in while you travel).

It's all very safety conscious, and there is quite strict legislation governing the fitting of sockets and rings in a house, and power cabling and the selling of goods with plugs (i.e. if they must be molded plugs or not, if a device is required to be earthed, etc etc.).

Personally I love the plugs and sockets here. You always know when they are in, and they don't dick you around by falling out unexpectedly and you don't have to unplug something to turn it off. The downside of the design is they are more bulky, and that is a nuisance with portable devices (e.g. if you just want to take a small laptop or PDA wall socket recharger the damn plugs don't fold down so always stick out irritatingly if you have a slim line case/bag).

I am always worried when using US sockets: I've seen them spark, cables almost always wiggle, and I can't help but wonder how many electrical accidents there are as a result of what seems a flimsy design. I have less experience of European sockets, which seem similar but not quite as flimsy. I don't know if the quality of US sockets varies.

Plugs > DNS

Re:Dynamic DNS (1)

iminplaya (723125) | more than 7 years ago | (#19321811)

Everyone's 'always on' now so dynamic IP is pointless.

I've got a system that's always on, but the IP still changes every day; I don't know why, since it stays up. At home I shut down when I'm out of the house (I don't like leaving the door open), so there it always changes.

Re:Dynamic DNS (0)

Anonymous Coward | more than 7 years ago | (#19330137)

At home I shut down when out of the house(don't like leaving the door open), so there it always changes.

But according to you, humans have the right to free passage and property laws shouldn't exist... You have no right to withhold your internet access or house from me. I'm still waiting for your address, hypocrite.

Re:Dynamic DNS (1)

iminplaya (723125) | more than 7 years ago | (#19330543)

I'm still waiting for your address...

Maybe tomorrow...

Re:Dynamic DNS (0)

Anonymous Coward | more than 7 years ago | (#19335165)

I'm still waiting for your address... Maybe tomorrow...
Can I have your address now? You're the one who said that nobody has the right to restrict the freedom of movement of people.

Re:Dynamic DNS (1)

iminplaya (723125) | more than 7 years ago | (#19337997)

I'll think about it.

Re:Dynamic DNS (1)

Mattintosh (758112) | more than 7 years ago | (#19322103)

My DSL service has a dynamic IP. The lease length is about a week. Regardless of whether I leave it on or not, it gets a new IP at least once a week.

moving hosts blows (4, Interesting)

weighn (578357) | more than 7 years ago | (#19318177)

my website is in an internet backwater [wikipedia.org] and you wouldn't believe the crap we went through when our hosting provider changed the IP address of the server. We were given a week's notice of the new IP, and the knobs at ozemail or uunet or iinet or whatever the fsck they are called for the moment still had us hanging for TWO DAYS after the address was changed (and that wasn't due to DNS caching - that added another 24-48 hours, according to some lookups).

I eventually got onto their 'support' crew in Singapore, who assured me that their engineers were looking into it. I don't know how much looking you need to do to change a single entry in a DNS table from "nnn.nnn.nnn.42" to "nnn.nnn.nnn.38".

Oh and here's a single page [acmqueue.com] version of TFA.

Re:moving hosts blows (0)

Anonymous Coward | more than 7 years ago | (#19318707)

-1, Offtopic.

Re:moving hosts blows (5, Interesting)

totally bogus dude (1040246) | more than 7 years ago | (#19319181)

Not sure exactly what your rant was about, but it just sounds like you had crappy support from ISP staff. Not really news, that. There's nothing about the DNS down under that makes it inherently slow. We moved our site recently to a different IP (different ISP, in fact), but we host our own DNS so we had control of the process. I reduced the TTL on the record a few days beforehand, and then really reduced it shortly before we launched the new site, and voila -- the updated record was visible to everyone pretty much instantly. (Except for people who configure their DNS proxies to ignore/override TTL values, but that's their problem.)
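
A quick way to watch that TTL-lowering step from the outside (a sketch assuming the dnspython package; the name is a placeholder):

    import dns.resolver  # pip install dnspython

    answer = dns.resolver.resolve("www.example.com", "A")
    print("address:", answer[0].address)
    print("remaining TTL in this resolver's cache:", answer.rrset.ttl, "seconds")
    # Once every resolver you care about reports the new, low TTL, an IP change
    # will be visible within at most that many seconds.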

Obviously, relying on third parties to do the right thing by you is a crapshoot at the best of times. Not everyone has the luxury of hosting things themselves, though.

Re:moving hosts blows (1)

amorsen (7485) | more than 7 years ago | (#19319341)

(Except for people who configure their DNS proxies to ignore/override TTL values, but that's their problem.)

Their problem, and a problem for all their customers. Some of the largest ISPs do it.

Re:moving hosts blows (1)

weighn (578357) | more than 7 years ago | (#19319541)

Not sure exactly what your rant was about, but it just sounds like you had crappy support from ISP staff.
meh, half the time I don't even know - but I'd mod you informative if I could. The points you made re TTL will come in handy next time this comes up. Thanks!

Re:moving hosts blows (1)

MikeBabcock (65886) | more than 7 years ago | (#19321643)

tinydns [cr.yp.to] has a nice option to make one address expire at a specific time and another take its place (or just start serving a record at a specific time, or have a record stop working at a specific time). When it gets close to the expiry time, the existing record's TTL is reduced accordingly to prevent problems as well -- it's quite a nice feature for IP changes, really.
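
The idea itself, independent of tinydns (a conceptual sketch, not tinydns's actual interface): never hand out a TTL that would outlive a scheduled cutover, so no cache keeps the old address past the switch.

    import time

    CUTOVER = time.time() + 3600   # placeholder: switch the address in one hour
    NORMAL_TTL = 86400             # one day

    def effective_ttl(now=None):
        """Clamp the advertised TTL to the seconds remaining until the cutover."""
        now = time.time() if now is None else now
        return min(NORMAL_TTL, max(0, int(CUTOVER - now)))

    print(effective_ttl())  # already clamped to <= 3600 here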

Article wrong about Unicode? (2, Insightful)

amorsen (7485) | more than 7 years ago | (#19319241)

From the article: "To express multilingual symbol sets usually means Unicode, whose binary representation is not directly compatible with the upper/lowercase "folding" required for DNS labels."

UTF-8 should be perfectly compatible with that case folding. The characters which get folded are in the US-ASCII subset of UTF-8 and therefore have their high bit unset. All multibyte characters in UTF-8 have the high bit set in each byte, so they aren't subject to that case folding. The DNS standard is, as far as I know, completely UTF-8-compatible except in the places where it explicitly says "only these particular characters are allowed here".
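
A quick demonstration of that argument: ASCII-only case folding applied byte by byte leaves UTF-8 multibyte sequences untouched, because every byte of a multibyte character has its high bit set.

    def ascii_fold(data):
        """Fold A-Z to a-z byte by byte, the way DNS label comparison does."""
        return bytes(b + 32 if 0x41 <= b <= 0x5A else b for b in data)

    label = "Bücher".encode("utf-8")                   # b'B\xc3\xbccher'
    print(ascii_fold(label))                           # b'b\xc3\xbccher' -- only the ASCII 'B' changed
    print(all(b & 0x80 for b in "ü".encode("utf-8")))  # True: both bytes of 'ü' are >= 0x80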

Optimising DNS lookup time (1)

totally bogus dude (1040246) | more than 7 years ago | (#19319249)

I've wondered about this for a while, but like any good slashdotter I haven't actually researched it myself, but figured I'd just post it here and see what sort of replies I get.

Where I work, we host a small number of websites, each usually with two or three domain names (one primary name, with the others redirected to it). These are in a variety of domains: the usual TLDs, a few country codes, and some special domains.

I've set them up so that each major domain has its own set of name servers; i.e. we have host servers defined under a .com name, which all of our .com domains use. .net has its own set, .com.au has its own, and so forth. These all point to the same IP addresses, they're just defined in different domains (i.e. a.ns.ourdomain.com is the same IP as a.ns.ourdomain.com.au).

My reason for doing this is to try to minimize the number of lookups needed. A lookup for "www.example.com" gets a reply saying "the nameservers are a.ns.ourdomain.com and b.ns.ourdomain.com, and their IP addresses are w.x.y.z and a.b.c.d". The resolver can then go straight to our name servers, rather than doing an extra one for ourdomain.com.

This is assuming the resolver is smart enough to realise it can trust the additional records with the IP addresses of [ab].ns.ourdomain.com, since they're coming from a server which is authoritative for .com, anyway.

While this is fine in theory, I don't know (and haven't tested) whether popular DNS servers actually do manage to make quicker lookups using this strategy. If not, then it's rather pointless -- there's a little bit more administrative overhead involved in maintaining separate NS host records.

Any DNS gurus out there have an answer to this? Anyone care to speculate wildly?
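
One way to answer this empirically is to query a gTLD server directly, without recursion, and look at whether the referral's additional section already carries the nameservers' addresses (a sketch assuming the dnspython package; the queried name and server address are placeholders):

    import dns.message
    import dns.query
    import dns.rdatatype  # pip install dnspython

    GTLD_SERVER = "192.5.6.30"  # believed to be a.gtld-servers.net; treat as an assumption

    query = dns.message.make_query("www.example.com.", dns.rdatatype.A)
    response = dns.query.udp(query, GTLD_SERVER, timeout=5)

    print("AUTHORITY (the NS referral):")
    for rrset in response.authority:
        print(" ", rrset)
    print("ADDITIONAL (glue a resolver may reuse):")
    for rrset in response.additional:
        print(" ", rrset)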

Re:Optimising DNS lookup time (1)

speculatrix (678524) | more than 7 years ago | (#19319351)

use the include directive?

Re:Optimising DNS lookup time (1)

benji fr (632243) | more than 7 years ago | (#19319709)

You CAN use glue records; I mean, just put your www.example.com A record into the .com root servers and they will answer pretty fast. But you SHOULDN'T do that, of course, and nobody at big companies or big ISPs does this, right? -...-

Domain Names sdrawkcaB? (3, Interesting)

mutube (981006) | more than 7 years ago | (#19319685)

When written in a left-to-right language, most hierarchies follow that direction. Numbers have the most significant digit(s) at the left, taxonomies are written species:subspecies:variety, and pages are identified as home > category > page.

Domain names are the exception, with the "top level" domain on the right, while the left (the most significant position) can be stuffed with random chaff (a.k.a. subdomains).

I can't help but imagine that this has some impact on how easily people fall for spoofed websites (yourbank.somesite.com vs. com.somesite.yourbank). Being naturally lazy, we only read as far down a list as needed to confirm we have what we're looking for.

Does anyone know of a historical basis for this decision, and do you think it makes any difference?

Re:Domain Names sdrawkcaB? (2, Informative)

NickFitz (5849) | more than 7 years ago | (#19319897)

Tim Berners-Lee now thinks he got it wrong [bcs.org]; he now believes that URIs should have had the form http:com/example/blah/, rather than http://blah.example.com/.

Bad idea (1)

TheLink (130905) | more than 7 years ago | (#19325239)

That's a bad idea.

With that, you can't tell where the host name ends and where the path begins.

And he even thinks that is GOOD:
quote: "This would mean the BCS could have one server for the whole site or have one specific to members and the URL wouldn't have to be different."

Doh.

Say you have a conventional URL of http://blah.example.com/sub/foo.
If we do things the way he proposed on that page, how does he expect the browser to find the IP address of the server to go to?

With his suggestion the url will look like:

http:com/example/blah/sub/foo

Now that's very nice in "dreamland" where the speed of light is infinite and everything is perfect.

But in the real world, what domain name should the browser try in order to get the IP address to connect to?

Should the browser try to connect to "com" and fetch /example/blah/sub/foo
Then if that fails connect to:
example.com and try to fetch /blah/sub/foo
Then if that fails connect to:
blah.example.com and try to fetch /sub/foo

AND WORSE, even if that's the correct URL, say the server was temporarily broken/misconfigured - is the browser now supposed to keep going?
e.g.connect to:
sub.blah.example.com and fetch /foo
then try
foo.sub.blah.example.com

The browser has to wait for the necessary failure timeouts on each try. Don't forget, the URL I used as an example isn't even a very long one. Imagine one with a greater "directory depth".

Makes you wonder if he "stumbled" on his _original_ scheme by sheer luck, or whether he actually thought long and hard about it and has now unfortunately forgotten the original reasons why things were done that way.

Nowadays I find there are not very many people who understand how lots of different things work, the various limitations, and how certain choices/changes affect things. There's often so much you need to know AND keep in mind.
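
A small sketch of the ambiguity being argued here: for a left-to-right name such as com/example/blah/sub/foo there is no marker saying where the host part stops, so a client would have to consider every possible split.

    def candidate_splits(path):
        """Yield every (host, path) split of a left-to-right name."""
        labels = path.split("/")
        for i in range(1, len(labels) + 1):
            # First i labels reversed back into today's right-to-left host form;
            # the remainder is treated as the path on that host.
            host = ".".join(reversed(labels[:i]))
            rest = "/" + "/".join(labels[i:])
            yield host, rest

    for host, rest in candidate_splits("com/example/blah/sub/foo"):
        print(f"try http://{host}{rest}")
    # try http://com/example/blah/sub/foo
    # try http://example.com/blah/sub/foo
    # try http://blah.example.com/sub/foo
    # try http://sub.blah.example.com/foo
    # try http://foo.sub.blah.example.com/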

Re:Bad idea (1)

turbidostato (878842) | more than 7 years ago | (#19330065)

"http:com/example/blah/sub/foo
Now that's very nice in "dreamland" where the speed of light is infinite and everything is perfect.
But in the real world, what domain name should the browser try in order to get the IP address to connect to?"

Do you know a single word about DNS? I don't think so.

First: we are talking about names, not service resources, so the basic example is looking for com.example.blah.sub.foo, which is exactly the same as foo.sub.blah.example.com as far as its recursive search path goes: you either need to recurse along the whole path or you will receive an answer in the middle, via a high-level authoritative server or by caching.

Second: regarding services, maybe SRV-like records would have seen the light of day instead of being the more-or-less DNS curiosity they are today.

So the *real* example:
1) http://www.example.com/some/path [example.com]: your local resolver looks for the com authoritatives; they either know the answer or point you to the example.com authoritatives, which, in turn, will tell you who www.example.com is, and then it, and only it, will serve you the /some/path HTTP resource.
2) http://com/example/www/some/path [com]: your resolver will ask the authoritatives for com, which will either know the right answer or point you to http://com/example [com], which in turn will know the answer or point you to the http://com/example/www [com] authoritatives, which in turn will tell you the answer or point you to the http://com/example/www/some [com] authoritatives. (If, for instance, all http://com/example/* [com] or even all http://com/* [com] pages "live" within a single server, that's what the service will tell you; no need to recurse more deeply. If all but http://com/someespecificresource [com] do, well, I think you can imagine what will happen: a question for http://com/someespecificresource/somethingelse [com] will receive a "keep trying" answer instead of a "you win the prize" one.)

I think even you will see that's exactly what DNS currently does; no change here. But now you can do some nice tricks, like having SRV-like records return, at any time, either the authoritatives for the next hierarchy level *or* the IP address for the resource *or* even the searched-for contents directly (in this case the expected HTML page, or an open connection to the SMTP server, or whatever).

In no case are there more latencies than there are currently, and it certainly would make more sense and would potentially open the door to some very interesting things (that they are interesting is proved by the fact that they are already done, dirtily: like having an Apache in reverse-proxy mode serve a group of pages that in reality "live" on a different server - things like this would grow naturally out of a completely left-to-right hierarchy with some affordable changes to the protocol).

Re:Bad idea (1)

TheLink (130905) | more than 7 years ago | (#19333345)

I may be ignorant, but I'm not that ignorant. I know DNS and how it works. And I also know how lots of other stuff works too.

You're wrong because it's not just about DNS. It's about how things work in real life and equally importantly it's about how things _FAIL_ in real life.

TCP connections still take time to make, and there is a significant timeout if the other end is firewalled (ignores TCP/80).

Try this on a linux/*BSD box: time wget http://www.microsoft.com:82/foo/bar/com/baz

See how long that takes to time out. Sure, interactive browsers might time out faster, but unlike DNS, they won't time out after just a few seconds.

And remember after that times out, with the "New Berners" approach you will have to try to fetch:
http://foo.www.microsoft.com:82/bar/com/baz
http://bar.foo.www.microsoft.com:82/com/baz
http://com.bar.foo.www.microsoft.com:82/baz
http://baz.com.bar.foo.www.microsoft.com:82/
And only after all that should the browser give up.

There's also the scenario of trying to access a site that hosts lots of different people's stuff and uses a wildcarded DNS - say the DNS works but the site is down - how long do you wait? All of the possible domains will resolve (you expect the admin to set up a system to put all the valid names of all the sites in the DNS? Especially when previously the customers got to have their own arbitrary subdomain names without any change to the configs).

OK, let's say you try to do stuff in parallel and display the first document that is successfully fetched. But what happens then if you get multiple documents? If a server higher up the hierarchy (thus more heavily loaded and more likely to be "slow") finally responds while the browser is _halfway_ through displaying a different file, should you suddenly tell the user "Oops, pretend you didn't see that, here's what you should be seeing"? What if you get tons of HTTP 404s? Which 404 should you show? There are pretty fancy 404s nowadays. You think that's bad? What if you get multiple HTTP 302s! Which 302 should you follow? All of them? And risk the problem getting even bigger?

And should the browser do negative caching for all failures? How long?

Sure you can put "don't recurse" stuff on the DNS servers, but in real life, the people who run the webservers often have little authority and control over the DNS servers.

Run the DNS server on the webserver? Despite what some people may like, not every web server will be allowed to run an authoritative DNS server on it AND get the firewall administrators to pass DNS traffic to their pet server, nor is it likely that the DNS delegation will be done correctly in enough cases for people to say "this system is viable".

Lastly, the "New Berners" approach is trying to merge file namespaces with host namespaces, sure that could work fine in scenarios where one entity controls everything. BUT with this approach you will no longer have an "every host is a peer" situation, it will be a hierachy of hosts. Some hosts WILL override other hosts so you can no longer put stuff in certain namespaces on those hosts. You will start to need cooperation amongst previously independent hosts/peers to avoid undesirable namespace clashes at what was previously a _file_ naming level.

"No more latencies than currently"

You still sure about that?

Re:Bad idea (1)

turbidostato (878842) | more than 7 years ago | (#19343351)

"And remember after that times out, with the "New Berners" approach you will have to try to fetch:
http://foo.www.microsoft.com:82/bar/com/baz [microsoft.com]
http://bar.foo.www.microsoft.com:82/com/baz [microsoft.com]
http://com.bar.foo.www.microsoft.com:82/baz [microsoft.com]
http://baz.com.bar.foo.www.microsoft.com:82/ [microsoft.com]
And only after all that should the browser give up."

Why on earth? For one, it wouldn't be www.microsoft.com but com/microsoft/www. For another, assuming something equivalent to current expansions, it would be "www" that expands to microsoft/www or com/microsoft/www, exactly like now. I really don't see where the "prefixes for alternate searches" in your example come from.

"There's also the scenario of trying to access a site that hosts lots of different people's stuff that uses a wildcarded DNS- say the dns works but the site is down"

You are not going to check all those users' stuff at once, are you? I don't see how a browser can spend more time waiting on http://www.example.com/~givenuser [example.com] than on http://com/example/www/~givenuser [com]. Again, the only doubt is knowing which is the "real server" that holds the content since, with the same semantics for domains and resources, a priori the real host could be com, com/example, com/example/www or even com/example/www/~givenuser. Of course, you would sort that out by having standard answers for "keep trying further down" and "here it is". You only have latency problems *in the resolving process* when you can't reach the nameservers, quite exactly like now.

"OK lets say you try to do stuff in parallel, and display the first document that is successfully fetched"

You assume that the "document" can be on various different sites (even overlapping) but that DNS won't help you tell exactly where the resource is. You either intermingle protocol and resolution (then the answer comes when it comes and you are doomed to time out once per tried nameserver *on a single leaf level*, just like now) or you keep each on its own side, exactly like now, and then it will work; well, just like now, maybe with the proper addition of some SRV glue. http://com/example/www/~someguy [com], you said? "My cache says HTTP server(s) for that name can be found at this IP (and this one and this one)", or "I don't know, but you can ask those guys down the lane".

"Sure you can put "don't recurse" stuff on the DNS servers, but in real life, the people who run the webservers often have little authority and control over the DNS servers."

So usually, when the manager of the website at www.example.com asks example.com's hostmaster to add a record for www, the hostmaster just picks an IP address out of his hat, and to hell with the PHB if it ends up at www.playboy.com instead, is that it?

"Run the DNS server on the webserver? Despite what some people may like, not every web server be allowed to run an authoritative DNS server on it"

The *current* implementation doesn't need to do so. A future implementation *might* integrate the DNS and data servers in a way that makes it the easier way to go, just as you can *usually* find an IMAP server right alongside a POP server, simply because it's so easy, if nothing else.

"nor is it likely that the DNS delegation be correctly done in enough cases for people to say "this system is viable"."

There are always control freaks who will say so, of course. But how many DNS servers are there *already*? I bet you'll find they number in the millions. Whatever nightmare might happen with delegations in the future should have happened *already*. And remember: top-down or bottom-up, there will always be someone above your head able to cut the flow if you don't behave. We have spam because SMTP works in a meshed peer-to-peer way; DNS doesn't have that problem because it's hierarchical, and one way or the other it will remain hierarchical.

"it will be a hierachy of hosts"

What do you think the current DNS server topology is? It is already a hierarchy of hosts, so no news here either.

"Some hosts WILL override other hosts so you can no longer put stuff in certain namespaces on those hosts."

What do you think happens *now* if the authority for example.com decides the subdomain.example.com subdomain is no longer desirable? Exactly: you lose reachability within about 72 hours.

"You will start to need cooperation amongst previously independent hosts/peers to avoid undesirable namespace clashes at what was previously a _file_ naming level."

On the one hand, you *already* need both *direct* cooperation from upper domain managers (you need active cooperation from example.com's hostmaster to publish your pretty subdomain.example.com subdomain) and *implicit* cooperation from your peers (since there can only be one subdomain.example.com, either yours or someone else's). There can't be namespace clashes as long as there's only one authority for any given leaf of the hierarchical tree (or do you think you can currently have two different resources named "http://www.example.com/~someuser/data.html" within the same namespace?).

"No more latencies than currently. You still sure about that?"

Yes. At least you have not offered me a single reason to think otherwise.

Re:Bad idea (1)

TheLink (130905) | more than 7 years ago | (#19379143)

You said: "Why on hell? For one, it wouldn't be www.microsoft.com but com/microsoft/www. For second, assuming equivalent to current expansions, it would be "www" the one to expand to microsoft/www or com/microsoft/www, exactly like now. I really don't see where the "prefixes for alternate searchs" in your example come from."

Are you for real? I gave the example that way because all the following URLs are expressed in the same way in the "New Berners" approach.

All these "conventional URLs":
http://com:82/microsoft/www/foo/bar/com/baz/
http://microsoft.com:82/www/foo/bar/com/baz/
http://www.microsoft.com:82/foo/bar/com/baz/
http://foo.www.microsoft.com:82/bar/com/baz/
http://bar.foo.www.microsoft.com:82/com/baz/
http://com.bar.foo.www.microsoft.com:82/baz/
http://baz.com.bar.foo.www.microsoft.com:82/

are all represented in the "New Berners" approach by a single URL:
http/82:com/microsoft/www/foo/bar/com/baz

Pasting 7 identical lines (a la the New Berners URL) to try to illustrate 7 possible different browser "attempts" (whether in parallel or not) would be silly. So that's why I stuck to the conventional form (in fact I skipped some details and assumed basic knowledge of HTTP and TCP, as well as DNS, which may be the problem).

So even if it's just a single "New Berners" URL, the browser still _potentially_ has to try ALL the possible _effective_ combinations in the event of failures to fetch the "effective URLs" - whether due to DNS failure or failure to reach/connect to the webserver. Sure, you might be able to avoid or work around some of that by doing various things. But why do all that? What would people gain?
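To put a number on those fallback attempts, here is a minimal sketch (Python; the labels and port come from the example above, everything else is illustrative) that enumerates the effective host/path splits hiding behind that one "New Berners" URL:

# Enumerate the effective conventional URLs behind one "New Berners" style URL.
# The labels and port are taken from the example above; the code is illustrative only.
def effective_urls(new_style, port=82):
    """Yield every conventional URL the browser might have to fall back to,
    from the shallowest host ('com') to the deepest
    ('baz.com.bar.foo.www.microsoft.com')."""
    labels = new_style.strip("/").split("/")
    for split in range(1, len(labels) + 1):
        host = ".".join(reversed(labels[:split]))  # DNS order: deepest label first
        path = "/".join(labels[split:])
        yield "http://%s:%d/%s" % (host, port, path + "/" if path else "")

if __name__ == "__main__":
    for url in effective_urls("com/microsoft/www/foo/bar/com/baz"):
        print(url)                                 # prints the seven URLs listed above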

"What do you think is current DNS server topology? It already is a hierarchy of hosts, so no news here too"

While the DNS topology is a hierarchy of hosts, the current WWW is NOT - it is a _Web_ of hosts. Right now, once webmasters get their domain name and IP, they have a lot of freedom in the pathnames and filenames they can use. They would lose that freedom with the "new approach". Instead of just making a directory and putting a file somewhere, they'd have to do a few more dependency checks - e.g. does the new directory or file clash with someone else's subdomain (past/present/planned)? More work and cooperation required.

You're in a way proposing breaking the Web - forcing it into a hierarchy (in fact tying it tightly to the DNS hierarchy) and then using "SRV-like" stuff to glue the broken bits together. I repeat: what would people gain from that? Is it worth it?

Sorry, I'm giving up; it's too much work explaining the simple stuff (I'm a lazy person). If it makes you happy, go ahead and assume I'm wrong by default.

Pike's "The Hideous Name" paper from Plan 9 (2, Insightful)

billstewart (78916) | more than 7 years ago | (#19320989)

Rob Pike and Peter Weinberger wrote a paper in 1985 called "The Hideous Name", arguing against DNS's naming order in favor of Plan 9's Unix-like order. Plan 9 very aggressively uses the file system naming structure for everything, and they argue that consistent naming systems are much better than the alternatives, including the relatively new Arpanet naming system that some people were starting to use for email. I haven't read it in a decade or more, but one issue besides the one you mention is that high-level-first names give you a lot more flexibility for localized namespace management and get around some of the semantic and political issues with rootedness.

The original paper is available in Postscript at bell-labs.com [bell-labs.com] or Google has an HTML translation.

Evolution (2, Funny)

RancidMilk (872628) | more than 7 years ago | (#19319715)

I hear that the root DNS servers are monkeys. After all, at the root of all tree based architectures is monkeys. (I also hear that if you go to the edge of the internet, you'll fall off the edge of it!)

Re:Evolution (2, Funny)

bromoseltzer (23292) | more than 7 years ago | (#19320587)

Monkeys are the root of all evals?

DNS != BIND (2, Informative)

RedHat Rocky (94208) | more than 7 years ago | (#19320817)

*sigh*

Once again, BIND is associated with DNS and I'm not even past the third paragraph.

Zone transfers are not DNS-related, they are BIND-related! For that matter, the term ZONE is mainly a BIND thing!

Gah!

Re:DNS != BIND (0)

Anonymous Coward | more than 7 years ago | (#19328519)

RFC 1995 and RFC 1996 might seem to indicate otherwise. Zone transfers are an inherent tool in DNS. AXFR and IXFR are defined in the DNS RFCs. Perhaps you should go crawl back under the rock you came from until you can properly discuss the intricacies of DNS with the adults. ...stupid AD admins...

from a satisfied LDAPDNS user
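
For what it's worth, a zone transfer is just another DNS query type, so any client can ask for one. A minimal sketch, assuming the dnspython library is installed and that the server at the placeholder address 192.0.2.53 actually permits AXFR of example.com (most production servers do not):

# AXFR requested from a generic DNS client library -- no BIND involved.
# 192.0.2.53 stands in for the zone's primary nameserver; example.com is a placeholder.
import dns.query
import dns.zone

zone = dns.zone.from_xfr(dns.query.xfr("192.0.2.53", "example.com"))
for name in zone.nodes.keys():
    print(zone[name].to_text(name))   # dump every record set in the transferred zone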

DNS is Fractal (0)

Anonymous Coward | more than 7 years ago | (#19321841)

Take a simple system and iterate it many times to produce a complex result. Sounds fractal to me.

Why is DNS so complicated anyway ? (2, Interesting)

billcopc (196330) | more than 7 years ago | (#19322077)

I often find myself wondering why most internet standards are so complex in the first place. Let's face it: DNS looks up a name in a database and spits out a number. It's like a phone book for the internet (white pages, that is). So then, why the hell is it such a pain to configure, with its weird-ass zone files that half the world seems to struggle with and obscure vulnerabilities like cache poisoning? Why can't it be as simple as "domain = IP" or "I don't know, but server X might"? Because that's basically what's going on, only it's buried under a pile of nerd filth that nobody but its originators truly groks.

Here's one big pain in the butt: listing name servers for a domain. Why the hell don't we use IP addresses for those? Instead you have a chicken-and-egg situation where you would need to contact ns1.something.tld to ask about its own address, so instead we cheat with "hints" (glue records) in the parent server's zone and end up listing the IP anyway, making the nameserver's name redundant. Things like that make me wonder what the designers were smoking that day. In the end, it's all just a big relational database, only the tables are each stored on different hosts while the links work the same way, so why the big headache?
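
For the record, those "hints" exist precisely to break that loop. A toy sketch (Python, with made-up names and addresses and a dict standing in for the parent zone) of why the parent has to hand out the child nameserver's address:

# Toy illustration of glue records. All names and addresses are made up.
# The parent (.tld) delegates something.tld to ns1.something.tld -- a name
# *inside* the zone being delegated, hence the chicken-and-egg problem.
PARENT_ZONE = {
    ("something.tld", "NS"):    ["ns1.something.tld"],  # delegation
    ("ns1.something.tld", "A"): ["192.0.2.53"],         # glue: parent hands out the address too
}

def find_child_server(domain):
    """Return a nameserver address we can actually contact for the child zone."""
    for ns in PARENT_ZONE[(domain, "NS")]:
        if ns == domain or ns.endswith("." + domain):
            # In-zone nameserver: without glue we would have to ask the very
            # server whose address we are trying to learn.
            glue = PARENT_ZONE.get((ns, "A"))
            if glue:
                return ns, glue[0]
            raise LookupError("delegation for %s is unreachable without glue" % domain)
        # An out-of-zone nameserver could be resolved independently (not shown).
    raise LookupError("no usable nameserver for " + domain)

if __name__ == "__main__":
    print(find_child_server("something.tld"))  # ('ns1.something.tld', '192.0.2.53')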

Re:Why is DNS so complicated anyway ? (0)

Anonymous Coward | more than 7 years ago | (#19330147)

DNS isn't complex. BIND is. Most other DNS servers (MS DNS, djbdns, dozens more) act like you'd expect them to.

Re:Why is DNS so complicated anyway ? (1)

NateTech (50881) | more than 7 years ago | (#19361743)

You're mixing up DNS with the implementation of DNS in software, specifically BIND.

There *are* DNS systems that use relational databases which you can query and/or update with standard SQL statements.

BIND and its syntax (which I'm fluent in and very good at, so I don't bother with the above-mentioned RDBMS-based systems) are NOT DNS.
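
For example, here is a bare-bones sketch of the SQL-backed idea (Python plus sqlite3; the table layout is generic and invented, not any particular product's schema):

# A toy SQL-backed store for DNS records -- generic, invented schema.
# Real SQL-backed servers work on the same principle: records are ordinary rows.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE records (
                  name    TEXT,     -- owner name, e.g. 'www.example.com'
                  type    TEXT,     -- 'A', 'MX', 'NS', ...
                  content TEXT,     -- the record data as text
                  ttl     INTEGER)""")
db.executemany("INSERT INTO records VALUES (?, ?, ?, ?)", [
    ("www.example.com", "A",  "192.0.2.80",          3600),
    ("example.com",     "MX", "10 mail.example.com", 3600),
])

# Querying (or updating) the zone data is just standard SQL:
for content, ttl in db.execute(
        "SELECT content, ttl FROM records WHERE name=? AND type=?",
        ("www.example.com", "A")):
    print(content, ttl)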