
Faster Updates for DNS Root Servers Arrive

CowboyNeal posted about 10 years ago | from the quicker-name-picker-upper dept.

Announcements 150

Tee Emm writes "VeriSign's DNS Rapid Update notice period (as announced on the NANOG mailing list) expires today. Beginning September 9, 2004, the SOA records of the .com and .net zones will be updated every 5 minutes instead of twice a day. The format of the serial number is also changing from the current YYYYMMDDNN to a new one that depicts the UTC time." We first mentioned this back in July, but it's finally launching now.


150 comments


fp (-1, Offtopic)

Anonymous Coward | about 10 years ago | (#10199256)

?asdf sdf sdaf

dynamic dns (5, Interesting)

Anonymous Coward | about 10 years ago | (#10199261)

So when will they add support for dynamic IP addresses, a la dyndns etc.? That would be great.

Re:dynamic dns (5, Informative)

numbski (515011) | about 10 years ago | (#10199336)

It's already there [wieers.com] .

The catch, of course, is that you have to be running BIND locally to make it work, which is fine if you're a unix-head and know how to work dns, but for the average joe it's far from simple. I have a perl script that checks my Linksys firewall's IP every half hour, and if it's changed, updates the dns file, then runs nsupdate.
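
For illustration, a minimal shell sketch of the same idea (every hostname, the zone, and the key path are made up, and get_wan_ip is a hypothetical stand-in for however you read the router's external address):

#!/bin/sh
# Sketch only -- run from cron; needs nsupdate (ships with BIND) and a TSIG key the server trusts.
NAME=home.example.net
ZONE=example.net
SERVER=ns1.example.net
CUR=$(get_wan_ip)                          # hypothetical helper: scrape the router or ask an external service
LAST=$(cat /var/tmp/last-ip 2>/dev/null)
if [ -n "$CUR" ] && [ "$CUR" != "$LAST" ]; then
    nsupdate -k /etc/dyn.key <<EOF
server $SERVER
zone $ZONE
update delete $NAME A
update add $NAME 300 A $CUR
send
EOF
    echo "$CUR" > /var/tmp/last-ip
fi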

Re:dynamic dns (2, Interesting)

BenFranske (646563) | about 10 years ago | (#10200317)

That solution is not really as nice as DynDNS. I for one would really like to see a piece of OSS that lets you operate using the (documented) DynDNS protocol so that the standard update scripts widely available for that would work. Running a nameserver on a system that doesn't require one seems counterproductive. Plus, you could use existing software to keep Windows boxes up to date as well. The DynDNS update protocol is available here [dyndns.org].
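
For reference, the update itself is roughly a single authenticated HTTP request (check the protocol document linked above for the exact parameter list; the hostname and address here are placeholders):

curl -s -u username:password \
  "https://members.dyndns.org/nic/update?hostname=myhost.dyndns.org&myip=203.0.113.7"
# Typical replies: "good 203.0.113.7" on success, "nochg" if nothing changed, "badauth" on bad credentials.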

Re:dynamic dns (4, Funny)

robertjw (728654) | about 10 years ago | (#10200642)

Which is fine if you're a unix-head and know how to work dns

I don't think anyone actually knows how to work dns. It's one of those magic things that you hack for a couple hundred hours and it finally does what you want it to - like qmail.

Re:dynamic dns (1)

drinkypoo (153816) | about 10 years ago | (#10200711)

To be fair, you don't need to be running BIND locally. You can also be using Windows 2000 for a DHCP and DNS server and get local dynamic DNS updates. It helps to use Active Directory as well. While for most people this isn't going to end up being all that much easier than rolling it up with Linux, it IS easier, and it IS a possibility. Of course, paying for 2k Server is kind of a stumbling block for most people, even those who have a second machine upon which they could be running BIND. And of course, you can run BIND on a lot less machine.

Re:dynamic dns (0)

Anonymous Coward | about 10 years ago | (#10199404)

dyndns.org [dyndns.org] Not a job for the root servers...

Re:dynamic dns (5, Informative)

two-tail (803696) | about 10 years ago | (#10199410)

Services provided by the likes of DynDNS are not affected by this. The changes mentioned in this article affect top-level servers, which maintain lists of registered domains and their name servers. The actual IP address is provided at the next level down. For example, here is the complete path that you would go through to get an IP address for www.slashdot.org:

1: a.root-servers.net (refers request to tld2.ultradns.net)
2: tld2.ultradns.net (refers request to ns1.osdn.com)
3: ns1.osdn.com (returns 66.35.250.150)

Adding and deleting domains causes changes at #1 and #2. Changing the name servers assigned to a domain also happens at #1 and #2. Changes to an IP address (like the IP address for www.slashdot.org), which is what DynDNS and the like covers, would take place at #3.

One last note: If you have a domain already in place, and you want to change its nameservers over to DynDNS (possibly to take advantage of their dynamic update service), then #1 and #2 would get involved (since you're changing a nameserver). Under the system being phased out, that would have given you a day-long delay.
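
You can watch that same chain by hand with dig, if you're curious (the referrals show up in the AUTHORITY/ADDITIONAL sections rather than as answers):

dig +norecurse @a.root-servers.net www.slashdot.org A    # referral to the org. TLD servers
dig +norecurse @tld2.ultradns.net www.slashdot.org A     # referral to ns1.osdn.com
dig +norecurse @ns1.osdn.com www.slashdot.org A          # the actual A record
dig +trace www.slashdot.org                              # or let dig walk all three steps for you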

Re:dynamic dns (2, Insightful)

RollingThunder (88952) | about 10 years ago | (#10199976)

Not quite - this would theoretically allow you to now also host your DNS zone on a system with a dynamic IP, as you can now get a change to the root-level NS records in short order.

I sure wouldn't want to try that, though....

Re:dynamic dns (1)

Delphis (11548) | about 10 years ago | (#10200127)

Ah hell, I have a few spare domains that it might be fun to try it with.

The problem is the registrars though: are they going to accept/allow people to change their information that quickly? Also, there's the process of actually performing the change. Sounds like lots of crap with CURL to get that to work - and then they'll change their site and break your DNS :>

The update process for most registrars is log in, enter/click on domain, change options, save. Depending on THEIR processes on the registrar's side, it could be a while before they even send the update to the name servers, so it could still take an unfeasible amount of time to get your new IP usable on the internet again. Updating the root-level SOA records is the last piece of the puzzle.

Re:dynamic dns (2, Funny)

JJahn (657100) | about 10 years ago | (#10200329)

Might I recommend using IPCop on an old PC as a firewall/NAT device for your home network? It can automatically update your IP address with dyndns and several other dynamic DNS services. It's also a nice firewall product, which is free (as in beer and speech).

That's YOUR opinion! (-1, Offtopic)

Anonymous Coward | about 10 years ago | (#10199262)

I disagree. aoeuaoeu aoi aoeiuoioeioeuo ,.y oeu

For all registrars, or just some? (3, Interesting)

two-tail (803696) | about 10 years ago | (#10199268)

I remember hearing about this, but I don't remember exactly: Is this available to all registrars, or is there something that needed to be done on their end to get their updates in quickly?

Re:For all registrars, or just some? (2, Informative)

numbski (515011) | about 10 years ago | (#10199347)

Looks to me like it requires conforming to the new serial number spec (which, if I might say, BLOWS...I run an ISP and I appreciate being able to look at a DB file and know when I last changed it simply by looking at the serial...ugh), otherwise it will just sort of 'happen', so long as your dns server is authoritative for a domain and your root-hints file is correct.

Anyone have further input?

Re:For all registrars, or just some? (3, Interesting)

WhiteDeath (737946) | about 10 years ago | (#10199489)


AFAIK the serial number has only ever been in the format of YYYYMMDDNN as a recommendation. There is nothing in the spec preventing you from numbering versions from 1.

Changing to a UTC timestamp in seconds is no big issue, but for conformity, it's nice if everyone does the same thing, or at least knows what everyone else is doing, especially if you have some software trying to make sense of it all.

Re:For all registrars, or just some? (1)

Lizard_King (149713) | about 10 years ago | (#10199659)

$ perl -e 'print scalar localtime '

While it may suck b/c you might have to change some workflow stuff at your ISP, it shouldn't be too difficult to write a script that produces a readable log of DNS changes.

Re:For all registrars, or just some? (1)

iamcadaver (104579) | about 10 years ago | (#10200212)

perl -e 'print scalar localtime '
returns:
Thu Sep 9 10:19:44 2004
whereas
date +%s
returns:
1094739487
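
Putting the two together gives the "readable log" idea from above in a couple of lines (a sketch, assuming the post-changeover serial format):

serial=$(dig +short com. SOA | awk '{print $3}')    # third field of the SOA rdata is the serial
perl -e "print scalar localtime $serial"            # e.g. Thu Sep  9 10:19:44 2004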

Re:For all registrars, or just some? (1)

frozen_crow (71848) | about 10 years ago | (#10199624)

this is a change to the com and net nameservers. It has nothing to do with the domain name registration process, other than that such registrations (or changes to existing domains) will make it into the com and net nameservers faster. Assuming that your registrar doesn't dawdle, that is...

hmm, but is this really a good thing? (5, Insightful)

The Pi-Guy (529892) | about 10 years ago | (#10199271)

as I understand it, this would allow for propagation of new domains to be completed faster. this is *theoretically* a good thing, but it means that applications cannot cache DNS as effectively for nonexistent domains. this may end up causing a *lot* heavier load on the root DNS servers. much as we'd all love that functionality (who doesn't want to see their new domain a few minutes after they buy it?), there was a reason why they designed it the way they did.

Re:hmm, but is this really a good thing? (3, Insightful)

fingon (114710) | about 10 years ago | (#10199284)

It's not a very good thing. At least compliant DNS implementations will be doing 144x as much traffic with them as before (assuming infinite load; of course, in practice they will have a bit less load).

I don't see the point myself, domains are not supposed to change every minute anyway.

Re:hmm, but is this really a good thing? (5, Informative)

LiquidCoooled (634315) | about 10 years ago | (#10199365)

If I remember rightly, the new system does not change the TTL; it is still down to the domain administrator to pre-plan domain moves.

On the day before you move, your TTL can be dropped to this 5 minutes so your address can be changed with minimal disruption. After the move, once you're stable, your TTL can be increased once again, and network congestion is minimised.

Of course, I could be talking out of my arse, one of you lot will put me right if this is the case.
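
A quick way to sanity-check the TTL that caches are actually being handed, before and after you lower it (www.example.com standing in for your own record):

dig +noall +answer www.example.com A
# www.example.com.   300   IN   A   192.0.2.10     <- the second column is the TTL caches will honour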

Re:hmm, but is this really a good thing? (0)

Anonymous Coward | about 10 years ago | (#10199986)

You're quite correct.
Sadly most moronic system administrators & hosting companies forget this rather often, obviously resulting in excessive downtime.

Re:hmm, but is this really a good thing? (5, Informative)

Entrope (68843) | about 10 years ago | (#10199401)

Your claim of "144x as much traffic" exhibits an ignorance of how DNS caching works -- not that I should be surprised by the ignorance of anything I read on Slashdot. Specifically, caching is controllable independently of zone revision. It is easy to instruct clients to cache negative replies for a longer time than that revision of the zone is current. The only way to increase the frequency of lame requests is to reduce the TTL or SOA MINIMUM values.

On top of that, maximum-frequency error responses are only a problem when you have enough headstrong or automated users to see requests for the SAME misspelled domain name just past the SOA MINIMUM (or TTL, if appropriate) time. It is not a problem for valid name requests, since they have separate TTLs. While that frequency of lame requests is indeed a valid assumption with infinite load, in practice, only the largest ISPs will see anything that approximates that traffic.

Your comment that domains are not supposed to change every minute is correct for some domains; but the particular domains in question (TLDs) do change every minute as new domains are registered or expire. (Other things, like DHCP-driven dynamic DNS, can also legitimately cause frequent zone updates.)

Re:hmm, but is this really a good thing? (1)

BurritoWarrior (90481) | about 10 years ago | (#10199293)

Why can they not cache it same as always? You do a lookup on a domain at X, you can keep it cached for X + however long you wish.

Re:hmm, but is this really a good thing? (0)

Anonymous Coward | about 10 years ago | (#10199294)

And there is also a reason why THEY are changing it to the NEW way they are doing it.

Re:hmm, but is this really a good thing? (4, Insightful)

ewithrow (409712) | about 10 years ago | (#10199300)

DNS was designed in the late 70's, with RFC's appearing in the early 80's. The computational power today is vastly greater than what the routers of the 80's could contend with. I'm sure they would not implement this change if they had not thoroughly weighed the costs and benefits.

Oh wait, VeriSign? We're all doomed.

Re:hmm, but is this really a good thing? (0, Redundant)

leperkuhn (634833) | about 10 years ago | (#10199312)

yes, because 20 years ago computers were slow pieces of shit.

Re:hmm, but is this really a good thing? (3, Insightful)

LostCluster (625375) | about 10 years ago | (#10199321)

This will be a Good Thing(TM) if the DNS root servers can handle the load. Of course, if they can't it'll have to go in the Bad Idea(TM) file.

The key thing comes down to if we can trust VeriSign to be doing their homework correctly. VeriSign's a very funny company to think about because their entire product line is based on encryption and ID services that define VeriSign as a root of trust... if you don't trust VeriSign to be an honest actor, practically everything they do becomes worthless.

It's so hard to get trust-based systems to work these days...

Re:hmm, but is this really a good thing? (2, Insightful)

Gsus411 (544087) | about 10 years ago | (#10199720)

Geeze. Why is everyone talking about the "root servers?" This isn't . (root zone), this is com. and net.! The two are not the same thing!

Re:hmm, but is this really a good thing? (5, Informative)

Mordac the Preventer (36096) | about 10 years ago | (#10199333)

This is *theoretically* a good thing, but it means that applications cannot cache DNS as effectively for nonexistent domains. this may end up causing a *lot* heavier load on the root DNS servers.
No, it's the TTL that determines how long a record can be cached for. Updating the zone more frequently just means that the information will be available sooner. It will not increase the load on the root nameservers.

Re:hmm, but is this really a good thing? (0)

multipartmixed (163409) | about 10 years ago | (#10199342)

> > it means that applications cannot cache DNS as effectively for nonexistant domains

> No, it's the TTL that determines how long a record can be cached for.

Out of idle curiosity, in which zone file do you set the TTL for the non-existent domain you're about to search for?

Re:hmm, but is this really a good thing? (1)

cortana (588495) | about 10 years ago | (#10199645)

I think you set it in the SOA record of the parent domain.

target  ttl  IN  SOA  domain  "responsible person" (
        serial  refresh  retry  expire  "nxdomain cache time" )

Re:hmm, but is this really a good thing? (2, Informative)

sw155kn1f3 (600118) | about 10 years ago | (#10199678)

It's simple:

# dig a alksasdasdqweqwehqwe.com

com. 10793 IN SOA a.gtld-servers.net. nstld.verisign-grs.com.
1094735719 --- serial
1800 --- refresh
900 --- retry
604800 --- expiry
900 --- minimum, aka the "default" for this domain; it's the cache time for NXDOMAIN responses too

Re:hmm, but is this really a good thing? (1)

Coppit (2441) | about 10 years ago | (#10199381)

Nobody said the applications have to update every five minutes. They can still update infrequently, for the same quality of service (and cost) as before. Or am I missing something?

Re:hmm, but is this really a good thing? (1)

swordboy (472941) | about 10 years ago | (#10199400)

this may end up causing a *lot* heavier load on the root DNS servers.

Maybe the guys at bittorrent should start a rogue P2P DNS serving system. If it worked well enough, it would become a de facto standard.

Re:hmm, but is this really a good thing? (5, Informative)

SirCyn (694031) | about 10 years ago | (#10199704)

Let me clarify a few misconceptions.

1. The "minimum time" set to 15 minutes means the servers will not check for an update on a record until it is at least 15 minutes old.

2. The 5 minute transfers. This is how often the root servers check with each other. This has nothing to do with any other server. Not the registrars, not your ISP's DNS server; only the root servers.

3a. The serial change from yyyymmddnn to Unix epoch time makes perfect sense. And no, it does not suffer the 32-bit problem. Serial numbers can be much more than 32 bits. Heck, the yyyymmddnn takes 8 bits per character now, so 80 bits just for that. Dare I guess how far into the future an 80-bit Unix time would go (if it was stored that way)?

3b. If this serial change screws up your DNS cache server, simply flush the cache; problem solved. If you have some application (as suggested in the memo) that relies on the serial, you need to update your software, now.

4. Whoever suggested this as a backup plan for having only one server run your whole operation: You are dumb. Now go away or I shall taunt you a second time.

5. The TTL for a standard DNS entry is not going to change. So if your ISP's DNS server caches an entry it will (probably) keep it the same amount of time as it did before. (I say probably because most DNS servers can update records before their TTL expires.)

Would the people who do not know how DNS works please stop posting your misinformation and speculation. Thank you!
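
On point 3b, assuming a BIND 9 cache with rndc configured, the flush really is a one-liner:

rndc flush     # drops the whole cache; recent BIND 9 releases also offer "rndc flushname example.com" for a single name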

Re:hmm, but is this really a good thing? (5, Informative)

Kishar (83244) | about 10 years ago | (#10200178)

3a. The serial change from yyyymmddnn to Unix epoch time makes perfect sense. And no, it does not suffer the 32-bit problem. Serial numbers can be much more than 32 bits. Heck, the yyyymmddnn takes 8 bits per character now, so 80 bits just for that. Dare I guess how far into the future an 80-bit Unix time would go (if it was stored that way)?


You're correct on all counts except this one.

From RFC1035:

SERIAL The unsigned 32 bit version number of the original copy of the zone. Zone transfers preserve this value. This value wraps and should be compared using sequence space arithmetic.


The YYYYMMDDxx way can't be used past 2148, the UTC way can't be used past 2038. (neither way breaks it, because the serial number wraps to 0)

Great! (0, Funny)

Anonymous Coward | about 10 years ago | (#10199281)

The spammers will love it.

Fantastic. (4, Funny)

John_Allen_Mohammed (811050) | about 10 years ago | (#10199283)

This will probably help speed things up on the ogg-streams-over-dns p2p radio stations. Some complain that DNS wasn't designed for these purposes but generally, the same people complaining are the ones raising kids now, using viagra and getting ready to wear diapers again.

Technology adapts to changing circumstances and trends, old folks do not.

Why? (2, Insightful)

tuxter (809927) | about 10 years ago | (#10199287)

Is there any real need for this? Realistically it is going to have very little impact on the average user.

Re:Why? (3, Informative)

mr_z_beeblebrox (591077) | about 10 years ago | (#10199461)

Is there any real need for this? Realistically it is going to have very little impact on the average user.

This will affect DNS customers, not consumers. DNS is a purchased service (not a product). Businesses are its customers, users are its consumers. Verisign wants to make a positive impact on its customers to turn more revenue.

In other news (4, Funny)

Anonymous Coward | about 10 years ago | (#10199290)

Slashdot has announced they will begin posting stories every twenty seconds, instead of every hour.

Says CowBoy Neil, "Well, we figured at the increased rate, we could dupe stories at twice the usual rate. And also... uh... we could use my name in twice as many polls."

Reached for comment in his mother's basement, Commander Taco said only, "DNS, smenesh, I think we all want to see GNNA update their trolls!"

Re:In other news (1)

tuxter (809927) | about 10 years ago | (#10199363)

Bring on the next slow news day! Now we can ./ 300% more sites... w00t!

Root Servers... (5, Interesting)

jmcmunn (307798) | about 10 years ago | (#10199291)


So I don't exactly get it, but is this just the root servers that are going to be updating every five minutes? I read the links, but it still doesn't seem clear to me. I mean, if my registrar (or dns service or whatever) still only sends in their updates once every day, this won't really help me as much, right?

Of course, once they do send it in I will still get it updated an average of 6 hours faster I guess. Just curious, since the details were a little vague to us non-dns folks.

Re:Root Servers... (1)

Guitar Wizard (775433) | about 10 years ago | (#10199357)

"I mean, if my registrar (or dns service or whatever) still only send in their updates once every day, this won't really help me as much right?"

That's exactly my line of thought...even still though, this should (hopefully) allow for quicker DNS updates across the board. I know that when I first learned about DNS-related stuff I was hesitant to experiment with things because of how long it would take for the changes to propagate (it was usually 12+ hours before my changes completed throughout DNS parent servers and their children).

Re:Root Servers... (1)

frozen_crow (71848) | about 10 years ago | (#10199657)

you are correct. if your registrar only sends in changes once a day, then your changes won't make it into the dns very quickly. most registrars who operate in such a batch mode timed it so that they'd hit the update window, so you probably won't really see your changes any faster than they already are. This move may encourage registrars of all stripes to move to more of a dynamic model of updating, however.

Re:Root Servers... (1)

Guanix (16477) | about 10 years ago | (#10200019)

Yes, but most registrars update live.

Re:Root Servers... (5, Informative)

jabley (100482) | about 10 years ago | (#10200228)

This has nothing to do with the root servers [root-servers.org] . The slashdot article is inaccurate.

Verisign are publishing delegations in the DNS from their registry for the COM and NET domains much more frequently than they were before. The TTL on records in the COM and NET zones is not changed.

The affected nameservers are a.gtld-servers.net through m.gtld-servers.net. These are not root servers. They are authority servers for the COM and NET zones.

Verisign also runs two root servers (a.root-servers.net and j.root-servers.net). There has been no announced change in the way A and J are being run.
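
It's easy to confirm which servers are actually involved:

dig +short com. NS | sort     # lists a.gtld-servers.net. through m.gtld-servers.net.
dig +short . NS | sort        # the actual root servers, a.root-servers.net. through m.root-servers.net. -- a different set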

Speed up attacks? (3, Interesting)

two-tail (803696) | about 10 years ago | (#10199292)

Would this make it easier to slip false transfers through whatever nets may exist to catch them (as in this news byte [theregister.co.uk] )? I guess false transfers such as this would be noticed by the public at large sooner, so that's not too bad.

Re:Speed up attacks? (1)

LostCluster (625375) | about 10 years ago | (#10199335)

True, but I don't see how the DNS system's delay-created waiting period protected much from fraudulent transfers of domains. After all, you wouldn't know a false transfer took place until your DNS server got the bad news too...

Dupe (-1, Redundant)

Anonymous Coward | about 10 years ago | (#10199319)

It's still a Dupe, CowboyNeal.

increase of (mostly useless) traffic expected? (0, Interesting)

Anonymous Coward | about 10 years ago | (#10199338)

how about all those bazillion other nameservers that would re-ask for data every 5 minutes, as the dns records expire much more frequently now?

are verisign and the other dns root servers able to cope with the load, or the internet in general?

Re:increase of (mostly useless) traffic exptected? (3, Informative)

Tenareth (17013) | about 10 years ago | (#10199420)

Just because they are refreshing the roots every 5 minutes doesn't mean they dropped the TTL to 5 minutes. Since most DNS servers do not cache bad domains, this just means that new domains become available faster, and propagate within 10 minutes or so.

Emergency use (1, Insightful)

pubjames (468013) | about 10 years ago | (#10199350)


This is a great use for emergencies. You can have a backup web server configured identically to the main one. If the first web server goes down, just update the IP address in the domain record and you're back online in five minutes.

Good for those of us who host web sites for clients.

Re:Emergency use (2, Informative)

Anonymous Coward | about 10 years ago | (#10199379)

you can already do this, the root servers basically just know the address of a nameserver designated to a domain.

this just helps if you want to switch nameservers within 5 mins

on top of that if you have a standby box bring it online with the old ip

Re:Emergency use (4, Informative)

autocracy (192714) | about 10 years ago | (#10199385)

Wrong way about it. Your DNS records in the [.com .net .org .whatever] domain only point to your NS records. You should have multiple name servers up anyway (peering agreements for DNS are usually pretty easy to get). It is your A records that point to the web server, and the update for that takes place upon your own servers.

Re:Emergency use (1)

frozen_crow (71848) | about 10 years ago | (#10199703)

OBPedant: You're correct in saying that this is the wrong way to go about it, but incorrect in suggesting that the com/net nameservers only hand out NS records. If an NS record points to a name that is inside the zone you're looking up, the com/net nameserver *also* has to hand out a glue (A) record for that name. It generally only happens in the case of a misconfiguration, but people have in the past put web and mail server A records into the com/net zones. Such a record will trump whatever's in the authoritative nameserver's zone file.
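
If you want to see glue being handed out, ask a gtld server directly about a zone whose listed nameservers live inside that zone (example.com here is just a placeholder):

dig +norecurse @a.gtld-servers.net example.com NS
# If the zone's NS names are inside the zone itself, the referral's ADDITIONAL section carries their A records -- that's the glue.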

Re:Emergency use (1)

pubjames (468013) | about 10 years ago | (#10199386)


Not only that, but you can have them with completely different hosts, even in different countries.

I've seen big businesses who have lost their web sites for days because of the hurricane...

Re:Emergency use (2, Informative)

Eggplant62 (120514) | about 10 years ago | (#10199389)

I think you mean that this would be more handy for sites that lose a DNS server. Note that if the machine in an NS record for a domain goes dead, the domain can be left unresolvable until the root servers update. Now, with five-minute updates on the root servers, change the NS records and yer back up and running.

Happened to me with my vanity domain when afraid.org was cut off for about 8 hours due to abuse issues. His upstream provider cut him off due to spammers hosting DNS there and he had to take steps to get back online. Meanwhile, my domain was unresolvable. I ended up putting up secondaries to prevent this from happening again.

Re:Emergency use (5, Informative)

LostCluster (625375) | about 10 years ago | (#10199390)

What's the point in that?

The record in a DNS root server is never meant to identify your web server; it's meant to identify your primary and secondary DNS servers, and it's those servers that work for you (or at least the ISP you work with) to identify your web server.

So, if you want failover when your main web server goes down, you just need to update your local DNS record, not the one at the root servers. It's when your DNS servers explode that the five-minute updates would be helpful.

Re:Emergency use (1)

pubjames (468013) | about 10 years ago | (#10199402)

What's the point in that?

Yep, my bad. I hope someone with a clue mods me down again!

Re:Emergency use (4, Informative)

ostiguy (63618) | about 10 years ago | (#10199423)

This isn't that. You are talking about regular DNS A record changes on your dns server. You could have done what you sought a year ago, or 10. This is about what DNS servers are responsible for your domain, among other domain level changes (responsibility, etc) - if Chicago burns to the ground, Schlotsky's House of Bacon, having lost their headquarters with its server room, could then outsource its DNS, enter records, and make a root change to indicate that schlotskyshouseofbacon.com's dns servers have changed within 5 minutes (ideally).

ostiguy

Re:Emergency use (0)

Anonymous Coward | about 10 years ago | (#10199454)

Even if you did switch the record, most caching nameservers will probably screw you anyway. Probably half of the people who visit a site regularly will end up pointing at the wrong spot until the cache is updated. I've seen it take days for some caching nameservers to update themselves. I've seen one mail server that was still getting mail WEEKS after a DNS change that pointed mail to a new server.

Re:Emergency use (1)

gtoomey (528943) | about 10 years ago | (#10199675)

No, the way to do that is to have a DNS server with a small TTL (time to live) to switch IPs. Some cheap DNS services [dnsmadeeasy.com] allow you to set the TTL, or you can run your own.

Cool.... (4, Insightful)

Eggplant62 (120514) | about 10 years ago | (#10199364)

Now spammers can rotate through domains faster than ever before!!

This has no effect (4, Insightful)

warrax_666 (144623) | about 10 years ago | (#10199459)

on how many domains a spammer can register over time -- for much the same reason that you can still have huge bandwidth even if your latency is crap. It's just a question of reducing the initial delay from registration to activation.

Re:This has no effect (1)

Eggplant62 (120514) | about 10 years ago | (#10199886)

This has no effect on how many domains a spammer can register over time -- for much the same reason that you can still have huge bandwidth even if your latency is crap. It's just a question of reducing the initial delay from registration to activation.


No, but it certainly allows them to rotate nameservers for their domains quickly now. Imagine they've got a number of nameservers for their domains set up, and in order to make it more difficult to determine where the nameservers are hosted, they bounce them around every five minutes from one machine to the next, possibly rotating through as many as 600 different machines in a day!

Misuse? (1)

superhoe (736800) | about 10 years ago | (#10199382)

What effect will this have on DNS hijacking and similar hacking methods which utilize DNS? Will it be easier as things get more 'rapid'?

Re:Misuse? (1)

FooAtWFU (699187) | about 10 years ago | (#10199869)

If it does, I would imagine that it would also make it easier to change *back* rapidly. You'd likely also notice sooner- the servers would change within 5 minutes instead of half a day later. Good luck getting the bureaucracy to recognize your complaint, however...

In case of slashdot effect... (5, Informative)

bruceg (14365) | about 10 years ago | (#10199394)

Upcoming change to SOA values in .com and .net zones

* From: Matt Larson
* Date: Wed Jan 07 17:49:43 2004

VeriSign Naming and Directory Services will change the serial number
format and "minimum" value in the .com and .net zones' SOA records on
or shortly after 9 February 2004.

The current serial number format is YYYYMMDDNN. (The zones are
generated twice per day, so NN is usually either 00 or 01.) The new
format will be the UTC time at the moment of zone generation encoded
as the number of seconds since the UNIX epoch. (00:00:00 GMT, 1
January 1970.) For example, a zone published on 9 February 2004 might
have serial number "1076370400". The .com and .net zones will still
be generated twice per day, but this serial number format change is in
preparation for potentially more frequent updates to these zones.

This Perl invocation converts a new-format serial number into a
meaningful date:

$ perl -e 'print scalar localtime 1076370400'

At the same time, we will also change the "minimum" value in the .com
and .net SOA records from its current value of 86400 seconds (one day)
to 900 seconds (15 minutes). This change brings this value in line
with the widely implemented negative caching semantics defined in
Section 4 of RFC 2308.

There should be no end-user impact resulting from these changes
(though it's conceivable that some people have processes that rely on
the semantics of the .com/.net serial number.) But because these
zones are widely used and closely watched, we want to let the Internet
community know about the changes in advance.

Matt
--
Matt Larson
VeriSign Naming and Directory Services

beep (-1, Offtopic)

DNS-and-BIND (461968) | about 10 years ago | (#10199403)

I don't really care about this thread, I just thought it would be wrong if I didn't post anything here.

Fifteen minutes? (4, Insightful)

semaj (172655) | about 10 years ago | (#10199412)

From the linked NANOG posting:
"At the same time, we will also change the "minimum" value in the .com and .net SOA records from its current value of 86400 seconds (one day) to 900 seconds (15 minutes). This change brings this value in line with the widely implemented negative caching semantics defined in Section 4 of RFC 2308."
Doesn't that mean they're updating every fifteen minutes, not every five?

Re:Fifteen minutes? (3, Informative)

frozen_crow (71848) | about 10 years ago | (#10199574)

no, it does not. it just means that if a resolver receives a "no such name" response from one of the com or net nameservers, that "no such name" response will only be cached for 15 minutes instead of a day.

Re:Fifteen minutes? (2, Insightful)

bfree (113420) | about 10 years ago | (#10200193)

It means that dns servers which act like bind4 and bind8 will set the default Time To Live (TTL) for resource records without an explicit TTL to 15 minutes. Servers which behave like bind9 will use this as the negative caching value for the domain, meaning that if it requests an ip from a domain which doesn't exist it will cache the result for 15 minutes. In effect this should mean that the actual root dns servers will be updated every 5 minutes, but someone looking for the domain (by normal means as opposed to manually querying the root servers) just before the update which brings the domain into existence will have to wait 15 minutes before they will see the domain has arrived.

So they are updating every 5 minutes, but if you are adding a new domain, as opposed to changing the authoritative servers for a domain, you will have to wait 20 minutes (5 for update and 15 for everyone to have lost the negative cache) before you can say "we're up and running".

Re:Fifteen minutes? (2, Informative)

bfree (113420) | about 10 years ago | (#10200245)

Ooops, it's not quite as described above! The root servers aren't being updated any quicker, it's just the .com and .net servers. It doesn't impact the above though, as the root servers just hand out the ip addresses of the authoritative servers for the top level domains, so for a non-existent domain name the root servers will behave just the same as for an existing domain name in the same tld.

Way to go on the reading, sherlock! (1)

Galadhrim (768259) | about 10 years ago | (#10199417)

Quote from VeriSign's website:
"VNDS is scheduled to deploy on September 8, 2004 a new feature that will enable VNDS to update the .com/.net zones more frequently to reflect the registration activity of the .com/.net registrars in near real time."
Quote from /.:
"Beginning September 9, 2004 the SOA records of the .com and .net zones will be updated every 5 minutes instead of twice a day."
Seems that someone got excited and got sloppy!

International Date Format (5, Interesting)

Compact Dick (518888) | about 10 years ago | (#10199418)

It's about time the switch was made -- here's why ISO 6601 is the way to go [demon.co.uk] .

Re:International Date Format (0, Funny)

Anonymous Coward | about 10 years ago | (#10199472)

I'm a geek so I can't get dates, you insensitive clod!

Re:International Date Format -- typo (1)

Compact Dick (518888) | about 10 years ago | (#10199593)

Pointing out the obvious -- that's ISO 8601, not ISO 6601.

Re:International Date Format (1)

danharan (714822) | about 10 years ago | (#10200465)

As noted in the first article you linked to, it's quite bizarre that the ISO asks you to pay money to get a copy of the standard. When shit like this happens, couldn't one of the internet standards organizations publish their own (compatible) standard?

worx for me.. (0)

Anonymous Coward | about 10 years ago | (#10199457)

'net evolution just made a nth power jump..

if a bag of glass falls in the desert, does it make a sound?

Root servers? (4, Informative)

bartjan (197895) | about 10 years ago | (#10199467)

These faster updates are not for the root servers, but for the .com/.net gTLD servers.

2038 fun (2, Insightful)

martin (1336) | about 10 years ago | (#10199477)

Oh great, so now DNS gets potential issues with the 32-bit time-since-epoch problem.

Brilliant move...:-(

What was wrong with sticking extra hour/minutes digits in the serial number - no y2k style problems at all....?!?

ie YYYYMMDDHHmmNN ??

Re:2038 fun (1)

frozen_crow (71848) | about 10 years ago | (#10199599)

that would make the digit string too long.

it doesn't really matter anyway, since zone serial numbers are allowed to wrap. secondaries understand how to handle this event as well, so there's no need for admins to step in and do anything in such cases, either.

Re:2038 fun (0)

Anonymous Coward | about 10 years ago | (#10199668)

True, the DNS will handle whatever you put in the 32-bit SOA version number field. If there is any problem, it's that Verisign's formal specification (number of seconds since 1 Jan 1970) can't be adhered to after the year 2106, when that will require more than 32 bits. Ok, so they have a century to learn modulo arithmetic...

Re:2038 fun (2, Interesting)

gclef (96311) | about 10 years ago | (#10199705)

They just said they were encoding the serial number as the seconds since epoch. They never said anywhere how many *bits* they're using to measure that. In fact, since the serial number is a free-form text field, there's not really any way to overflow that. The epoch overflow shouldn't affect this.

Re:2038 fun (1)

gclef (96311) | about 10 years ago | (#10200029)

I know it's bad form to reply to my own post, but I was semi-wrong, so I should fess up to it. RFC1035 states that the serial number field is 32 bits, but can wrap. The exact text is:
SERIAL The unsigned 32 bit version number of the original copy of the zone. Zone transfers preserve this value. This value wraps and should be compared using sequence space arithmetic.
So, there still isn't an epoch problem, but for a different reason.

Re:2038 fun (1)

martin (1336) | about 10 years ago | (#10200141)

Ok, I mean there is the potential for 32 bit issues, depending on how well the DNS servers (bind, tinyDNS etc) handle the serial number once it's converted from a text string to a number..

just means one more risk/piece we have to check for when the epoch time rolls over the 32nd bit...

Re:2038 fun (4, Informative)

amorsen (7485) | about 10 years ago | (#10200515)

I have no idea where people got the idea that the serial number is a text field. It is a simple 32 bit integer. However, it is supposed to be compared using "sequence space arithmetic". This has been defined in RFC 1982 [sunsite.dk] . Basically it means that overflows are fine, as long as no secondary nameserver keeps really old revisions around. So if you make a secondary for the .com zone now, unplug it for 40 years, and plug it in again, it may fail to get the latest zone.
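
The comparison itself is short enough to sketch in bash (purely illustrative):

# Is serial $2 "newer" than serial $1 under RFC 1982 sequence space arithmetic?
serial_newer() {
    local d=$(( ($2 - $1) & 0xFFFFFFFF ))        # forward distance modulo 2^32
    [ "$d" -ne 0 ] && [ "$d" -lt 2147483648 ]    # newer iff 0 < distance < 2^31
}
serial_newer 4294967290 5 && echo "5 is newer than 4294967290 -- the wrap is fine"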

Re:2038 fun (1)

aboyko (16319) | about 10 years ago | (#10200525)

How do you mean, serial number is a free-form text field? BIND zone files are not the same thing as DNS. See RFC 1035 -- serial is 32 bits.

Re:2038 fun (1)

aboyko (16319) | about 10 years ago | (#10199774)

What was wrong with it? RFC1035 is what was wrong with it. The serial field in the SOA is 32 bits, sparky.

But it's good that you pointed it out to them, because it otherwise might not occur to anyone.

Re:2038 fun (1)

Cecil (37810) | about 10 years ago | (#10200816)

Right, because YYYYMMDDHHmmNN can fit in a 32 bit integer with no problems at all.

4294967295 (max unsigned 32-bit number)
20040909090201 (sample of YYYYMMDDHHmmNN)

Lucky me (1)

IKEA-Boy (223916) | about 10 years ago | (#10199656)

My IP address just got changed 2 hours ago because I switched to a different ISP. I have a nameserver based on my own domain that is registered in the root servers and I expected the IP change to take a couple of days. But when I changed the IP of my nameserver (in the godaddy web interface) I was surprised to see it reflected after only a few minutes:

$ dig @a.gtld-servers.net a ns.XXXXX.net
;; ANSWER SECTION:
ns.XXXXX.net. 172800 IN A 62.216.XXX.XXX   <-- the new IP

Very nice indeed! Now if I could only get zoneedit to accept the notifies my DNS server sends it...

Re:Lucky me (0)

Anonymous Coward | about 10 years ago | (#10200113)

I think you will find the root servers updated quickly, but other DNSes on the net will cache old entries for up to a day.

This feature should not be used as an excuse to avoid planning and testing; skipping those is all too common in the click-kiddy generation.

What about spammers? (1)

thewalled (626165) | about 10 years ago | (#10199695)

Doesn't that also mean that spammers running their own DNS servers will now be able to change nameservers at will :-(, beating SPF in the process?

Just my point of view. Maybe I'm wrong.

- dhawal

Hell Yeah! (3, Interesting)

CptTripps (196901) | about 10 years ago | (#10199829)

This is something that should have been taken care of YEARS ago. It'll make it a LOT easier to switch people over to new servers/change IP addresses and such.

Can't wait to go......switch some IP addresses.... ::: not nearly as exciting when you type it out like that :::

Wow, netsol moves into the 80's (1)

Trailer Trash (60756) | about 10 years ago | (#10199923)

Do they have a web site yet?

I fucking hate Verisign (-1, Troll)

drewzhrodague (606182) | about 10 years ago | (#10200057)

'Nuf Said. Verisign sucks ass.

Things that are Certain (2, Funny)

Mixel (723232) | about 10 years ago | (#10200335)

Death, Taxes and DNS Propagation Delay.

Sooner? (1)

boatboy (549643) | about 10 years ago | (#10201042)

I registered a domain last week w/ godaddy.com, and was quite surprised when it was available within about 10 minutes. The domain went to the correct host from a variety of ISPs and PCs -- meaning it wasn't just my ISP or my PC. Any chance this system could already be in place?