
New DoS Vulnerability In All Versions of BIND 9

kdawson posted more than 5 years ago | from the binding-with-briars-my-joys-and-desires dept.

Security 197

Icemaann writes "ISC is reporting that a new, remotely exploitable vulnerability has been found in all versions of BIND 9. A specially crafted dynamic update packet will make BIND die with an assertion error. There is an exploit in the wild and there are no access control workarounds. Red Hat claims that the exploit does not affect BIND servers that do not allow dynamic updates, but the ISC post refutes that. This is a high-priority vulnerability and DNS operators will want to upgrade BIND to the latest patch level."
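A quick sanity check before and after patching is to ask a server what version string it reports (the hostname below is a placeholder, and many operators hide or fake this string, so treat it as a hint rather than proof of patch level):

dig @ns1.example.com version.bind chaos txt +short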


fp (-1, Troll)

Anonymous Coward | more than 5 years ago | (#28861549)

ladies, get your pussies ready!

god they should learn programming (1)

lapinmalin (1400199) | more than 5 years ago | (#28861555)

that's pretty bad

Interesting (2, Interesting)

PhunkySchtuff (208108) | more than 5 years ago | (#28861573)

This is very interesting. I'm sure the people behind BIND will scramble to get things sorted out ASAP, but I wonder how long it will take other vendors (Apple, I'm looking at you!) to release a patch.

I do have to wonder about exploits like this that seem incredibly serious at first, yet nothing much comes of them and they don't get exploited to the extent you'd expect - this one reminds me of l0pht's famous claim that they could bring down the internet in 30 minutes. If this vulnerability is really as serious as they say, and as easy to exploit as it appears to be, then in the wrong hands it could really be an "internet killer".

Re:Interesting (2, Informative)

d3matt (864260) | more than 5 years ago | (#28861593)

so... any BIND server would be down for a bit... anyone with a caching name server would still be able to surf.

Re:Interesting (2, Interesting)

houstonbofh (602064) | more than 5 years ago | (#28862109)

Only to sites already cached. The more unusual sites would just be all gone. What do you bet http://downforeveryoneorjustme.com/ [downforeve...justme.com] is not cached by your DNS server right now?

Re:Interesting (5, Funny)

Minwee (522556) | more than 5 years ago | (#28862133)

It is now.

This vulnerability also gives the three people running DJB DNS [cr.yp.to] a much needed opportunity for some smugness.

Re:Interesting (5, Funny)

kriebz (258828) | more than 5 years ago | (#28862287)

I was under the impression they had smugness to spare.

Re:Interesting (1)

HARRRRRR (1171221) | more than 5 years ago | (#28862351)

*bzzzzt* sorry pal...

you're assuming nobody follows rfc1912.

also, what happens when the (ridiculously configured) host you're trying to browse goes to do a reverse lookup on your address?

I have my own "patch", called a HOSTS file... apk (-1)

Anonymous Coward | more than 5 years ago | (#28862205)

You know, what with all the DNS poisonings, faults, & other lunacy + madness going on, I decided LONG AGO, circa 1997 or so, to "issue my own patch"... & it works well enough. It comes from "antiquity", & it's called a custom HOSTS file.

I use HOSTS files!

First of all, to block out KNOWN bad sites &/or adbanner servers, for added layered security!

(&, "good banners" here too, because imo? Well, it's widely known that for more than just a few years now, adbanners have been shown as harboring malicious code payloads of many kinds, even MS was hit by it)

AND, because I PAY FOR MY LINETIME, so I can enjoy full speed ahead/HBO style internetting, not to finance others' interests, & it IS MY MONEY - just because I figured out how to get the MOST for it online, doesn't make me evil - just safer, & F A S T E R... lol! How about you?

I also use HOSTS files to 'hardcode' 200 or so my fav. websites into it, so I never even HAVE TO CALL ON DNS SERVERS, which might be poisoned... & it gets me the URL-to-IP address resolution FASTER anyhow too!

I put mine onto a GIGABYTE IRAM (150 SATA 1, 4gb DDR-400 Kingston RAM) for the fastest possible access/seek & constant caching (because this is a dedicated RAM board, for what I tell it to do, only, not like system-wide RAM) and fast reads, & onto an NTFS compressed partition (makes the file as small as it can be on disk due to compression for faster 'pickup', @ least on std. HDD's) + I use the "0" blocking IP address (makes the file small as possible, e.g. mine -> 14mb using 0, 20mb using 127.0.0.1) so it reads & caches faster (especially on 4kb formatted partition, matching cache/memory/filesystem 4kb per pass patterns).
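(For anyone who hasn't seen one, a HOSTS file is just "IP-address hostname" pairs, one per line - the entries below are made-up examples, not my real list:

0 ads.example-banner-network.com
0 tracker.known-bad-site.example
192.0.2.10 www.my-favorite-site.example

The first two are blocks, the last one is a hardcode - & the IP there is only a placeholder, you'd re-ping the real site to get its current address.)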

After altering -> HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters & the DataBasePath parameter to match the new HOSTS file location on SSD...
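(If you want the exact command for that - the drive letter & folder here are just examples, & DataBasePath points at the folder that holds the HOSTS file, the default being %SystemRoot%\System32\drivers\etc:

reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v DataBasePath /t REG_EXPAND_SZ /d "R:\etc" /f

...so the IP stack reads HOSTS from the new location.)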

Works, fast & does the job. No DNS hassles, or rather, not as many & I use OpenDNS which is, one of the better ones in a lot of ways, vs. std. DNS servers most ISP/BSP's offer anyhow...

(That is, it works great, until a fav. site of mine hardcoded into my HOSTS file (as to its URL to IP address equation/translation) changes its IP Address, usually due to changing HOSTING providers for their site, which is rare!)

Now - MOST sites let you know they are about to do an IP address change usually, anyhow, so you re-ping them & have @ them again, via this method & avoiding DNS port 53 udp requests altogether... And? Doing hardcodes of your favs in a HOSTS file not only protects you vs. poisoned DNS servers, but... again:

Doing 'hardcodes' in a HOSTS file of your fav, sites allows You to:

1.) Resolve URL-to-IP-address of websites faster - by resolving their IP address from a hosts file hardcode (instead of calling out to a potentially poisoned DNS server... especially lately!). It can take 30-60ms or so to call out to a DNS server & get a return... from a HOSTS file? Fractions of that, due to disk access speeds & file access seek (faster than 30-60ms by FAR)

2.) YOU CAN STILL GET TO HOSTS FILES HARDCODED WEBSITES of your fav sites, EVEN IF YOUR DNS SERVERS GO DOWN too

3.) You cannot be "logged" by your ISP, not as easily via monitoring your DNS requests over port 53 udp either... Germans may take interest in that, in particular, as it is going on in THEIR nation lately from what I heard here about "A Black Day for Internet Freedom in Germany" here -> http://yro.slashdot.org/article.pl?sid=09/06/16/1657255 [slashdot.org] ...

APK

P.S.=> No DNS problems here... ever, especially as I am not "all over the web", & only regularly hit around 200 sites top/max anyhow! Yes, in a strange way, I am my own "DNS" & better in many ways, from a simple text file - LOL, one that makes me a LOT safer than most, & faster as well, so 'beat that with a stick'...

To quote Ozymandias from "The Watchmen":

"So I resolved to apply antiquities teachings (usage of custom malicious site &/or adbanner blocking HOSTS files) to the world, today, & so began my conquest: Conquest, NOT OF MEN, but, of the evils that beset them - Fossil Fuels (antivirus resident), Oil (antispyware resident), Nuclear Power (VM for security layers), are like a drug, & YOU GENTLEMEN, along with foreign interests (RBN, etc. et al), are the pushers..." - Adrian Veidt (Ozymandias), THE WATCHMEN

All of those? Useful for some, but, once you KNOW what you're doing with these machines? Wholly unnecessary as resident in services &/or trayicon apps, merely speed hits really, with an illusion of perfect security. You can do better doing a guide like this instead -> http://www.tcmagazine.com/forums/index.php?s=87203c9d6d4117d11f30ee4e89cf27d4&showtopic=2662 [tcmagazine.com] & a LOT better (faster & safer).

Should DNS be added too? No. It just needs work. DNSSEC is a way (past TLD & ROOT DNS SERVERS etc. though), IPv6 might help, but, no... DNS is a 'good necessary evil', even for my setup, sometimes.

(So, just use the most currently patched ones then, simple).

Now, I am relatively sure OpenDNS servers will be patched fast enough for this, as, they usually are, & I use them, happily because of knowing that, albeit, lol... sparingly, because of HOSTS files.

And since HOSTS files ARE "from antiquity" in computing?

They work for that "behavioral modification", too, because of a simple principle: "You can't get burned, if you can't go into the kitchen" & don't get slowed down by it either... & that goes for limiting indiscriminate javascript usage (NoScript + AdBlock for MOZILLA/FireFox products, & Opera's native "by site" preferences are perfect here in fact, but there's more for 'layered security', like filtering .PAC files + custom cascading stylesheets & more as well, far more @ the OS + Apps levels)...apk

Re:I have my own "patch", called a HOSTS file... a (1)

ShakaUVM (157947) | more than 5 years ago | (#28862259)

Your post reads like you'll ask for $20 to show people how THEY TOO CAN SET UP A .HOSTS FILE.

Just saying.

Also, your approach is stupid because I like to use the internet.

It's because it works, & I believe in every wo (0)

Anonymous Coward | more than 5 years ago | (#28862483)

See subject-line, & "just sayin", right back @ ya... because, it works, "exactly as advertised" with a 100% free price (especially considering I am not selling a thing & you all have one already, lol).

My approach isn't stupid in regards to that. Free? That's a "pretty good price", wouldn't YOU say? And, you're also FREE to customize it, & thus, YOUR PERSONALIZED VERSION OF A CUSTOM HOSTS FILE, JUST GOES ALONG WITH YOUR PERSONALIZED SPED UP & SAFER VERSION OF THE INTERNET... &, just as YOU see fit & like, easily. Notepad.exe for instance? My gosh - lol, just "does wonders" here, on this account... lol!

(Plus, using HOSTS files makes me FAR faster online, by double just by blocking adbanners (javascript on the rest helps too, IF it is not demanded for full function), as people will attest to that much by the truckload, go to say, mvps.org & see their forums on that note, as 1 example... & it makes me FAR SAFER too).

ALL, from a simple text file no less that you already have as long as you have a BSD derived IP stack, & you most likely do, & that YOU can completely control + customize to your liking, yourself, easily. So can anyone else, for free, same bennies, as long as you can read english & use notepad.exe (in Windows that is on the latter).

Put it this way -> I'll let others speak for me, on this account, instead, via these evidences thereof:

Even "security guru" Oliver Day @ SecurityFocus.com sees using HOSTS as a good thing for added layered security AND MORE SPEED ONLINE -> http://www.securityfocus.com/columnists/491 [securityfocus.com]

AND?? So do folks like "SpyBot Search & Destroy" also (since their app populates not only the HOSTS file, but, also files like Opera's Filter.ini, FireFox's block lists, & IE Restricted Zones also, for LAYERED SECURITY (this is the trend & recommended practice by security folks by the by, myself included))

Hey - Even this slashdotter, sootman, uses one & made many interesting points that support his usage of a HOSTS file, from mvps.org, here -> http://tech.slashdot.org/comments.pl?sid=1300193&cid=28677363 [slashdot.org]

"Also, your approach is stupid because I like to use the internet." - by ShakaUVM (157947) on Wednesday July 29, @12:21AM (#28862259) Homepage

QUESTION: How does going almost double as fast and safer make you not be able to use the internet?

(Thanks for your answer!)

Aha, this "epiphany/revelation" just struck me... lol:

You are merely a reply from, no doubt, a webmaster worried about page adbanner hits, or an ads server marketing man, lol... ok, to that? I can only say, this:

----

The-Next-Ad-You-Click-May-Be-a-Virus:

http://it.slashdot.org/story/09/06/15/2056219/The-Next-Ad-You-Click-May-Be-a-Virus [slashdot.org]

----

That's for readers' reference... & I am certain they too, realize you are either a malware maker/botmaster/hacker-cracker/spyware-virus-rootkit maker, or "money man online" (Both it seems, are profiting by the misfortunes of others basically, by possibly infecting them... and yet making monies from them also for pageviews & adbanner hits...? A good 'hosing' of the customer, & from BOTH ends (literally & figuratively)).

Time for enough of that, I think.

APK

P.S.=> I'll gladly discuss any of this & add to that above too... that's just for starters on this "antiquity" item, being EXTREMELY useful, TODAY, & for better security AND BETTER SPEED, online, today (reliability too it looks like from this article) - & I'll do so, because I love this topic + know it actually works, & WELL!

On this? Hey man, I am, truly, "The LORD OF HOSTS", on the subject of HOSTS files, so glad to entertain any debate on them... apk

Re:I have my own "patch", called a HOSTS file... a (1)

rs79 (71822) | more than 5 years ago | (#28862725)

" Your post reads like you'll ask for $20 to show people how THEY TOO CAN SET UP A .HOSTS FILE "

Still cheaper than a $35 domain from Verisign.

There's no place like 127.0.0.1 (click) There's no (1)

Nefarious Wheel (628136) | more than 5 years ago | (#28862277)

Don't forget to set your hosts file to read only. There's bastards out there who will rewrite it for you. Ads. I have a huge hosts file too, but it's mostly for homing out annoyances. Tip: use Notepad++ for editing your hosts file instead of standard Notepad. The former preserves the extension-less filename that hosts requires; the latter adds .txt, and you're stuck shuffling file names around. Nice little editor, too.
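For the read-only part, something like this does it from a command prompt (standard Windows path assumed):

attrib +r %SystemRoot%\System32\drivers\etc\hosts

(and attrib -r to undo it when you want to edit).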

0 is smaller & F A S T E R, than 127.0.0.1... (0)

Anonymous Coward | more than 5 years ago | (#28862549)

Using 0, as I do, is F A S T E R & more efficient than using 127.0.0.1 though (your other points are good - mine's been not only WRITE protected, but also ACL protected too (keep THAT in mind, & use NTFS)).

I can prove that to you, via you doing something as simple as loading your HOSTS into a LISTBOX (smaller one, then larger ones, or even a converted blocking address one I go into next) or using a std. compiler's language to do the File Open/Read/Close cycle, using a loop (+ a hi-resolution multimedia timer registered with the system to time it)... you can prove it, yourself, if you code.

My file has 654,000++ entries in it (200 are hardcoded favs, rest are blocked adbanner servers for speed & more security, along with KNOWN bad sites (I can supply sources if you wish, all reputable)).

Using 0, as my BLOCKING IP ADDRESS (vs. adbanner servers or known bad sites), it is only 14mb in size!

Next - going on to 0.0.0.0 instead, though smaller than 127.0.0.1, gets you to 18mb in size on my file...

Using 127.0.0.1 though, the loopback adapter? It uses some CPU afaik, because of what it is (& I am pretty sure 0 &/or 0.0.0.0 are like the NUL port in DOS, pure nada, no cpu usage or not as much), AND, it is larger by far, hitting 20mb on my file (with as many line entries, just converted via a program I wrote for that here, that also removes duplicates from it & pings my favs to keep them current).

Larger files? SLOWER, period... even when accounting for the 4kb sweeps/passes the memmgt/caching/filesystem/disk drivers utilize, because my using 0 vs. larger blocking IP addresses of 0.0.0.0 or 127.0.0.1 makes for shorter lines in a HOSTS file... meaning MORE OF IT GETS PICKED UP, per PASS/SWEEP, each sweep/pass... more mileage, more power, & even more safety. HOSTS are great for it, but doing 0 based IP address ones for blocking only makes them, the BEST they can TRULY be.
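Same made-up entry three ways, just to show the per-line difference:

0 ads.example.com
0.0.0.0 ads.example.com
127.0.0.1 ads.example.com

Multiply that by 654,000 lines & that is where the file-size gap above comes from.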

APK

P.S.=> I'd also think that since 0 hex = 0 decimal, that the decimal-to-hex & vice-a-versa that goes on shouldn't be necessary on IP addresses like 0, because if you ping a 0 blocked IP in your HOSTS file, you get 0.0.0.0 back though, so not sure on this much, though it might be a GOOD idea for design (127.0.0.1 gets converted & so does 0.0.0.0 iirc, but doing it for 0? Not needed) you save on CPU here & storage too, plus speed gains due to lack of that... LESS IS MORE, & "0"? Accept NO substitute, lol... apk

Re:0 is smaller & F A S T E R, than 127.0.0.1. (1)

shentino (1139071) | more than 5 years ago | (#28862687)

Is it faster for 0.0.0.0 to give you nothing or for 127.0.0.1 to give you a connection refused?

The end result is the same (you don't get there) (0)

Anonymous Coward | more than 5 years ago | (#28862723)

See subject-line, & rinse/lather/repeat...

APK

P.S.=> Because 127.0.0.1 is the "loopback adapter", that means it is 'doing something', even if it only points to "yourself"... it is a "loopback" mechanism. Processing occurs. Afaik? 0 & 0.0.0.0 are like the NUL device in DOS - nowhere, a waste bucket... no processing needed for that, not really.

Using 0 though? Hey, big deal, even IF you are running a webserver (because this causes some minor err msgs on some of them & some config file work can clear that or errmsgs (inconsequential ones really) & most folks don't anyhow, run Apache or IIS or whatever @ home because to do a 'real job' of it, you need a commercial account usually, or they kill you on bandwidth & brick your site, if not eventually)...

0 doesn't do as much processing on disk or as a loopback address either (& iirc, neither does 0.0.0.0 since 0 equates to that but makes for a 25% less sized HOSTS file, & thus, it is faster on disk into memory too because of that & 0, if you think about it, vs. 0.0.0.0 or 127.0.0.1 doesn't even really require a decimal-to-hex conversion really since 0 decimal = 0 hex... that'd be nice to see in the IP stack though because of efficiency if not there already though)! apk

Re:I have my own "patch", called a HOSTS file... a (3, Insightful)

hairyfeet (841228) | more than 5 years ago | (#28862507)

Sounds like a lot of work when you can just run Treewalk DNS [ntcanuck.com] and be done with it. It is fast, uses very little resources (mine is using 5Mb ATM) and never gives a bit of trouble.

Why waste CPU cycles on that vs. HOSTS though? (0)

Anonymous Coward | more than 5 years ago | (#28862615)

Why waste CPU cycles running those, when a HOSTS file does the job & users have one already, PLUS for FAR LESS COSTS cpu-wise, software-wise, etc. et al (takes zero cpu cycles, as it is not a program, but more or less a guard-filter & speed upper for favorites, that you already own too).

"Sounds like a lot of work when you can just run Treewalk DNS and be done with it.It is fast, uses very little resources (mine is using 5Mb ATM) and never gives a bit of trouble." - by hairyfeet (841228) on Wednesday July 29, @01:11AM (#28862507)

Sure, that might work & there are many alternate local DNS servers & such one can use, but per my subject-line above? Well... that & my p.s. explain my stance on it. I go faster & safer, using a little text file & an editor... it's THAT simple, & inexpensive (costs & cpu cycles wise + RAM usage possibly)...

Right now? Well... I just do NOT trust the Domain Name System like I used to, especially because of articles like this one. That includes BIND, & really, any others too. Sometimes, yes, I have to use them, even with a HOSTS file, but I minimize that, hugely & use the ones that patch first.

APK

P.S.=> Plus, seeing all this DNS poisoning, redirections, & other shenanigans such as Dan Kaminsky found last year/this year, or, this article's points too? No thanks... no offense intended, but, no thanks! apk

Re:I have my own "patch", called a HOSTS file... a (1)

shentino (1139071) | more than 5 years ago | (#28862635)

Any ISP's DNS that mucks about with NXDOMAIN is by definition not standard.

Your point is.... what? (0)

Anonymous Coward | more than 5 years ago | (#28862695)

See subject line - are you talking about OpenDNS, or something else...?

(If about OpenDNS - Well, I didn't say they were 'standard' by any means, if so, show us where I did please, thanks... & - What I like about them, for what little I use them for anyhow, is that when Dan Kaminsky found the hassles in BIND last year/this year? They were patched, a.s.a.p.)

IF that is what you're referring to... that is. I must admit, I am not really sure what you mean here or in regards to what...

APK

P.S.=> I'll be awaiting your reply, & Thanks for your time... (but, if you were not addressing me, & you did so by accident (which happens)... then that's cool, forget about it)... apk

Re:Your point is.... what? (1)

shentino (1139071) | more than 5 years ago | (#28862743)

Just like I said in my post, I'm talking about ISPs (lookin at YOU charter...) that supply a malicious DNS server.

I'm no ISP/BSP, & not w/ charter, but... (0)

Anonymous Coward | more than 5 years ago | (#28862775)

You DO "hear tell" of what you state though... especially the past couple years, & yes, here on this website.

(I see what you mean now, I thought you meant ME, lol... or, OpenDNS!)

I hear some of what they do is redirect banner requests or search filtering (even OpenDNS does the latter, iirc, via opendnsguide.com), but don't QUOTE me on this much, it is only operating on memory (yes, lol, more than "640K: ALL A BODY NEEDS!", lol), so the details are a bit dim on the exacting details of what little I recall... why?

I rarely really USE DNS servers, even the non-ISP/BSP ones (much less my ISP/BSP's, which are ok afaik) like OpenDNS... because of HOW I use my HOSTS file in addition to knowing my regular "surfing patterns"... as far as hardcodes, & I am certainly NOT resolving many adbanners, lol, this is sure (I go faster this way, I pay for my linetime, I want ALL of it) & I am not hitting bogus sites, because I keep this file up, daily ESPECIALLY vs. that much.

APK

P.S.=> HOSTS files, & OpenDNS do the job for me (the former? Probably a GOOD 95% of the time & F A S T, & as efficient as possible, per the format, layout, & placement of it I use)... apk

Modded down? Why?? At least say why cowards... (0)

Anonymous Coward | more than 5 years ago | (#28862939)

See my subject-line above, because the TRUE "anonymous cowards" are the ones with the mod points who mod others down, but say nothing as to WHY specifically... &, if you're going to mod my post down, won't you @ least show the "intestinal fortitude" to give reasons why I am in error (I am not), or what you disagree with @ least? Thanks for your time (even a detractor's time, because you MAY have points that are reasonable, which would look better than just modding me down for no reason given, I would think @ least).

APK

P.S.=> Ah, but then? Sometimes?? Perhaps I expect "too much"... lol, "TOO EASY"... apk

Re:Interesting (1)

rs79 (71822) | more than 5 years ago | (#28862703)

" This is very interesting. I'm sure the people behind BIND will scramble to get things sorted out ASAP, but I wonder how long it will take other vendors (Apple, I'm looking at you!) to release a patch. "

I'd be less concerned about that than I would be about how long it will take for people to do something about this on their nameservers. IMO the best update to BIND is DJBDNS but that's just me.

Either way, there are FIVE HUNDRED THOUSAND nameservers out there. Some of them still run Bind 4.7.

Use Unbound or NSD (5, Informative)

nwmcsween (1589419) | more than 5 years ago | (#28861589)

I don't want to bash BIND, but it has had a fair number of security issues (well, a lot); try Unbound or NSD instead: http://unbound.nlnetlabs.nl/ [nlnetlabs.nl] http://www.nlnetlabs.nl/projects/nsd/ [nlnetlabs.nl]
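For the caching-resolver case, a minimal unbound.conf sketch looks something like this (the interface and access-control values are just examples for a localhost-only resolver):

server:
    interface: 127.0.0.1
    access-control: 127.0.0.0/8 allow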

Re:Use Unbound or NSD (5, Informative)

medlefsen (995255) | more than 5 years ago | (#28861789)

or djbdns. We use it where I work and other than a slight adjustment to djb-land it has been wonderful. I know people appreciate how powerful BIND is and maybe some people need that. I suspect though that most people just need their DNS servers to serve their DNS records or provide a caching DNS server for local lookups and for that BIND seems to be bloated and insecure.

Re:Use Unbound or NSD (1)

buchner.johannes (1139593) | more than 5 years ago | (#28862209)

for dns caching, dnsmasq is nice too, but I'm not certain that it has a good security history.

FOSS is bettar!!! (-1, Troll)

Anonymous Coward | more than 5 years ago | (#28862419)

If only BIND were open source, this would never have happened.

Oh wait...

Re:Use Unbound or NSD (2, Interesting)

abigor (540274) | more than 5 years ago | (#28862661)

PowerDns for the win. Plus it reads legacy BIND zone files.

Well.. (2, Funny)

TechyImmigrant (175943) | more than 5 years ago | (#28861603)

Well, DNS operators do appear to be in a bit of a bind, don't they?

Re:Well.. (0)

Anonymous Coward | more than 5 years ago | (#28861619)

They would agree, but keep making errors in their assertions.

Ain't what it used to be.... (3, Interesting)

mcrbids (148650) | more than 5 years ago | (#28861609)

There was once a day when a notice like this would kick off a flurry of migration plans, compiler scripting, compiling, and restarting servers in the dead of night (and bonuses to match!).

But now?

# yum -y update && shutdown -r now

Sometimes I pine for the 'good old days'. A little. (ok, hardly at all)

Re:Ain't what it used to be.... (1)

MichaelSmith (789609) | more than 5 years ago | (#28861623)

You seem to be just taking all changes and rebooting. I do that all the time on my ubuntu laptops but I wouldn't manage my servers that way.

Having said that, patching in NetBSD will require a compilation at my end. It would be nice if I could just update a package. The infrastructure is right there for it...

Re:Ain't what it used to be.... (3, Informative)

secolactico (519805) | more than 5 years ago | (#28862369)

You seem to be just taking all changes and rebooting. I do that all the time on my ubuntu laptops but I wouldn't manage my servers that way.

More so because some distros' package managers (CentOS, for example) tend to replace customized init.d files with the stock ones (renaming the ones you had). This is not really a big deal, but it sometimes breaks some services.

Re:Ain't what it used to be.... (1)

Elshar (232380) | more than 5 years ago | (#28862619)

I see that there are several versions of BIND in the pkgsrc binary packages tree; wouldn't a new patched one show up there fairly quickly? That would save you having to recompile anything. Not that BIND generally takes a long time to compile on fairly modern hardware..

Re:Ain't what it used to be.... (1)

MichaelSmith (789609) | more than 5 years ago | (#28862747)

You are right but I would have to find a clean way to uninstall the built in one. Otherwise I might pick up part of the wrong version. I think the debian approach of putting much or all of the base system inside built in packages makes upgrades a lot easier.

Re:Ain't what it used to be.... (4, Informative)

ScytheBlade1 (772156) | more than 5 years ago | (#28861625)

I'm just hoping that CentOS pushes out the update before 10:00 PM MST today.

Why?

So I'll get my daily e-mail status update, telling me to do just that: run yum, and then restart (just bind) -- as opposed to seeing it tomorrow.

As a footnote, it is generally a good thing to subscribe to whichever vendor's security-announce list that you use. It is really nice getting e-mail notifications of security-related package updates. CentOS has one, right here: http://lists.centos.org/mailman/listinfo/centos-announce [centos.org]

Re:Ain't what it used to be.... (4, Insightful)

lordkuri (514498) | more than 5 years ago | (#28861629)

Why in the holy hell would you reboot a server to put a new install of BIND into service?

Re:Ain't what it used to be.... (4, Insightful)

palegray.net (1195047) | more than 5 years ago | (#28861727)

Because modern-day admins don't know how to restart a service?

Oh, wait, these are fellow Linux "admins" we're talking about...

Re:Ain't what it used to be.... (1)

QuoteMstr (55051) | more than 5 years ago | (#28861787)

The strange thing is that he used shutdown -r now instead of this newfangled reboot the kids like to type. If you know what shutdown does, you should know when to not use it.

Re:Ain't what it used to be.... (2, Funny)

houstonbofh (602064) | more than 5 years ago | (#28862117)

Remember when "shutdown -rfn" would work? Ahh... The days of youth.

Re:Ain't what it used to be.... (0)

Anonymous Coward | more than 5 years ago | (#28862301)

init 6

Re:Ain't what it used to be.... (3, Funny)

FishWithAHammer (957772) | more than 5 years ago | (#28862911)

I never heard that one, but please tell me it stands for "Right Fucking Now."

No need to restart bind after updating using yum (2, Informative)

dusanv (256645) | more than 5 years ago | (#28861913)

It gets restarted automatically. Check system.log.

Re:Ain't what it used to be.... (1)

mcrbids (148650) | more than 5 years ago | (#28862963)

Because modern-day admins don't know how to restart a service?

Oooh! Oooh! I think I can get this one! Either of these should work:

# service named restart;
# /etc/rc.d/init.d/named restart;

But... if you have a properly designed network, why the **** wouldn't you reboot your name server? Given that there are minimally TWO of them registered for your domain name, that the DNS protocol is designed to seamlessly fail over in the event of a failure, rebooting the name server will have no discernible effect for any end user, but will provide assurance that all libraries and settings have taken full effect, as the O/S vendor intended.

I have 4 name servers, and move them around as needed to ensure low-latency, redundant connections. Fault tolerance is most important. Any server or network can go down and still result in my ability to change DNS and publish globally on short notice in the event of a severe outage. A single nameserver being down for the ~ 1-2 minutes it takes to reboot is a non-issue.

Downtime: 0

Peace Of Mind: 1

You tell me, (ahem) ninja super-admin-who-knows-how-to-(re)start-a-service?

Re:Ain't what it used to be.... (0)

Anonymous Coward | more than 5 years ago | (#28861833)

Must be a Windows user.

Re:Ain't what it used to be.... (-1, Troll)

Anonymous Coward | more than 5 years ago | (#28862017)

Typical RedHat moron. I mean hell, they can't even pick a decent distro.

Always do a reboot test ... (4, Insightful)

ZeekWatson (188017) | more than 5 years ago | (#28862089)

If you're running a serious server you should always do a reboot test after installing any software. I've been burned many times by someone doing a "harmless" installation only to find out 6 months later a critical library was upgraded with an incompatible one (a recent example is expat 2.0) and the server doesn't boot like it should.

Always reboot! Even with the super slow bios you get in servers nowadays it should only take 2 minutes to be back up and running.

Re:Always do a reboot test ... (1)

DNS-and-BIND (461968) | more than 5 years ago | (#28862323)

So...with linux, you should always reboot upon applying any sort of application update. I weep for the future of our computing race.

Re:Always do a reboot test ... (2, Interesting)

Vancorps (746090) | more than 5 years ago | (#28862705)

Why? Your DNS servers are clustered and load balanced, right? Rrright? Those of us who need our infrastructure up don't think twice about rebooting even during the day! A golden age we live in indeed, when I can just take the server out of the load balancer rotation, apply updates, perform the reboot test, and then put it back into rotation, repeating the steps for all servers in the cluster.

Re:Ain't what it used to be.... (2, Insightful)

Antique Geekmeister (740220) | more than 5 years ago | (#28862169)

Because you may have a stack of other pending updates, particularly kernels, and this has been the first "gotta switch" update in quite some time for those core servers? Also because without the occasional reboot under scheduled maintenance, it's hard to be sure your machines will come up in a disaster. (I've had some gross screwups in init scripts and kernels cause that.)

Re:Ain't what it used to be.... (1)

c0y (169660) | more than 5 years ago | (#28862179)

Because the OP probably had a lingering kernel update anyway. They come out with enough regularity that, despite having been current on my boxes sometime in the last two weeks, I found another one after returning from vacation this weekend. It wasn't critical and not worth taking the time for immediate action on. Still, I'm not that brave. I like to examine yum a little more closely.

Re:Ain't what it used to be.... (1)

DeathElk (883654) | more than 5 years ago | (#28861731)

And hope to hell you've got some sort of LOM for when your server doesn't come back up.

Re:Ain't what it used to be.... (0)

Anonymous Coward | more than 5 years ago | (#28861783)

On Debian, apt-get restarts services that it updates. I would expect yum to do the same.

Re:Ain't what it used to be.... (1)

Olmy's Jart (156233) | more than 5 years ago | (#28861869)

This isn't Windows...

# yum -y update named\* && service named restart

(Not sure if yum [or apt] would restart named and NOT willing to take the chance.)

Pray it comes back (1)

russlar (1122455) | more than 5 years ago | (#28861969)

# yum -y update && shutdown -r now

and pray to FSM that it comes back up.

All versions of Bind 9? (2, Funny)

Yvan256 (722131) | more than 5 years ago | (#28861631)

Good thing I'm using FreeDOS!

At least someone agrees that BIND 9 had issues... (2, Interesting)

bogaboga (793279) | more than 5 years ago | (#28861637)

According to this document [ripe.net], BIND 9 has issues including being monolithic, having a "bad process model", and being hard to administer and hard to hack. That's not a good reputation to have.

To some extent, these issues apply to everything Linux save for the last point. I am waiting for the time these points will not apply to Linux and its associated software.

I must say that understanding BIND's configuration file was not that easy for me at first but after trying several times, I can say I am almost an expert. Things can be made simpler though. A text based interactive system could be of a lot of help. Tools like Webmin come in handy too though they require that a system be running initially.

Re:At least someone agrees that BIND 9 had issues. (1)

Anonymous Coward | more than 5 years ago | (#28861715)

Difficult compared to what? DJBDNS is much more difficult to wrangle. It's really not that bad if you attempt to learn it.

Re:At least someone agrees that BIND 9 had issues. (5, Informative)

profplump (309017) | more than 5 years ago | (#28861865)

Recent versions of BIND (8+) are not terrible to administer, and have much more reasonable data files. Older versions were *really* nasty, and had a data file format so complicated that we invented a dedicated zone-transfer mechanism just so people could send DNS data to each other.

And while djbdns uses an unconventional admin system with lots of environmental variables, that's a one-time setup (that is probably done in large part by your package manager) and the actual data files are dead-simple -- plain text, one record per line, can do DNS lookups at build time, can concatenate files, etc. There are valid complaints to be made about djbdns, but I don't think "difficult to wrangle" is one of them.
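For anyone who hasn't seen it, a tinydns data file sketch (all names and addresses below are made up) - one record per line, as described:

# delegate example.com to ns1.example.com (also creates the A and SOA records)
.example.com:192.0.2.1:ns1.example.com
# A + PTR for www
=www.example.com:192.0.2.10
# MX for the domain, priority 10
@example.com:192.0.2.20:mail.example.com:10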

Re:At least someone agrees that BIND 9 had issues. (1, Informative)

Anonymous Coward | more than 5 years ago | (#28862153)

Recent versions of BIND (8+) are not terrible to administer

Try configuring dynamic DNS through nsupdate with a shared secret.

If you have an NS key, you can specify the key on the command line, or you can store the key in a file, and pass the filename.

The former is a security risk (as anyone running 'ps' can see your key). The latter? Well, someone decided that it would be a good idea to hard code metadata in the filename (even though the same metadata must be present inside the file too.) Oh, and you need two files, even though it's only using one. Oh, and you need to name the key the same as the zone in your named.conf.

Considering that I've only ever seen that level of idiocy from first year comp-sci majors, I have to wonder at the technical competence of the people in charge of writing BIND.
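For readers who haven't fought with it, the dance looks roughly like this (the key name, id, and secret below are invented):

# generates TWO files, K<name>.+157+<id>.key and K<name>.+157+<id>.private,
# with the algorithm and a random id baked into the filenames:
dnssec-keygen -a HMAC-MD5 -b 128 -n HOST example.com.
# either hand nsupdate the file (filename metadata and all)...
nsupdate -k Kexample.com.+157+12345.private
# ...or put the secret on the command line, visible to anyone running ps:
nsupdate -y example.com.:c2VjcmV0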

Re:At least someone agrees that BIND 9 had issues. (2, Informative)

rs79 (71822) | more than 5 years ago | (#28862923)

" Older version were *really* nasty, and had a data file format so complicated... "

Remember that this was a product of the early 1980s: Brian Reid, Director of Digital Equipment Corporation's Network Systems Laboratory ("decwrl.uucp"), hired a kid, Paul Vixie, to take the buggy Berkeley B-tree code and turn it into something resembling professional software. At the time C was not even close to ubiquitous; assembler was, and in fact the great majority of code written for the early microprocessor-based systems of that era was written in assembly.

So it should not be any great shock that bind config files looked like assembly code, or that the later versions looked like C.

Frankly, I found the earlier BIND config files much easier to use, and the djbdns config files easier still (once you get used to them); much more importantly, you can write a program to manipulate that data very easily. It's ugly and complicated with BIND data files of any version.

Only effective against MASTERS... (5, Informative)

Olmy's Jart (156233) | more than 5 years ago | (#28861653)

From the advisory: "Receipt of a specially-crafted dynamic update message to a zone for which the server is the master may cause BIND 9 servers to exit. Testing indicates that the attack packet has to be formulated against a zone for which that machine is a master. Launching the attack against slave zones does not trigger the assert."...

So an obvious workaround is to only expose your slave DNS servers and to not expose your master server to the Internet. That's part of "best common practices" isn't it? You have one master and multiple slaves and you protect that master. Come on, this is pretty simple stuff. Just simple secure DNS practices should mitigate this. Yeah, if you haven't done it that way to begin with, you've got a mess on your hands converting and it's easier to patch. But patch AND fix your configuration.
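A rough sketch of what that looks like in named.conf (the zone name and addresses are placeholders):

// on the exposed slaves:
zone "example.com" {
    type slave;
    masters { 192.0.2.53; };     // the hidden master
    file "slaves/example.com.db";
};

// on the hidden master, which never faces the Internet:
zone "example.com" {
    type master;
    file "master/example.com.db";
    allow-transfer { 192.0.2.10; 192.0.2.11; };  // the slaves
    allow-update { none; };
};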

Re:Only effective against MASTERS... (1)

jurv!s (688306) | more than 5 years ago | (#28861711)

agreed. ++

Re:Only effective against MASTERS... (1)

Jurily (900488) | more than 5 years ago | (#28861967)

That's part of "best common practices" isn't it?

Two posts up there is someone mentioning a reboot to solve this. Best practices seem like rocket science around here...

Re:Only effective against MASTERS... (1)

totally bogus dude (1040246) | more than 5 years ago | (#28862011)

Hmm, both of my public servers are 'masters' because the zones are synced via rsync over SSH from an internal server which actually has the master copy of the zones. However, as far as BIND is concerned, the public-facing ones are masters.

I could potentially trick it into thinking it's a slave zone but seems too fiddly/risky, so I'll just wait for it to be patched. Nagios will tell me if they stop working, anyway.

Re:Only effective against MASTERS... (1)

Olmy's Jart (156233) | more than 5 years ago | (#28862045)

Perhaps you should rethink that mistake and create a real "master" and make them "slaves". The system was designed this way for a reason. It baffles me why people do things this way.

Re:Only effective against MASTERS... (4, Insightful)

raddan (519638) | more than 5 years ago | (#28862151)

Because lots of people don't want intruders being able to affect the actual zone data in case an outward-facing DNS server gets compromised. Using SSH to transfer zone data is much easier and more secure than BIND's own zone transfer mechanisms (e.g., you can automate and schedule them), and you don't have to worry about zone transfers through firewalls. Troubleshooting all the weird crap that can happen between different DNS daemons all supposedly doing regular AXFRs is a real pain in the ass. SSH makes life easier.
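The push itself can be as small as this (the host and paths are examples):

rsync -az -e ssh /var/named/zones/ dns1.example.com:/var/named/zones/ && ssh dns1.example.com 'rndc reload'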

If having a DNS machine on the Internet that thinks it is a master really is a mistake, well then, BIND9 is a piece of shit. This is the most straightforward thing a DNS daemon should be asked to do.

Nowhere in BIND's manual does it say people have to use BIND in a master/slave setup.

Re:Only effective against MASTERS... (1)

psyclone (187154) | more than 5 years ago | (#28862391)

Copying zone files over ssh means you then have to rndc reload/reconfig every time you change a single A record.

With a "normal" hidden master + slaves setup, at least you can send Notifies which will cause the slaves to query the master and update the zone without a reload. Also, this is the only sane way to provide secondary DNS for a trusted third party.

If you have a lot of zones, it can take a while to reload bind. If you only have a handful of zones, and you don't do secondary DNS, I'm sure reloading is quick.

Re:Only effective against MASTERS... (1)

kju (327) | more than 5 years ago | (#28862569)

There is no need to reload all zones. You can easily detect which zonefiles have changed since the last reload and do "rndc reload <zone>".
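A sketch of that, assuming one file per zone named <zone>.db and a marker file for the last reload:

# reload only zones whose files are newer than the marker
for f in /var/named/zones/*.db; do
    [ "$f" -nt /var/named/.last-reload ] && rndc reload "$(basename "$f" .db)"
done
touch /var/named/.last-reload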

Re:Only effective against MASTERS... (2, Informative)

totally bogus dude (1040246) | more than 5 years ago | (#28862637)

As kju responded, you can reload particular zones if you want. The logs seem to suggest that bind itself only actually reloads the zones which have changed (i.e. mtime is newer than the last time it was loaded). I only get messages that it's loading every zone if I actually restart bind (stop and start); telling it to reload, I only get messages about zones that have actually been changed.

I haven't noticed any performance hit from doing a simple reload, but I only have 120 zones.

If we were supplying secondary DNS for an (un?)trusted third party then yes I'd use bind's zone transfer mechanism. But we don't so it's not an issue - we only serve DNS for things we host/manage ourselves.

Re:Only effective against MASTERS... (2, Informative)

Fastolfe (1470) | more than 5 years ago | (#28862945)

So I'm responding not because I disagree with your conclusions, but I disagree with the logic you're using to justify them:

Because lots of people don't want intruders being able to affect the actual zone data in case an outward-facing DNS server gets compromised. ...
If having a DNS machine on the Internet that thinks it is a master really is a mistake, well then, BIND9 is a piece of shit. This is the most straightforward thing a DNS daemon should be asked to do.

You start off with a reasonable statement (that you don't generally want compromised DNS servers to allow for the modification of data), but then you say bind9 is a piece of shit because it's a best practice that the masters (which hold the data) shouldn't be exposed to the public. Which is it?

Using SSH to transfer zone data is much easier and more secure than BIND's own zone transfer mechanisms

Would you care to elaborate on that? Doesn't TSIG secure zone transfers? TSIG is just as easy to set up as SSH keys are.

(e.g., you can automate and schedule them)

How much more automated can you make automatic zone transfers? What better scheduling of zone transfers than when the zones are modified?

you don't have to worry about zone transfers through firewalls

The only thing you need to open through the firewall is TCP and UDP port 53. Most firewalls make this easy, because "Serve DNS through the firewall" is a common configuration for firewalls.

Troubleshooting all the weird crap that can happen between different DNS daemons all supposedly doing regular AXFRs is a real pain in the ass. SSH makes life easier.

SSH makes life easier for someone that understands SSH, and does not understand DNS or firewalls.

That being said, there are valid reasons you might not prefer to run a DNS master as the source for your slaves/shadow masters, and SSH might even be a good way to push your zone files out to those machines, but you have not provided any of those reasons.
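For reference, a TSIG-signed transfer setup is only a few lines on each side (the key name, secret, and addresses below are made up):

key "xfer-key" {
    algorithm hmac-md5;
    secret "c2VjcmV0";
};

// on the master: only hand the zone to holders of the key
zone "example.com" {
    type master;
    file "master/example.com.db";
    allow-transfer { key xfer-key; };
};

// on the slaves: sign traffic to the master with the same key
server 192.0.2.53 {
    keys { xfer-key; };
};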

Re:Only effective against MASTERS... (1)

totally bogus dude (1040246) | more than 5 years ago | (#28862305)

I do it that way mostly because I didn't previously consider "type master" to be a potential vulnerability (they don't have dynamic DNS or anything fancy enabled). Maybe it is time I looked into djbdns, now that it's no longer a pain in the ass to install.

As for not using the built-in zone transfer method, that's partly because I don't particularly like it, but mostly because I don't see any reason to allow access to our internal hosts from our DMZ unless absolutely necessary -- and this is not a case where it's "absolutely necessary". My own sync mechanism ensures that all transfers are initiated from the internal host rather than from an untrusted public facing server, and the content DNS servers are always up to date.

Having a play now, it seems pretty feasible to configure it as a slave but not use bind's zone transfer mechanism, using 127.0.0.1 as the master. The only issue is almost all my domains were immediately considered expired since the zones are only updated when they're actually changed. I can sort of work around that by setting the expires time really high, but it appears to now be used as the time to cache NXDOMAIN results which could have some unpleasant side effects. It seems touching the zone file solves that... so maybe I can schedule a job to touch them and reload bind each day?

I guess it's doable, but it seems like a lot of hoops just to avoid the software's built-in stupidity. Maybe it really is time to switch to something else. Thanks for the advice.

Re:Only effective against MASTERS... (1)

Fastolfe (1470) | more than 5 years ago | (#28862981)

DNS queries are not encrypted, so if you believe the contents of your DNS zones should be secret, you'd better hope nobody queries them. You may be interested in TSIG, which can authenticate your secondaries to your master. If you'd prefer to store and manage your zone files "offline", pushing them out to one or more masters through SSH or something might be the right thing to do, but if you already have an internal master, and need to update some public-facing slaves/shadow masters, there's no reason to re-invent DNS zone transfers.

Upgrade the damn thing! (0, Flamebait)

mongrol (200050) | more than 5 years ago | (#28861663)

Honestly, why they insist on running such an important piece of backbone infrastructure on a no-longer-supported Microsoft operating system is beyond me.

For goodness sake upgrade.... (4, Funny)

syousef (465911) | more than 5 years ago | (#28861669)

...to Windows! DOS is just so 80's and 90's it's not funny.

(Suggested mod: +1 funny)

Re:For goodness sake upgrade.... (1)

Anonymous CowHardon (1605679) | more than 5 years ago | (#28861819)

Si, creo que tres o cuatro seria mucho mas moderno.

Re:For goodness sake upgrade.... (1)

syousef (465911) | more than 5 years ago | (#28861857)

Si, creo que tres o cuatro seria mucho mas moderno.

I' m apesadumbrado, no hablo español (solamente me utilice Babelfish)

Re:For goodness sake upgrade.... (2, Funny)

Sicarul (1440309) | more than 5 years ago | (#28861905)

Hahaha, automatically translated Spanish is so funny (Spanish is my mother tongue). Though I don't know what he meant; he said "Yes, I think three or four would be much more modern"... I don't see how it applies to the previous post... three or four Windows? O.o

Re:For goodness sake upgrade.... (1)

Anonymous CowHardon (1605679) | more than 5 years ago | (#28861949)

I was referring to DOS. If you don't get it, ask your mother.

suggested mod -1 historical (0)

Anonymous Coward | more than 5 years ago | (#28861983)

+1 hysterical

djb (4, Funny)

dickens (31040) | more than 5 years ago | (#28861721)

Somewhere I think djb [cr.yp.to] is managing to both smile and raise his eyebrows simultaneously.

Re:djb (1)

siddesu (698447) | more than 5 years ago | (#28861773)

came for the djb mention, leaving satisfied.

/ yes, I am.

Re:djb (1)

rs79 (71822) | more than 5 years ago | (#28861845)

Praise be to Dan and may peace be upon him.

Re:djb (0, Flamebait)

DNS-and-BIND (461968) | more than 5 years ago | (#28862219)

Uh, actually, having an acquaintance with the man: he is probably slobbering, shouting obscenities at rival Open Source teams, having hurtful paranoid fantasies about how the NTPD team is out to get him, and considering how hateful his next rant against people who oppose him should be.

Maybe this is inaccurate - let's ask the New York Times for a more nuanced profile.

Re:djb (1, Informative)

Anonymous Coward | more than 5 years ago | (#28862677)

None of that changes the fact that his software is several orders of magnitude more secure than the competition.

Him being an asshole doesn't change any of that and the constant harping on about it smacks of resentment and an inferiority complex.

LDAP based Zone updates (1)

Zombie Ryushu (803103) | more than 5 years ago | (#28861795)

This is a reason why I want to be able to do LDAP based zone updates.

Re:LDAP based Zone updates (1)

Olmy's Jart (156233) | more than 5 years ago | (#28861815)

How would that help with this? You don't even need dynamic updates enabled for this to be exploited.

Servers behind Firewalls (2, Insightful)

Bilbo (7015) | more than 5 years ago | (#28861831)

It's unlikely that, if you're running a DNS server inside of your private network, someone on the outside is going to be able to hit it. But then, like all other vulnerabilities, you combine this one with a couple of other attacks (such as a non-privileged login), and all of the sudden you've got something really dangerous. :-(

Re:Servers behind Firewalls (2, Insightful)

Olmy's Jart (156233) | more than 5 years ago | (#28861923)

A server behind a firewall does not imply a server on a private network. You can have firewalls in front of a DMZ on a public address providing services. Firewalls are used for much more than merely "private networks". Those are two orthogonal issues.

OTOH... A master on a private network providing zone feeds to slaves on various other networks (firewalled or not) on public addresses would be a very good idea.

Re:Servers behind Firewalls (1)

Antique Geekmeister (740220) | more than 5 years ago | (#28862213)

Please remember that most "private" networks aren't. They have laptop or VPN access to potentially compromised hosts, which may insert attacks from behind your typical firewalls. I've had considerable difficulty explaining this to management who have, effectively, been lied to for years by their own staff who refuse to accept responsibility for the existing insecure mess, and who are uninterested in the unglamorous and unpopular work of fixing it.

Re:Servers behind Firewalls (0)

Anonymous Coward | more than 5 years ago | (#28862155)

Uh, and for some reason you're not concerned about attacks from the private network? I hope you don't actually administer systems for a living.

Okay, I read the ISC alert. (1, Troll)

mmell (832646) | more than 5 years ago | (#28861951)

They're right. This is a major exploit, especially in view of the fundamental nature of name services to the internet. With repeated application (or by combining with DDoS techniques) I could see holding an entire domain down for an extended period of time. Now, then . . .

Only a fool would configure public-facing DNS servers as masters, although I've seen it done. Only the king of the land of fools would put his domain's real DNS master on a public-facing network. Thus, only domains administered by fools should be directly affected. Darwin for teh win!

Re:Okay, I read the ISC alert. (1)

Tetch (534754) | more than 5 years ago | (#28862051)

> Only a fool would configure public-facing DNS servers as masters

While I must agree with your basic assertion here [if not BIND's :-)], something that is often disregarded by non-security folks is that security threats can arise from within the organisation ...

It only takes one malicious employee to bring in an attack tool from outside - I haven't seen any exploit PoC code for this, but such a tool might consist of 100 lines of C and a C compiler.

Re:Okay, I read the ISC alert. (1)

mmell (832646) | more than 5 years ago | (#28862165)

It's the same old story - the only truly secure system is disconnected from the network, powered down and disassembled - and even then, I wouldn't bet my life on it being absolutely secure!

No alerts from normal channels? (0)

Anonymous Coward | more than 5 years ago | (#28861985)

I have not received any alerts from the normal channels via email, such as US-CERT, SANS, etc., but I clearly see they have the alert posted. Eight hours in and IBM/ISS does not have a block signature we can deploy.
I noticed the brief post to NANOG, did my research, and deployed.
The update to the ports tree hit shortly after the ISC update, so there must be some chatter out there.

We have updated our DNS servers; do you think we can expect another upgrade with a better fix, like the last round of updates?

FYI: we have not seen an exploit attempt yet; I expect that to have changed by morning.

OMG... (5, Interesting)

Garion911 (10618) | more than 5 years ago | (#28862119)

I reported a bug *very* similar to this back in Oct, and only now it's coming to light? WTF? I submitted it back in January and it was rejected. Ah well. Here's my page on it: http://garion.tzo.com/resume/page2/bind.html [tzo.com]

Average User (0)

zonker (1158) | more than 5 years ago | (#28862177)

Does your average user have anything to worry about here? Or is this really only a concern for businesses that run their own DNS servers?

iptables to the rescue (5, Informative)

kju (327) | more than 5 years ago | (#28862459)

For a quick "fix":

iptables -A INPUT -p udp --dport 53 -j DROP -m u32 --u32 '30>>27&0xF=5'

Will block (all) dnsupdate requests.
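# (How the match works, if I'm reading the u32 expression right: with a standard
# 20-byte IP header, bytes 30-33 of the packet hold the DNS flags word, ">>27 & 0xF"
# extracts the 4-bit opcode, and opcode 5 is UPDATE. So this drops every dynamic
# update over udp/53 - legitimate ones included - and an update sent over TCP
# would still get through.)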

Will CentOS 4 be updated? (1)

inject_hotmail.com (843637) | more than 5 years ago | (#28862553)

Does anyone know if CentOS 4 will have an update for BIND to ver 9.4.3-P3, 9.5.1-P3 or 9.6.1-P1?