DNS-based Website Failover Solutions?
Chase asks: "I run a couple of websites (including some for my work), and I'd like to have a backup web server that people would hit when my primary server goes down. The primary host is on my company's T1 line, and although I've had the server itself die once, the most common reason for my sites to be offline is that the T1 goes down. I've looked at the High-Availability Linux Project, but almost everything there does failover via IP takeover, which isn't an option when the network link dies and the backup server is on a different network. ZoneEdit seems to offer what I'm looking for, but I want a do-it-yourself solution. The only software I've found is Eddie, and it seems to have stopped development around 2000. I know DNS-based failover doesn't give 100% uptime, but with a low cache time and decent monitoring it seems like the best solution for keeping the backup server at a different location and on a different network. Anyone know of a good solution? (Using Linux and/or Solaris hosts.)"
Dyndns (Score:2, Interesting)
Re:Dyndns (Score:2, Informative)
They will let you configure custom TTL values on A (host) records. I set mine to 5 minutes and it works just fine.
There are some automated engines out there which will update the dyndns
Re:Dyndns (Score:1)
Depends whether you want to pay for it . . . (Score:5, Informative)
However, if you do actually have a budget to spend, have a look at the 3DNS product from F5 Networks [f5.com]. It does the failover you describe, and although it works better when interacting with F5's server load-balancing product, it can still monitor standard web servers and react when they become unavailable.
Re:Depends whether you want to pay for it . . . (Score:1)
uhhhh (Score:3, Informative)
Re:uhhhh (Score:2)
Seriously, our main reason to go with a T1 instead of business DSL is that a T1 comes with guaranteed QoS. Our T1 line once became slow, and they had a tech come over at 4am on a Sunday to fix it. And he was *really* good. (Sprint, in case you're wondering.)
Of course, you can never completely avoid backhoes.
Re:uhhhh (Score:3, Informative)
Backhoes are easy to fix; I remember when I worked at Mindspring (pre-Eart
Re:uhhhh (Score:2)
This is with Covad (resold by uunet) and with Rhythms (After they were bought out by uunet).
At the same time, lightning-caused damage and power outages have caused several week-long outages... but when nobody in the neighborhood has electricity for a week it's hard to complain about your
A few ways.. (Score:5, Informative)
2. You will need a second line. Mandatory. If you really want insane uptime, you'll need dynamic routing à la BGP with both ISPs. If you don't need that much, you could work with an automated probe-and-dnsupdate script that runs outside your network and switches the primary DNS record to and from the backup IP address on the isolated network.
3. Have an equalized (round-robin) DNS entry for both IP addresses. Clients still have a 50% chance of being handed the dead address, but it's better than nothing.
4. Tell the site visitors to connect to www1.mysite.com if they're having trouble reaching your site, and have www1 point to your backup IP. Make sure your DNS servers are network-redundant as well, or the whole exercise is pretty pointless.
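For option 2, a minimal probe-and-switch sketch in Python (the port and addresses are placeholders, and the actual DNS update step is left out):

```python
import socket

def tcp_alive(host, port=80, timeout=5.0):
    """True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def address_to_publish(primary_ip, backup_ip, port=80):
    """Return the address the site's A record should point at right now."""
    return primary_ip if tcp_alive(primary_ip, port) else backup_ip
```

A cron job running outside your own network would call this periodically and, on a state change, push the chosen address to your DNS server (e.g. via a dynamic update).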
Re:A few ways.. (Score:1)
Why not write a little redirect PHP script that is hosted somewhere with mad uptime? That script would ping both hosts and direct the user to the one which responds quickest.
of course if the php script machine went down you'd be toast but....
no idea if this works in large volumes, but we use something similar for client side redirects...
Re:A few ways.. (Score:1)
More to the point, have the browser 'stick' to the server it initially connects to. In other words, have the www1.mydomain.com server's content contain references only to www1.mydomain.com, and not www.mydomain.com (and similarly for the content on www2). Otherwise 50% of all links/IMG tags and so on will fail, rather than just 50% of all initial connections.
You could always use IPv4 Anycasting. (Score:3, Informative)
Re:You could always use IPv4 Anycasting. (Score:2)
Re:You could always use IPv4 Anycasting. (Score:2)
Linux server hacks and the slashdot-effect... (Score:2, Interesting)
Don't know if it works for your setup.
My favorite quote:
If you serve a particularly popular site, you will eventually find the wall at which your server simply can't serve any more requests. In the web server world, this is called the Slashdot effect, and it isn't a pretty site (er, sight)
RFC 2136 + Net::DNS + your monitoring software (Score:4, Informative)
First, you need to have a monitoring system on the Internet somewhere, not through your T1 because if that goes down it won't be able to update your DNS. You have that already, I'm sure, to test your web site accessibility from the Internet. Of course, at least one of your name servers must be accessible when the T1 goes down too, so that will have to be somewhere (other than on your T1) on the Internet as well.
On this name server enable dynamic updates. Modify your monitor system that checks availability of your site to use Net::DNS to update the IP address of your web server when the monitor fails.
Going all open source, I'd use Net::DNS and nagios for the monitoring software, bind for the name server (which supports dynamic updates), with Linux as the OS.
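Net::DNS can send the RFC 2136 update directly from Perl. As a language-neutral sketch, here is a hypothetical Python helper that builds the equivalent batch of commands for BIND's companion tool nsupdate(8) (the server name, zone, and addresses are made up):

```python
def nsupdate_batch(zone, name, new_ip, server, ttl=300):
    """Build command text for nsupdate(8) that repoints an A record."""
    lines = [
        f"server {server}",                     # master server receiving the update
        f"zone {zone}",
        f"update delete {name} A",              # drop the old address(es)
        f"update add {name} {ttl} A {new_ip}",  # publish the failover address
        "send",
    ]
    return "\n".join(lines) + "\n"
```

The monitor would pipe this text into nsupdate (ideally authenticated with a TSIG key via -k) whenever the health check flips.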
Re:RFC 2136 + Net::DNS + your monitoring software (Score:4, Informative)
with Linux as the OS
Kernel! And anyway, does the fact you're using GNU/Linux or *BSD actually make a difference to this?
Re:RFC 2136 + Net::DNS + your monitoring software (Score:2)
Re:RFC 2136 + Net::DNS + your monitoring software (Score:3, Informative)
The key is that I use tcpclient from DJB's ucspi-tcp package:
http://cr.yp.to/ucspi-tcp.html
Don't hurt yourself with BIND, either. Parsing that file is going to hurt your brain. I use grep -v to manage my data file for tinydns:
http://cr.yp.to/djbdns.html
Maybe I'll get around to publishing my work. A brief synopsis:
I do a TCP connection to port 80 on my webservers with a 5-second timeout. If the connection fails, it pulls all IPs associated with that server out of my
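The poster's grep -v trick on a tinydns data file could be done the same way in a few lines of Python (the record below is made up for illustration; this is a sketch of the idea, not the poster's actual script):

```python
def drop_dead_ip(data_text, dead_ip):
    """Filter tinydns 'data' lines that point at a failed address --
    equivalent to: grep -v ":$DEAD_IP:" data."""
    return "\n".join(
        line for line in data_text.splitlines()
        if f":{dead_ip}:" not in line
    )
```

After rewriting the data file you'd run tinydns-data to rebuild data.cdb, which tinydns then serves atomically.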
Re:RFC 2136 + Net::DNS + your monitoring software (Score:3, Informative)
I'm looking for something similar... (Score:2)
If I had multiple servers, could I keep them in sync with rsync? Or is there a better way?
Supersparrow (Score:2)
It depends.... (Score:3, Insightful)
Then you need to look at the services you're offering from your website: is it all static, session-based, or what?
Combine the two to figure out how much your downtime is going to actually cost you. For example, if my personal site, which is static, is down for 5 hours the only person who is going to really care is me. And I don't pay myself much.
Flipside, on an ecommerce site with shopping cart, that 5 minutes of downtime could cost a lot of lost sales.
In other words, your redundancy plan should match how much you think you'll lose if Bad Things Happen.
Now, you're on a T1 with some personal stuff, let's assume 5 minutes is fine, money lost is minimal, but any more time will be irritating. Your content is static. Here's a cheap DIY solution and yes it's DNS based.
Set up identical webservers on separate networks. Have those servers also be the nameservers for the website in question. Configure each webserver to answer an A query only with its own address. The TTL for the A record needs to be low (5-10 minutes). Now, if one of the servers/networks goes down, clients can only resolve DNS by reaching a live server; if a server is down they can't query it, so they'll hit the other server.
This method has some downsides: as mentioned, bandwidth usage will be higher because more DNS queries will be made, and session-based applications won't work since there's no guarantee which server any given request will hit.
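A sketch of what the zone on one of the servers might look like under this scheme (names, addresses, and SOA values are hypothetical; the copy on www2 would carry 203.0.113.1 in the last line instead):

```
$TTL 300
@       IN SOA  www1.example.com. hostmaster.example.com. (
                2003100101 ; serial
                3600       ; refresh
                900        ; retry
                604800     ; expire
                300 )      ; negative-answer TTL
@       IN NS   www1.example.com.
@       IN NS   www2.example.com.
www1    IN A    192.0.2.1
www2    IN A    203.0.113.1
www     IN A    192.0.2.1   ; each server answers www with its own address
```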
see p2pweb.net (Score:3, Interesting)
The site is distributed over 4 web servers: 3 on ADSL lines, one on SourceForge. I use 3 independent DNS servers to announce the web site. On each DNS server I also run Nagios to monitor each web server. When one of the web servers goes down (or comes back up), a special handler (in Perl) is called by Nagios and dynamically updates the DNS entry.
See global load balancing [p2pweb.net] for more details and code examples (in French only, but I am working on an English translation).
I set the DNS TTL to 300 seconds, and Nagios can detect a state change in 2 or 3 minutes, so global failover takes less than 10 minutes (detection time plus the TTL that clients may still be caching).
I have had the system running for some months, and it works very well.
It's a kind of "poor man's" Akamai.
We tried it, and it didn't work. (Score:1, Informative)
Multiple Master Name Servers (Score:4, Insightful)
Colocation facility 1 machine gets named "DNS1.domain.com" and is a reverse proxy to your real site. Colocation facility 2 machine gets named "DNS2.domain.com" and is also a reverse proxy to your real site. Add cache content sharing between these two servers for extra availability.
You will also be adding DNS servers to each one of those colocated servers. They run as masters (not slaves). The contents of the zones will make each server the single point of contact for your content.
With this setup, the following happens when users request your content:
Browsers requests DNS lookup.
Client's name server queries all the DNS servers listed for the domain; the first response wins.
Browser contacts your colocation server for content.
Colocation server checks its cache of your site.
If the content does not exist locally, it will ask the cache partner for it, and only then query the real site.
Real site serves content to the proxy server at a much reduced rate.
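The lookup order above can be modeled in a few lines of Python (a toy sketch, not a real proxy; the partner/origin wiring is invented for illustration):

```python
class CachingProxy:
    """Toy model of the cache -> cache partner -> origin lookup order."""

    def __init__(self, origin, partner=None):
        self.cache = {}
        self.origin = origin    # callable: path -> content (the real site)
        self.partner = partner  # the other colocated proxy, if any

    def get(self, path):
        if path in self.cache:                      # 1. local cache
            return self.cache[path]
        if self.partner and path in self.partner.cache:
            body = self.partner.cache[path]         # 2. cache partner
        else:
            body = self.origin(path)                # 3. only now hit the origin
        self.cache[path] = body
        return body
```

With the two proxies sharing cache contents, the origin behind the T1 sees each URL fetched roughly once, which is the "much reduced rate" above.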
right way, but expensive (Score:2)
But, as others have mentioned, if you already have a T1 it shouldn't be down much; if it is, you're better off changing providers. Setting your DNS TTL low is a hack that will consume quite a bit of bandwidth.
I've done it (Score:3, Interesting)
Basics are:
(1) You need a heartbeat to confirm the master machine is running.
(2) You write a simple script using dnsupdate(8) [sourceforge.net] that removes your master's record and inserts the backup's.
(3) You look up the special magic to tell DNS caches on other machines to flush.
Don't use DNS failover. (Score:4, Informative)
Then again, if it doesn't matter to you, don't worry about it. Just do RR-DNS and manually cut out the failed IP. "Most" people will get the still-working servers.
djbdns (Score:1)
Load Balance your DNS servers! (Score:2)
The DNS server would fail, and because of an unpublished bug in Windows 2000 the secondary DNS server assigned to the NIC wouldn't be used, so lookups would fail in large numbers when the primary server went down.
Load Balancing Multiple Unix Based DNS servers over UDP did the trick!
Dolemite
Build it redundant to start with (Score:2)
This complicates the back-end if you have a database driven site, but you were going to have to deal with that anyway.
The "quick and dirty" way to do this is round-robin DNS: publish A records for your usual name "www.whatever.com" pointing at the addresses of "www1.whatever.com" and "www2.whatever.com" (a name can't carry multiple CNAME records, so use A records).
Keep your TTL/update times low and if you know ww
No-IP.com's monitoring service (Score:1)
For the price it's not bad (yearly subscription). Check it out here: http://www.no-ip.com/services.php/page/monitor_advanced
It isn't DIY, but I couldn't find anything that could easily achieve this with on
Wicked-ass DNS!! (Score:1)
If you've got the budget, then you should check out the Adonis DNS server from BlueCat Networks. The Adonis is hands-down the best DNS server on the planet. It offers high availability, redundancy, high-security data transfers, etc. It has a military-style flash-disk option so that there are no moving parts to fail (especially hard drives these days), etc. Kick-ass BIND support!!
Disclaimer: I used to work there and parted ways rather involuntarily. However, the Adonis DNS is one mean-ass, rock-soli
I'll write failover code for you (Score:1)