
Best Solution For HA and Network Load Balancing?

kdawson posted more than 5 years ago | from the cluster-wisdom dept.

Supercomputing

supaneko writes "I am working with a non-profit that will eventually host a massive online self-help archive and community (using FTP and HTTP services). We are expecting 1,000+ unique visitors / day. I know that having only one server to serve this number of people is not a great idea, so I began to look into clusters. After a bit of reading I determined that I am looking for high availability, in case of hardware fault, and network load balancing, which will allow the load to be shared among the two to six servers that we hope to purchase. What I have not been able to determine is the 'perfect' solution that would offer efficiency, ease-of-use, simple maintenance, enjoyable performance, and a notably better experience when compared to other setups. Reading about Windows 2003 Clustering makes the whole process sound easy, while Linux and FreeBSD just seem overly complicated. But is this truly the case? What have you all done for clustering solutions that worked out well? What key features should I be aware of for a successful cluster setup (hubs, wiring, hardware, software, same servers across the board, etc.)?"


1000+ a day isn't very much (5, Insightful)

onion2k (203094) | more than 5 years ago | (#27038167)

1000+ unique visitors is nothing. Even if they all hit the site at lunchtime (1 hour window), and look at 30 pages each (very high estimate for a normal site) that's only 8 requests a second. That isn't a lot. A single server could cope easily, especially if it's mostly static content. As an example, a forum I run gets a sustained 1000+ users an hour and runs fine on one server.
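The back-of-envelope math above can be checked directly (a sketch; the traffic figures are the commenter's own assumptions):

```python
# Rough peak-load estimate from the figures above:
# 1,000 visitors all arriving in a 1-hour window, 30 pages each.
visitors = 1000
pages_per_visitor = 30
window_seconds = 3600  # one lunch-hour window

requests_per_second = visitors * pages_per_visitor / window_seconds
print(round(requests_per_second, 1))  # prints: 8.3
```

Roughly 8 requests per second, as stated, and that is already a pessimistic everyone-at-lunchtime scenario.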

As for "high availability", that depends on your definition of "high". If the site being down for a morning is a big problem then you'll need a redundant failover server. If it being down for 15 minutes is a problem then you'll need a couple of them. You won't need a load balancer for that because the redundant servers will be sitting there doing nothing most of the time (hopefully). You'll need something that detects the primary server is offline and switches to the backup automatically. You might also want to have a separate database server that mirrors the primary DB if you're storing a lot of user content, plus a backup for it (though the backup DB server could always be the same physical machine as one of the backup webservers).

Whoever told you that you'll need as many as 6 servers is just plain wrong. That would be a waste of money. Either that or you're just seeing this as an opportunity to buy lots of servers to play with, in which case buy whatever your budget will allow! :)

1000+ a day is trivial have you thought of amazon? (5, Insightful)

MosesJones (55544) | more than 5 years ago | (#27038259)

Let's be more blunt. Depending on what you are doing, and whether you want to worry about failover, 1000 a day is bugger all. A simple setup of Apache and Tomcat (if using Java) running round-robin load balancing will give you pretty much what you need.
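A round-robin setup along those lines can be sketched with Apache's mod_proxy_balancer fronting two app servers (the module names are real; the addresses and ports are invented for illustration):

```apache
# Hypothetical httpd.conf fragment: round-robin over two backends.
# Requires mod_proxy, mod_proxy_http, mod_proxy_balancer,
# and mod_lbmethod_byrequests to be loaded.
<Proxy "balancer://apppool">
    BalancerMember "http://10.0.0.11:8080"
    BalancerMember "http://10.0.0.12:8080"
    ProxySet lbmethod=byrequests
</Proxy>
ProxyPass        "/" "balancer://apppool/"
ProxyPassReverse "/" "balancer://apppool/"
```

If a member stops responding, mod_proxy marks it in error state and routes requests to the remaining member, which covers basic failover as well.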

If, however, you really are worried about scaling up and down, then have a look at Amazon Web Services, as that will probably be more cost-effective for coping with a peak IF it occurs, rather than buying 6 servers to do bugger all most of the time.

2 boxes for hardware failover will do you fine. If you are worried about HA, then it's the COST of downtime that you are worried about (i.e. being down for an hour exceeds $1000 in lost revenue), and that cost is what justifies the solution. Don't just drive availability to five nines because you feel it's cool; do it because the business requires it.


Re:1000+ a day is trivial have you thought of amaz (5, Informative)

rufus t firefly (35399) | more than 5 years ago | (#27039043)

There are a number of nice open-source load balancers out there. I'm partial to HAproxy, but you could try:

HAproxy (which is the one I use) has the ability to define "backup" servers which can be used in the event of a complete failure of all servers in the pool, even if there is only one server in the main pool. If you're trying to do this on the cheap, that may help. It also has embedded builds for things like the NSLU2, so it may be easy to run on an embedded device you already have.
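That backup-server behaviour can be sketched in a minimal haproxy.cfg (the server names and addresses here are invented):

```
# Hypothetical haproxy.cfg fragment: one main server, one standby.
# The server marked 'backup' only receives traffic when every
# non-backup server in the pool has failed its health checks.
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend www
    bind *:80
    default_backend pool

backend pool
    balance roundrobin
    option httpchk GET /
    server web1  10.0.0.11:80 check
    server spare 10.0.0.12:80 check backup
```

With only one server in the main pool, this gives you exactly the cheap primary-plus-standby arrangement described above.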

Re:1000+ a day isn't very much (4, Informative)

drsmithy (35869) | more than 5 years ago | (#27038279)

You'll need something that detects the primary server is offline and switches to the backup automatically. You might also want to have a separate database server that mirrors the primary DB if you're storing a lot of user content, plus a backup for it (though the backup DB server could always be the same physical machine as one of the backup webservers).

On this note, if you're comfortable (and your application is compatible) with Linux+Apache, then heartbeat and DRBD will do this and are relatively simple to get up and running. Just avoid trying to use the heartbeat v2-style config (for simplicity), make sure both the database and Apache are controlled by heartbeat, and don't forget to put your DB on the DRBD-replicated disk (vastly simpler than trying to deal with DB-level replication, and more than adequate for such a low load).
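A heartbeat v1-style resource definition for that arrangement might look like this (the node name, floating IP, DRBD resource name, device, and mount point are all invented placeholders):

```
# /etc/ha.d/haresources (heartbeat v1 style) -- one line, identical
# on both nodes. On failover the surviving node takes the floating IP,
# promotes the DRBD resource, mounts it, and starts the services.
node1 IPaddr::192.168.0.100 drbddisk::r0 Filesystem::/dev/drbd0::/data::ext3 mysql apache
```

Resources are started left to right and stopped right to left, so the DRBD disk is mounted before MySQL and Apache come up, matching the advice above to keep the DB on the replicated disk.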

Oh, and don't forget to keep regular backups of your DB somewhere else other than those two machines.

Re:1000+ a day isn't very much (1)

wisty (1335733) | more than 5 years ago | (#27038805)

Backup your db. Test your db backup. Get someone else to check your backup strategy. That's mission critical, and it merits repeating.

1000 users a day? Windows can start about 10 Python processes a second (and handle a bit of processing within that process), which is probably the slowest way you could possibly do it. OSX or Linux can do 10 times as much.

Re:1000+ a day isn't very much (5, Informative)

Mad Merlin (837387) | more than 5 years ago | (#27038281)

I agree that 1000 unique visitors is peanuts, but as for how to do HA, it really depends a lot on your situation. For example, the primary server for Game! started acting up about 2 weeks ago, but it mattered little as I was able to flip over to the backup server and came out with barely any downtime and no data loss. In the meantime, I was able to diagnose and fix the primary server, then point the traffic back at it. In my case, all the dynamic data is in MySQL, which is replicated to the backup server, so when I switched over I simply swapped the slave and the master and redirected traffic at the backup server. You also have to consider the code, which you presumably make semi-frequent updates to. In my case, the code is stored in SVN and updated automagically on both the master and the slave simultaneously.

Having said all that, there's more to consider than just your own hardware when it comes to HA. What happens if your network connection goes down? In most cases, there's nothing you can do about it except twiddle your thumbs while you wait on hold with customer service. Redundant Internet connections are expensive due to the fact that you basically need to be in a big (and expensive) colocation facility to get it.

Also, how easy it is to have HA depends largely on how important writes are to your database (or filesystem). Does it matter if this comment doesn't make it to the live page for a couple seconds after I hit submit? No, not really. Does it matter if I change my equipment in Game! but don't see the changes immediately? Yes, definitely. Indeed, if your content is 100% static, you can just keep a dozen complete copies and put a load balancer in front that pulls dead machines out of the loop automagically and be done with it.

Re:1000+ a day isn't very much (2, Informative)

Anonymous Coward | more than 5 years ago | (#27038345)

Definitely. I had a site that was doing ~2000+ uniques per day and used considerable bandwidth (lots of images). However, everything was heavily cached (no on-demand dynamic pages), and it was all running on an old P4 with 512MB of RAM, with fantastic response times and zero issues.

Re:1000+ a day isn't very much (0)

Anonymous Coward | more than 5 years ago | (#27038381)

I agree, HA is overkill for 1k users/day. I run a FOSS and Linux community & forum website which gets between 2.0 and 2.2 million unique page views per month (roughly 70k uniques a day). The site is served from a dual Xeon box with 4GiB ECC SDRAM and RAID10 x 250GB SATA (it costs me $350/month). The server bill is paid via adverts and donations. I do not use Apache; I use a lighttpd+fastcgi+php5+mysql5 combo.

You do not need HA; go and select load-balanced shared hosting from Yahoo! or other providers. It may cost just $20/month.

Also, do not go and post to forums such as WHT & clones; they are run and owned by webhosting companies, and they will always give advice that milks the OP.



Re:1000+ a day isn't very much (1)

Zocalo (252965) | more than 5 years ago | (#27038465)

The poster doesn't give any indication of how much traffic each of those "1,000+ visitors a day" will generate, either in terms of the number of requests or the number of bytes. Nor is any indication given as to the nature of the service, the required resilience, or the method of information exchange provided. For a simple HTML form, back-end DB based system without high uptime requirements, the required infrastructure is trivial, but if we're going to the opposite extreme and talking about five-nines uptime, extended voice conversations, or even video conferencing, with large file downloads (FTP was mentioned) as well... Admittedly, that's unlikely for a non-profit, but it's kind of hard to extrapolate anything other than generics from the information currently available.

Assuming that it wasn't butchered by the Slashdot editors, it's a very poorly thought out submission, IMHO.

Re:1000+ a day isn't very much (0)

Anonymous Coward | more than 5 years ago | (#27038497)

how much money have you got for this project friendo? If we are talking bailout money, I would recommend IBM's HACMP which means you will need IBM power5 or 6 servers midrange would do. Or..

Veritas cluster server and you can put it on anything {hopefully not windows} but if you really want something with fault tolerance you need SUN or IBM equipment.

Enjoy... I hope you have an ulcer doing it.

Re:1000+ a day isn't very much (0)

Anonymous Coward | more than 5 years ago | (#27038593)

I've got a website which was once served without problems to 30000 visitors in one day from a shared hosting account that cost $5 a month. Unless you're doing some heavy work on the server, 1000 visitors per day is child's play. If you're doing heavy work on the server, are you sure that your application is ready to be load-balanced? (BTW, if somebody needs a ballpark figure for a Digg front page appearance, that's where most of those 30000 visitors came from.)

Re:1000+ a day isn't very much (5, Informative)

Xest (935314) | more than 5 years ago | (#27038649)

I was thinking along the same lines.

But to the person asking the question, if you want a full answer then you need to get your site built and make use of stress testing tools such as JMeter for Apache or Microsoft's WAS tool for IIS.

It's not something anyone here can give you a definite answer for without knowing how well your site is implemented and what it actually does.

Look into Transaction Cost Analysis, that's ultimately what you need here, a good start is this article: []

or this one: []

Don't worry that these are MS articles on MS technologies; they both still cover ideas that are applicable elsewhere.

Even though no one here can give you a full answer, for the above-mentioned reasons, we can at least give you our best guesses, and this is where I think the parent poster is spot on: 6 servers is absolute overkill for this kind of load requirement, and unless your application does some pretty intensive processing I see little reason why a single server couldn't do the trick, or at least a web/application server and a database server at most.

For ensuring high availability you may indeed need more servers, of course, and since you mention a requirement for FTP, is bandwidth likely to be an issue?

The fact that you're only expecting 1000 a day suggests you're not running the biggest of operations, and although it's nice to do these things in-house, it may just be worth using a hosting provider with an acceptable SLA; at the end of the day they have more experience, more hardware, more bandwidth, and can probably do things a fair bit cheaper than you can. Do you have a generator to allow continued provision of the service should your power fail for an extended period, for example? If you receive an unexpected spike in traffic or a DDoS, do you have the facility to cope with and resolve that like a big hosting company could?

There are many things I wouldn't ever use an external hosting provider for, but this doesn't sound like one of them.

Re:1000+ a day isn't very much (1)

lancejjj (924211) | more than 5 years ago | (#27039099)

You have a good rule-of-thumb analysis there. I like it, and it should apply to most normal sites.


Is It Mission Critical? (4, Insightful)

s7uar7 (746699) | more than 5 years ago | (#27038181)

If the site goes down do you lose truck loads of money or does anyone die? Load balancing and HA sounds a little overboard for a site with a thousand visitors a day. A hundred thousand and you can probably justify the expense. I would probably just be looking at a hosted dedicated server somewhere for now.

Re:Is It Mission Critical? (2, Insightful)

cerberusss (660701) | more than 5 years ago | (#27038217)

Well, a dedicated server requires maintenance. All my customers come to me saying that they will eventually get 100,000 visitors per day. I work out the monthly cost for them: $100 for a decent dedicated server, plus $250 for a sysadmin, etc.

Eventually they all settle for shared hosting except when privacy is an issue.

Re:Is It Mission Critical? (5, Interesting)

Errtu76 (776778) | more than 5 years ago | (#27038369)

It's not overboard. And even with a hosting provider you're still exposed to hardware problems. What you can do to get what you want is:

- buy 2 cheap servers with lots of RAM
- set them up as Xen platforms
- create 2 virtuals for the load balancers
- set up LVS (heartbeat + ldirectord) on each virtual
- create 4 webserver virtuals, 2 on each Xen host
- configure your load balancers to distribute load over all webserver virtuals

And you're done. Oh, make sure to disable tcp_checksum_offloading on your webservers, else LVS won't work that well (read: not at all).
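The offloading fix mentioned above amounts to turning off TX checksum offloading inside the web-server VMs (a sketch; `eth0` is an assumed interface name, and whether you need this at all depends on your Xen/LVS layout, as noted in the reply below):

```shell
# Disable TX checksum offloading on the guest's virtual NIC so that
# packets forwarded by LVS inside Xen carry valid checksums.
# Adjust eth0 to your actual interface; make it persistent via your
# distro's network scripts, since ethtool settings are lost on reboot.
ethtool -K eth0 tx off
```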

Re:Is It Mission Critical? (5, Informative)

drsmithy (35869) | more than 5 years ago | (#27038641)

And you're done. Oh, make sure to disable tcp_checksum_offloading on your webservers, else LVS won't work that well (read: not at all).

Just a heads-up for those who (like me) read this and thought: "WTF ? LVS works fine with TOE", it is a problem specific to running LVS in Xen VMs where the directors and realservers share the same Xen host. Link. []

Re:Is It Mission Critical? (4, Informative)

alta (1263) | more than 5 years ago | (#27038829)

If I had mod points, I'd give. This is the same thing we did, just different software.
-Get 2 ISPs; I suggest different transports. We have one on fiber, the other on a T1. There's no point in getting 2 T1s from different companies if a bulldozer can cut them both at once.
-Two Dell 1950s
-Set each up with VMware Server
-Created 2 databases, replicating to each other
-Created 2 web servers, each pointing at the database on the same machine
-Installed two copies of the Hercules load balancer, vrrp + pen
-Set up failover DNS with 5-minute expiration.

Now, you may say, why the load balancers if you're load balancing with DNS? Because if I have a hardware/power failure, the load balancer keeps serving my customers during the 5 minutes it takes for the DNS to expire, instead of incurring downtime. It also gives me the ability to take servers offline one at a time for maintenance/upgrades, again with no downtime.
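The failover-DNS half of that setup can be sketched as a zone-file fragment (the name and addresses are placeholders; in practice a monitoring script or DNS failover service swaps the records when a site dies):

```
; Hypothetical BIND zone fragment: two A records with a 300-second
; (5-minute) TTL, so a dead address can be dropped and resolvers
; will pick up the change within the expiration window.
www   300   IN   A   203.0.113.10
www   300   IN   A   203.0.113.20
```

Two A records also give crude round-robin distribution on their own, which is why the in-front load balancers are what actually mask a failure during the TTL window.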

I have a pretty redundant setup here and the only thing I've paid for is the software.

Future plans are to move everything to Xenserver.

Re:Is It Mission Critical? (0)

Anonymous Coward | more than 5 years ago | (#27038395)

I would probably just be looking at a hosted dedicated server somewhere for now.

Yup. He should be looking at Amazon S3. 1000 visitors a day is peanuts, and whichever shyster came up with a figure of six servers for that sort of load sounds like a Dell or Microsoft sales person!

budget? (5, Insightful)

timmarhy (659436) | more than 5 years ago | (#27038189)

You can go as crazy as you like with this kind of stuff, but given you're a non-profit I'm guessing money is the greatest factor here. My recommendation would be to purchase managed hosting and NOT try running it yourself. Folks with a well-established data centre that do this stuff all day long will do it much better, quicker, and cheaper than you will be able to.

There are also more of them than you can poke a stick at, and prices are very reasonable; places like Rackspace do this kind of thing for $100/mo.

the other advantage is you don't need to pony up for the hardware.

Re:budget? (2, Insightful)

malkavian (9512) | more than 5 years ago | (#27039007)

The problem being that you're paying $100 per month in perpetuity. Sometimes you get awarded capital to spend on things in a lump sum, whereas a revenue commitment could not necessarily be made.
At the spend rate you mentioned, that's a basic server per year. Say the hosting is expected to run 5-8 years; that'll be an outlay of at least $6000-$9600+, with more to spend if you want to keep things running.
That would cover the cost of a couple of generations' worth of hardware, depending on how it was implemented.
If there's no skill around (and definitely won't be), then by all means the revenue-based datacentre rental is a great move, but if there is skill around to perform the task, then you gain far greater flexibility by DIY.

I guess a fair bit of this comes down to whether it's possible to get at least $6k+ allocated to revenue spend over the next 5 years (at today's prices), or if it has to be capital.

Pound (3, Informative)

pdbaby (609052) | more than 5 years ago | (#27038203)

At work we've had a pretty good experience with Pound: it's easy to set up, it load balances, and it will detect when one of your servers is down and stop sending traffic there. You can get hardware load balancing from people like F5 too.

If you're just starting out you'll probably want to start with software and then, if the load demands it, move to hardware.

Machine-wise, we use cheap, not overly powerful 250 GBP 1U servers with RAID1; they'll die after a few years (but servers will need to be refreshed anyway) and they provide us with lots of options. They're all plugged into 2 gigabit switches.

Re: 800 Bucks to Spend (1)

buswolley (591500) | more than 5 years ago | (#27038261)

I am a graduate student who wants a little extra computing power for scientific analysis work.

I have a small budget. 800 bucks.

I have heard of a guy building a Microwulf cluster that generated some good flops, at least at that time. Today I can build that very same cluster for about 800 dollars.

My question: is it better to go with a newer computer setup that falls within that budget, or go with the cluster? I will be doing image analysis work on functional MRI data. Thanks.

Re: 800 Bucks to Spend (1)

drsmithy (35869) | more than 5 years ago | (#27038685)

My question: is it better to go with a newer computer setup that falls within that budget, or go with the cluster? I will be doing image analysis work on functional MRI data. Thanks.

While I'm not an expert on the topic by any means, I would expect that for that sort of budget you'll get far better performance out of a single machine than any cluster you could build for the same cost.

Even if your interest is in testing how "cluster friendly" your code is (eg: for scaling considerations), you'll almost certainly still get the best performance/$ with a single quad-core machine running $CORE_COUNT VMs to "simulate" a cluster (with each VM bound to a specific CPU core).

I just can't see why you would want to venture into the cost inefficiencies of multiple machines until you _had_ to because a single machine wasn't fast enough, and you can fit a *lot* of power into a single computer these days.

Re: 800 Bucks to Spend (1)

Siffy (929793) | more than 5 years ago | (#27038771)

At that price point the real question is a basic one: do you want to build a cluster? If yes, I wouldn't build that exact setup but would probably go with Athlon X2 5050e CPUs. You can also get used 1U dual-CPU servers on eBay and similar sites almost all day long for $100-150 each. They did have a bunch on this page: [] but are currently sold out of the dirt cheap stuff. The downside of the pre-built older stuff is that it'll cost more in electricity to run.

Now, if you answered "No, I don't really just want to build a cluster for fun," then your best bet will be to just build an i7-based machine. With the cluster you'd be able to afford at most 6 nodes with 2 cores each, which will be individually slower than the i7's cores. With the i7 you'd only have 8 (logical) cores, but they'd be faster and overall draw less power (cheaper to operate) than the 12-core cluster.

If the application you're working with can truly be threaded easily enough to take advantage of an 8-12 CPU cluster, you should look into porting it to run on a GPGPU. And that's if there's not already code to do it; a lot of scientific functions are already available written in CUDA. You can get a ton of performance out of a $200 video card if the application can be parallelized.

Re: 800 Bucks to Spend (1)

lowtek77 (896266) | more than 5 years ago | (#27038827)

If you have to pay for power and/or have to deal with the environmental aspects of living/working near the multiple machines (heat, noise, etc.), then I would also suggest a single box.

HaProxy (4, Informative)

Nicolas MONNET (4727) | more than 5 years ago | (#27038315)

Haproxy [] is better than Pound, IMO. It's lightweight, but handles immense load just as well as layer 3 load balancing (LVS), with the advantages of layer 5 proxying. It uses the latest Linux APIs (epoll, vmsplice) to reduce context switching and copying to a minimum. It has a nice, concise stats module. Its logs are terse yet complete. It redirects traffic to a working server if one is down / overloaded.

Re:HaProxy (2, Informative)

Architect_sasyr (938685) | more than 5 years ago | (#27038795)

I seem to recall slashdot operating behind pound systems. It was a good enough plug for me to go and fire it up, been happy with it ever since. Not to say haproxy is better or worse, I've never used it, just another person with great results from pound.

We get upwards of 15,000 hits per hour and just use Carp and Pound to handle our redundancy (Carp captures servers down, pound handles TCP ports going missing) across two machines (both RAID5 with FA RAM). Last time I checked the load averages, the 2.2 G processors were doing ~1.28 for a highly dynamic site.

Re:Pound (1)

dotwaffle (610149) | more than 5 years ago | (#27038401)

Where do you get your 250GBP servers from? And do they have hot-swap drive bays? =)

Load balancing (1)

blake1 (1148613) | more than 5 years ago | (#27038209)

I would think that this would also largely depend upon what you are using to serve the pages people are going to be accessing. If you are using IIS as a web server (I'm assuming this is not the case) then the NLB component of Windows is already there ready to be turned on. This will provide fault-tolerance and load balancing for the front-end but if you have databases then these will also need redundancy for your service to be HA (MS have failover clusters for this purpose). I've found MS implementations of load-balancing / HA to be simple and effective if they are implemented properly.

Plan or Implementation? (5, Insightful)

Manip (656104) | more than 5 years ago | (#27038215)

Why are you purchasing six or so servers before you even have one online?

You say that you expect "1,000+ a day" visitors which frankly is nothing. A single home PC with Apache would handle that.

This entire post strikes me as either bad planning or no planning. You're flirting with vague, "out of thin air" projections that are likely impossible to make at this stage.

Have a plan in place for how you will scale your service *if* it becomes popular, or as it becomes popular, but don't go wasting the charity's money just in case your load jumps from 0 to 30,000+ in 24 hours.

Re:Plan or Implementation? (1)

fl!ptop (902193) | more than 5 years ago | (#27038753)

don't go wasting the charities money

not to nitpick, but not all non-profits are charities, and some non-profits have a lot of money to spend. case in point []

Re:Plan or Implementation? (1)

morgan_greywolf (835522) | more than 5 years ago | (#27039143)

Agreed. Actually, something a lot of people aren't mentioning is that 1,000 unique visitors is low enough to have someone else do your hosting. You might even be able to get by with shared hosting, but if you want something more reliable, virtual servers give you the reliability of a dedicated server sitting in a colo plus the room to expand later. Because you pay for only what you use, the cost is much less than running your own server, if a bit higher than shared hosting.

Do your homework on virtual hosting. Not all virtual servers are created equal, and some are significantly more expensive than others. But since lots of companies are providing this now, you'll have a wide array to choose from.

One important thing, as the parent says: know your requirements. Do not guess. Draw up a plan and get required features and service levels in writing. And always get service level agreements from the hosting company in writing and make sure they match your requirements, of course.

F5 (1, Interesting)

Anonymous Coward | more than 5 years ago | (#27038229)

Your application is very simple, and your budget probably is not too high. But for your own edification, this is F5 Networks' (formerly F5 Labs) bread and butter: application delivery. What you want is a pair of BIG-IPs running Local Traffic Manager. You should look into it, at least so you can show your boss how cheap your proposed solution is by comparison.

Re:F5 (0)

Anonymous Coward | more than 5 years ago | (#27038291)

Instead of throwing big money at pricey F5 or similar machines, get 2 dirt-cheap servers with quality Ethernet cards, add some Linux distro with Apache (with cache + proxy + load-balancer modules) and heartbeat (for failover), and have an active/active LB setup for a MUCH lower price. Been there, done that. You'll also learn much in the process, which is even more valuable.

Re:F5 (0)

Anonymous Coward | more than 5 years ago | (#27038303)

Um, no. F5s will be overkill. I don't think they are prepared to fork out 30k for an HA pair of load balancers.

To be honest I think scalability is going to be the issue here. They haven't mentioned how they are going to grow, hence a Rackspace-type provider may be the right way forward; however, we have found Rackspace to be limited. Another reason why a managed service provider will be the way forward is that TCO will probably be a major factor here. Charities don't have massive budgets for capex.

You will be OK (-1, Troll)

bogaboga (793279) | more than 5 years ago | (#27038233)

With today's server hardware and just 1000+ unique visitors, you will be fine. Just make sure you have lots of RAM, in excess of 16GB.

Re:You will be OK (4, Insightful)

Anonymous Coward | more than 5 years ago | (#27038307)

16GB? Are you mad? Anything beyond 1GB should be enough to handle 1000 unique visitors per day. If you want to virtualize the system and have separate web and database servers, 2GB should be enough already; if you want to go further and have a separate virtual mail server in there, 2GB is still sufficient and 3GB is plenty.

Re:You will be OK (2, Interesting)

Reigo Reinmets (1035336) | more than 5 years ago | (#27038321)

Depending on static/PHP/Python/WhatEverYouUse engines, I think 16GB is a bit overkill for 1000+ users per day, but it all depends on the application, of course.

Re:You will be OK (0)

Anonymous Coward | more than 5 years ago | (#27038385)

Yes, because you know exactly the memory footprint of the application running, well whichever OS, and you know it'll scale in a predictable way. Today's server hardware[tm] (which just comes in one model) doesn't really care about the CPU intensiveness of the application. As we all know, 1000+ means exactly 1001-1005, not for example 172513, and it really makes no difference whether it's evenly spread out during the day or all of them connect at 4:20:00 EST. Just buy lots of RAM, case closed.

Mod parent troll (3, Funny)

MadMidnightBomber (894759) | more than 5 years ago | (#27038577)

Obviously has shares in Kingston.

(16Gb RAM for 1k visitors? What kind of pages are you serving?)

Re:Mod parent troll (1)

Barny (103770) | more than 5 years ago | (#27039017)

Grandparent neglected to point out that his chosen setup was Windows Vista Ultimate running Windows Virtual PC and 3 copies of Vista Ultimate (2 running Apache and one as a DB server); it needs 16GB just to boot and serve 1 visitor...

Capacity planning (1)

Bozovision (107228) | more than 5 years ago | (#27038841)

Measure the memory cost of your web application. Suppose that it's PHP and a session takes 35MB, then you need 35MB for the duration of servicing the request. With 1000 visitors a day, if they all visit during lunch hour, and they are each looking at 10 pages, you'll have about 2.7 requests per second on average.

This means that on average you'll be allocating about (35MB + database overhead + Apache overhead) x 2.7 per second. If page generation lasts an astoundingly long 2 seconds, you'd have about 6 sessions stacked up before you recovered the memory used by the first session in the queue. Assuming that you need 10MB for Apache + the database, you'd need all of 270MB + OS footprint to run your server.
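The arithmetic above can be sketched directly (all figures are the commenter's stated assumptions, not measurements):

```python
import math

# Capacity estimate from the figures above.
visitors, pages_each = 1000, 10
window_s = 3600                      # everyone visits in one lunch hour
req_per_s = visitors * pages_each / window_s       # ~2.8 requests/second

session_mb = 35 + 10                 # PHP session plus Apache/DB overhead
gen_time_s = 2                       # pessimistic page-generation time
concurrent = math.ceil(req_per_s * gen_time_s)     # ~6 requests in flight
peak_mb = concurrent * session_mb                  # ~270 MB before OS footprint

print(round(req_per_s, 1), concurrent, peak_mb)    # prints: 2.8 6 270
```

Even with these deliberately pessimistic inputs, the working set stays under 300MB, which is the point being made about 16GB.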

I think we can safely say that 16GB is overkill under these circumstances.

Of course if it's lunch hour, your peak (which is the important thing) would be higher: maybe 50% people would hit in the first 15 minutes of the hour. You need to do capacity planning which is appropriate for the load and the technology you are using.

By contrast: one of my sites had 15 minutes of fame, and had 20,000 page views across about three hours. It was running as static content, from a Xen instance, with 1GB of memory, and about 25% of processor time on a dual processor 1GHz system. There wasn't even a hiccup in dealing with the load.

OpenBSD (0)

Anonymous Coward | more than 5 years ago | (#27038263)

Consider OpenBSD: CARP gives you the best clustering. Alternatively, OpenBSD with relayd makes for the best load balancer.
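For illustration, a CARP virtual IP on OpenBSD is a few lines of interface configuration; this is only a sketch (the interface name, VHID, addresses, and password are all hypothetical), so check carp(4) for the exact syntax of your release:

```
# /etc/hostname.carp0 on each box -- sketch only; em0, vhid 1 and the
# addresses are made up. The host with the higher advskew becomes the backup.
inet 192.0.2.100 255.255.255.0 192.0.2.255 vhid 1 carpdev em0 pass secretkey
```

Both hosts share the virtual address 192.0.2.100; if the master dies, the backup takes over the IP in about a second.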

Re:OpenBSD (1)

Venture37 (654305) | more than 5 years ago | (#27038331)

+1 CARP is definitely the way forward for scenarios like this.

KISS (2, Insightful)

MichaelSmith (789609) | more than 5 years ago | (#27038265)

Sit down for a bit and think about the most likely use cases for your software. To take the example of Slashdot, that might be viewing the main page or viewing an entire article. Structure your code so that these things can be done by directly sending a single file to the client. With the kernel doing most of the work you should be okay.

Sites which get slashdotted typically use a badly structured and resourced database to directly feed external queries. If you must use a database put some kind of simple proxy between it and the outside world. You could use squid for that or a simple directory of static html files.

Some information about HA (3, Informative)

modir (66559) | more than 5 years ago | (#27038269)

I want to give you some more information. Based on your visitor estimates, I think you do not have a lot of experience with this, because for this number of visitors you do not really need a cluster.

But now to the other stuff. Yes, Windows clustering is (up to Win Server 2003 [1]) a lot easier. But this is because it is not really a cluster. The only thing you can do is have the software running on one server, then stop it and start it on the new server. This is what Windows Cluster is doing for you. But you cannot have the software running on both servers at the same time.

If you really want to have a cluster then you probably need some sort of shared storage (FibreChannel, iSCSI, etc.), or you are going to use something like DRBD [2]. You will need something like this too if you want to have a real cluster on Windows.

I recommend reading some more on the Linux-HA website [3]. Then you will get a better idea of what components (shared storage, load balancer, etc.) you will need in your cluster.

If you only want high availability and not load balancing, then I recommend not using Windows Cluster. Better to set up two VMware servers with one virtual machine and copy a snapshot of the virtual machine over to the second machine every few hours.

[1] I don't know about Win Server 2008
[2] []
[3] []
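As a rough illustration of what the DRBD side of such a setup looks like, here is a minimal resource definition (the node names, disks, and addresses are hypothetical; see the DRBD user's guide for the exact syntax of your version):

```
# /etc/drbd.d/r0.res -- sketch only; node names, disks and IPs are made up.
resource r0 {
  protocol C;                  # synchronous replication: writes hit both nodes
  on node1 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.0.1:7788;
    meta-disk internal;
  }
  on node2 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}
```

The replicated block device /dev/drbd0 is then mounted on whichever node is currently primary, typically under Heartbeat/Pacemaker control.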

Re:Some information about HA (0)

Anonymous Coward | more than 5 years ago | (#27038463)

We use Heartbeat and DRBD for our Linux authentication and file servers. It's a little annoying to set up the first time (since it's got to synchronize the block devices, and if you have multiple TB you're looking at 1-3 days), but failover is near-instant.

Re:Some information about HA (2, Informative)

blake1 (1148613) | more than 5 years ago | (#27038675)

The only thing you can do is having the software running on one server, then you stop it and start it on the new server. This is what Windows Cluster is doing for you.

That's not true. For clustering of front-end services (i.e., IIS) you use NLB, which is fully configurable load balancing and fault tolerance.

Re:Some information about HA (2, Informative)

modir (66559) | more than 5 years ago | (#27038849)

True; sorry I did not write that clearly. I was only writing about the cluster software included with Windows, not about other components like NLB that are included with Windows too.

I just wanted to make clear that Microsoft Cluster Server is a lot easier to set up (which the questioner has observed correctly), but that is because you get a lot less. He would have to install and configure several other components (like NLB) to get the same as he gets with Linux-HA.

wrong on several counts... (3, Informative)

turbine216 (458014) | more than 5 years ago | (#27039081)

Windows clustering allows for Active/Active clusters, so you CAN run the same service on two cluster nodes at the same time (with the exception of Exchange).

Setting up two servers to host VMware guests and copying is not a good idea either - the HA tools for VMware are expensive, and totally unnecessary for the proposed deployment. Without these HA tools, he would have to down his primary guest every time he wanted to make a snapshot.

We're talking about a very simple deployment here - HTTP and FTP. You don't even need clustering or a dedicated load balancer - instead, try using round-robin DNS records to do some simple load balancing, and then use a shared storage area as your FTP root (could be a DFS share for Windows or an NFS mount in Linux). This would give you a solid two-server solution that works well for what you're trying to accomplish, and adding servers would be trivial (just deploy more nodes, and add DNS records to the list).
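Round-robin DNS as described above is nothing more than multiple A records for the same name; a zone-file sketch (BIND-style, with made-up addresses):

```
; Zone fragment -- illustrative addresses only.
; Keep the TTL short so a pulled record drops out of rotation quickly.
www   300   IN   A   192.0.2.10
www   300   IN   A   192.0.2.11
www   300   IN   A   192.0.2.12
```

Note that plain round-robin DNS does no health checking by itself: clients keep receiving a dead node's address until you remove its record, which is why a health-check-aware balancer becomes attractive as the node count grows.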

If it grows much larger than 2 nodes, you might consider an inexpensive load-balancer; Barracuda sells one that works well and will detect a downed node.

Clustering for this job is totally unnecessary though. You're wasting your time by looking into it.

What about Caos Linux (0)

Anonymous Coward | more than 5 years ago | (#27039115)

"The NSA-1.0 release identifies the stabilization and validation of the core operating system, fully tested on some of the world's fastest public and private systems and architectures. And now with NSA 1.0.8 you get bleeding-edge security updates, the new 2.6.28 kernel, updated packages such as OFED 1.4 and gcc-4.3.3, a streamlined Sidekick system configuration toolkit (making the installation of Caos Linux and Perceus even faster and easier), the latest Perceus 1.5 cluster management software, and Abstractual, Infiscale's cloud virtualization solution. All of these updates are already integrated in the NSA-1.0.8 ISO release of Caos Linux"

Nginx (1, Informative)

Tuqui (96668) | more than 5 years ago | (#27038285)

For load balancing and static-file HTTP serving, use Nginx; it is the fastest around. Use two or more Linux servers for your high-availability cluster, assign a virtual IP to the load balancer, and use Heartbeat to move the virtual IP in case of failure. Software cost, including the OS: zero.
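A minimal sketch of the Nginx side, assuming two hypothetical backends (addresses are illustrative; consult the nginx upstream-module documentation for the full option set):

```nginx
# nginx.conf fragment -- sketch only; backend addresses are made up.
http {
    upstream backend {
        server 10.0.0.11:80;
        server 10.0.0.12:80;
        # Requests are distributed round-robin by default; a server that
        # fails its connections is temporarily taken out of rotation.
    }
    server {
        listen 80;
        location / {
            proxy_pass http://backend;
        }
    }
}
```

Heartbeat then only has to move the virtual IP between the two boxes running this config.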

1000 a day? Oh my! (0)

Anonymous Coward | more than 5 years ago | (#27038297)

1000 visitors per *day*? Oh my! That's almost one visitor every minute! Truly, this is traffic previously unheard of.

Amazon EC2 (2, Informative)

adamchou (993073) | more than 5 years ago | (#27038299)

Amazon's servers allow you to scale vertically and horizontally. They have images that are preconfigured to do load balancing, and they have LAMP setups. Plus, the fact that it's a completely virtualized system means you rarely have to worry about hardware failures yourself. With only 1k uniques per day, they have more than enough capacity to accommodate what you need.

As for ease of use, I've never done Windows load balancing, but the Linux load balancing isn't terribly difficult to get working. Optimizing it is quite a bit more difficult, though. As with anything Linux, it's all terminal, so it's almost never as convenient as point-and-click; however, it's almost always more flexible than point-and-click.

One other thing that you need to think about, which goes hand in hand with HA systems, is monitoring. With or without Amazon, you always need to account for software failures too: Apache might hang, the database might be overloaded, etc. You'll need something like Nagios, Cacti, etc., so don't forget to account for that in your hardware costs.

Re:Amazon EC2 (1, Informative)

Anonymous Coward | more than 5 years ago | (#27038341)

Perhaps you might want to take a look at this:

It's an excerpt from a recent Undernet IRC session. So much for integrity and security on Amazon...

Only one kind of cluster for this (1)

ciaran.mchale (1018214) | more than 5 years ago | (#27038351)

Hey dude, it's just got to be a Beowulf cluster.

Preferably a russian one.

And don't forget to use low-profile car tires for extra performance.

Consider cloud? (1)

sjj698 (528987) | more than 5 years ago | (#27038365)

Have you considered any of the 'cloud' offerings? Amazon EC2 or Microsoft Azure could be an option; they will give you scalability, as I am sure that your 1000+ visitors a day is a guess. You can then bring up some of your services and grow with demand. Your six servers, clustered with a load balancer, will quickly get expensive. Give it a go :-) SJJ

1000 a day? surely you mean per second? (0)

Anonymous Coward | more than 5 years ago | (#27038383)

One dual quad-core Xeon properly configured can saturate 200Mbps and serve 500 requests per second per GB of RAM installed, easily. Most bad data centers configure their systems with only 1GB of RAM, fully aware that they can lease more systems to one client and make much more profit than by simply fine-tuning the server.

Once you take into account the hardware bottlenecks (disk arrays), cluster systems are high-latency and better suited to applications running on the server than to serving static content.

Achtung sign recommended (1)

troll8901 (1397145) | more than 5 years ago | (#27038387)

Because the cluster setup is highly complex and fragile, you should hang a sign directly above the hardware.


"This room is filled with special electronic equipment. Fingering and pressing the buttons from the computers are allowed for experts only! So all the "lefthanders" stay away and do not disturb the brainstorming happening here. Otherwise you will be out thrown and kicked elsewhere!

Also: please keep still and only watch the blinking lights in awe and astonishment."

we run a nonprofit with 100m+ visitors a day (5, Interesting)

midom (535130) | more than 5 years ago | (#27038391)

Hi! We run a non-profit website that gets 100 million visitors a day on ~350 servers. We don't even use any "clustering" technology, just replication for databases and a software (LVS) load balancer in front of both the app (PHP) servers and the squids at the edge. But oh well, you can always waste money on expensive hardware and clustering technology. And you can always check how we build things []
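An LVS director of the kind described is configured with a handful of ipvsadm commands; this is a sketch only (the virtual IP and real-server addresses are made up, and the commands need root plus the IPVS kernel modules on the director):

```
# Sketch: one virtual service on a hypothetical VIP, weighted-least-connection
# scheduling, direct routing (-g) to two hypothetical real servers.
ipvsadm -A -t 192.0.2.100:80 -s wlc
ipvsadm -a -t 192.0.2.100:80 -r 10.0.0.11:80 -g
ipvsadm -a -t 192.0.2.100:80 -r 10.0.0.12:80 -g
```

With direct routing, return traffic goes straight from the real servers to the client, so the director only handles the inbound half of each connection.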

Re:we run a nonprofit with 100m+ visitors a day (1)

ledow (319597) | more than 5 years ago | (#27038605)

Heh, so assuming things scale linearly (which I would find surprising), you could run at least 1 million visitors per day on 3.5 servers. And this guy wants six servers for 1000/day (or a little over). And I don't think that his needs would run anywhere near as complex as the example posted. :-)

2-node failover solution is probably a net lose (1)

James Youngman (3732) | more than 5 years ago | (#27038393)

First, figure out what it means for your website to be available (do people need to be able to fetch a page, or do they also need to be able to log in, etc.). Select monitoring software and set it up correctly.

As for the serving architecture, at this level of load you're better off without clustering. You don't need it for the load, and it's probably a net loss for reliability; most outages I've seen in two-node clusters are either infrastructure failures that take both nodes out (power distribution failures, for example) or problems with the HA system itself (switches going into jabber-protection mode and provoking a failover, failure detection script bugs, etc.). If you really feel that a single machine does not offer enough protection, go for an active-active configuration and simplify the problem to directing incoming requests to the working web servers, as opposed to "failing over".

This changes a bit if your reliability needs are high enough to justify separate serving facilities in separate data centres in different cities. For that sort of stuff you need to look at working with DNS to solve part of the problem too, but the right approach there depends on to what extent the website is static content.

Re:2-node failover solution is probably a net lose (1)

netcrusher88 (743318) | more than 5 years ago | (#27038851)

Actually 2-node active-passive can be a very good idea.

Let's say you have two nodes behind a load balancer (only way to replicate functionality active-active... you could do the thing where one server is static though, like youtube does). You need a shared filesystem, so you need another node to act as a NAS. What if your app is database-backed? You can stick that on the NAS, probably. But then it's not redundant.

It's really just simpler to have unidirectional replication, then script it to switch direction upon failover. The Linux-HA project makes it relatively easy, since they've been working on that for years.

I was running a local free (1)

sam0737 (648914) | more than 5 years ago | (#27038405)

and was handling something like hundreds of thousands of hits a day, with off-the-shelf hardware specced 10 years ago (like 512MB RAM and a first-generation Pentium 4).

There was no problem at all.

We also used to load balance the web requests, using yet another, bigger Linux NFS server for backend storage.

The biggest problems for HA are:
1. How you sync the data over, or whether you rely on central storage, which is then a single point of failure again.
2. If it involves a database, then it is a much bigger issue...

I assume you don't need sub-second failover. 5 minutes downtime might even be OK. You might want to shoot for a Hot Standby solution, instead of Load Balancing solution, which should be a little bit easier on everything.

STOP. You have no idea what you're doing. (4, Interesting)

Enleth (947766) | more than 5 years ago | (#27038423)

I'm sorry, but I have to say that. Don't be offended, please - sooner or later you will look at your submission and laugh really hard, but for now you need to realise that you said something very, very silly. A few people already politely pointed out that 1000 visitors a day is nothing - but seriously, it's such a great magnitude of nothingness that, if you make such a gross misinterpretation of your expected traffic, you need to reconsider whether you really are the right person for the job *right now* and maybe gain some more experience before trying to spend other people's money on a ton of hardware that will just sit there idle and consume huge amounts of electricity (also paid for with other people's money).

I'm serving a 6k/day website (scripting, database, some custom daemons etc.) from a Celeron 1.5GHz with 1GB RAM, and it's still doing almost nothing. If you really have to have some load balancing, get two of those for $100 each.

FTP proxy with IPv6 support (-1, Redundant)

Cronq (169424) | more than 5 years ago | (#27038425)

I was looking for a FTP reverse proxy that supports IPv6 (to make a IPv4 site visible over IPv6, too) and found nothing so far.

Does anyone know such beast?

Pointless (5, Informative)

ledow (319597) | more than 5 years ago | (#27038429)

1000 users a day? So what? That's less than one user a minute. Even if you assume they stay on the website for 20 or so minutes each, you're never looking at more than about 20 users at a time browsing content (there will be peaks and troughs, obviously). Now picture a computer that can only send out, say, 20 x 20 pages a minute (assuming your visitors view a full page every 3 seconds) - we're talking "out of the Ark". Unless they are downloading about half a gig of video each, this is hardly a problem for a modern machine.

I do the technical side for a large website which sees nearly ten times that (as far as you can trust web stats) and it runs off an ordinary shared host in an ordinary mom-n-pop webhosting facility and doesn't cost anywhere near the Earth to run. We often ask for more disk space; we've never had to ask for more bandwidth or more CPU, or got told off for killing their systems. Admittedly, we don't do a lot of dynamic or flashy content, but this is an ordinary shared server which we pay for out of our own pockets (it costs less than our ISP subscriptions for the year, and the Google ads make more than enough to cover that even at 0.3% clickthrough). We don't have any other servers helping us keep that site online (we have cold spares at other hosting facilities should something go wrong, but that's because we're highly pedantic, not because we need them or our users would miss us) - one shared server does the PHP and MySQL, serves dozens of gigabytes per month of content for the entire site, generates the statistics etc., and doesn't even take a hit. I could probably serve that website off my old Linux router over ADSL and I doubt many people would notice except at peak times because of the bandwidth.

Define "massive" too... this site I'm talking about does multiple dozens of Gigabytes of data transfer every month, and contains about 10Gb of data on the disk (our backup is now *three* DVD-R's... :-) ). That's *tiny* in terms of a lot of websites, but equally puts 99% of the websites out there to shame.

Clustering is for when you have more than two or three servers already and primitive load-balancing (i.e. databases on one machine, video/images on another, or even just encoding half the URLs with "" etc.) can't cope. In your case, I'd just have a hot spare at a host somewhere, if I thought I needed it, with the data rsync'd every half-hour or so. For such a tiny thing, I probably wouldn't worry about the "switchover" between systems (because it would be rare and the users probably don't give a damn) and would just use DNS updates if it came to it. If I was being *really* pedantic, I might colo a server or two in a rack somewhere with the capability for one to steal the other's IP address if necessary, or have DNS with two A records, but I'd have to have a damn good reason for spending that amount of money regularly. If I was hosting in-house and the bandwidth was "free", I'd do the same.
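The half-hourly rsync to a hot spare described above is a one-line cron job; a sketch with a made-up hostname and paths:

```
# crontab fragment -- spare.example.org and the paths are hypothetical.
# --delete keeps the spare an exact mirror; drop it if that makes you nervous.
*/30 * * * * rsync -az --delete /var/www/ spare.example.org:/var/www/
```

Pair it with SSH keys for unattended runs, and the "failover" is just repointing DNS at the spare.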

Seriously - this isn't cluster territory, unless you see those servers struggling heavily on their load. And if I saw that, I'd be more inclined to think the computers were just crap, the website was unnecessarily dynamic, or I had dozens-of-Gigabytes databases and tens or hundreds of thousands of daily visitors.

You're in "basic hosting" territory. I doubt you'd hit 1Gb/month traffic unless the data you're serving is large.

Failure scenarios and costs. (1)

samson13 (1311981) | more than 5 years ago | (#27038437)

If you're planning an HA solution, the first step is to decide what you are trying to protect against, the cost/consequence of those events occurring, and a method to test failure events.

I've seen projects where the HA configuration has contributed to more downtime than any specific failure. I've seen projects that were too "important" to schedule test failures so when it did fail it didn't fail over.

In a lot of cases if a specialist site is down then people would come back later. If your consequences are not that high for an outage then save your money for good backups and good support contracts and maybe a cold/warm spare. If slashdot crashed now I'd just check again next time I had a chance.

An HA solution has to be designed from end to end. This isn't easy, and some of your components may not work in a compatible way (black-box software). Static content can be pretty easy to load balance and fail over, but once you start getting into dynamic content things become more complicated and uncertain.

If you have to worry about session persistence, an unexpected event might redistribute connections, causing existing connections to break over something that was very transient - i.e., it amplifies a minor fault.

I've seen applications that didn't pass their status through to the web server. There was a significant back end failure and the web server was still returning "200 OK" responses to the requests. The other servers were still working correctly and due to session persistence the people diagnosing the issue initially didn't realise that 25% of sessions were empty pages. The developer should have provided checks in their code, the load balancer could have done a different check, the initial level 1 support didn't really understand the system. All these have costs and consequences. i.e. development time and skills, risk that a content change might cause a service check to fail, training costs.

Depending on what you are doing... (1)

mlwmohawk (801821) | more than 5 years ago | (#27038439)

Buy two good quality machines and keep one as a hot spare and just backup every night.

The current "uptime" of a couple of my systems is 255 days, and that's only because of a power failure and subsequent end of generator fuel at my colo, which no amount of on-site redundancy would have helped.

Good quality machines and software *will* run for a year or more with no issues.

I've been setting up sites at data centers for about 10 years now. Seriously, do the cost/benefit analysis: the base price is a couple of machines, colo, and a backup strategy. Use the standby as a backup server, and download from it nightly. You can figure on access to the internet plus 5 minutes to shut down or repair the non-working box and, if necessary, activate a new IP address on the standby system. The probability of a good system running a solid OS -- FreeBSD or CentOS -- failing is pretty low. Good software components don't often fail, or if they do, they restart.

Seriously, a few of the sites I run have NO redundancy and my biggest risk is NStar and Sprint.

For a fully redundant system, two load balancers, at least 4 servers (two for each load balancer -- redundancy), two high speed switches, etc. etc.

Hardware failure happens, but not that frequently after the first week of service. I have two machines at a colo that are, no joke, 10 years old this year. A few years ago, I replaced the hard disks. This year they will be upgraded -- maybe :-)

Do it the way Google does it :) (1, Informative)

Anonymous Coward | more than 5 years ago | (#27038477)

Buy 2 very cheap computers with double HDs. You can get them for less than $200 each. Then install BSD/Linux with mirrored RAID. Then you can use rsync/unison/your favorite synchronization tool to mirror data between the computers.

Then use [] or [] . You will get relatively easy setup, excellent performance, unbeatable stability, and good load balancing that scales to 10k+ users an hour.

Of course, it all depends on whether you use bloatware or not. It is very easy for dynamic content generation and a database to limit scalability to only a few connections.

So all basic tools are easily available from any free server distribution.

IF YOU WANT 100% AVAILABILITY: Don't forget your networking stuff. You have to have 2 routers and 2 Internet connections. This is why server hosting companies are 10x better and cheaper than doing your own server.
From hosting company you get 24h administration and regular backups. And as a bonus you get pre-installed and pre-configured environment.

We will load test... (2, Informative)

nicc777 (614519) | more than 5 years ago | (#27038521)

I see there is already a ton of good advice here, so when you have your kit set up, post a link so that we can load test your config :-)

It's called the Slashdot effect, and if anything, you will at least know when things break and how your configuration handles these failover conditions.

PS: This is cheaper then buying load testing kit and software :-)

Re:We will load test... (0)

Anonymous Coward | more than 5 years ago | (#27038801)

PS: This is cheaper than buying load testing kit and software :-)

There, fixed that for you.

Re:We will load test... (1)

nicc777 (614519) | more than 5 years ago | (#27039123)

When will FF get a grammar check?

real answer? (0)

Anonymous Coward | more than 5 years ago | (#27038541)

I mean no offense, but so far everybody has been quick to point out that load balancing and stuff isn't what the user needs -- and yet, nobody has come forward with an actual answer.

Round-robin DNS with 2 or 3 Apache Boxes (2, Insightful)

U96 (538500) | more than 5 years ago | (#27038637)

I remember initially setting up our little site with 3 servers and a "managed" loadbalancer/failover solution from our hosting provider. Our domain name pointed to the IP address of the loadbalancer.

I learned that "managed" is actually a hosting company euphemism for "shared" and performance was seriously degraded during "prime time" everyday.

We eventually overcame our network latency issues by ditching the provider's loadbalancer and using round-robin DNS to point our domain name at all three of the 3 servers.

I was using Apache + JBoss + MySQL, and on each server I configured Apache's mod_jk loadbalancer to failover using AJP over stunnel to the JBoss instances on the other 2 servers. I also chose to configure each JBoss instance to talk to a MySQL instance on each box, these being configured in a replication cycle with the other MySQL instances for hot data backup.

For our load, we've never had any problems with this. The biggest component with downtime was JBoss (usually administrative updates), but Apache would seamlessly switch over to use a different JBoss instance.

One of the servers was hosted with a different provider in a different site.
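For anyone reproducing the setup above, the mod_jk side lives in workers.properties; a hedged sketch (worker names, hosts, and ports are made up, and the stunnel wrapping of AJP is omitted):

```
# workers.properties fragment -- illustrative only.
worker.list=lb

worker.node1.type=ajp13
worker.node1.host=10.0.0.11
worker.node1.port=8009

worker.node2.type=ajp13
worker.node2.host=10.0.0.12
worker.node2.port=8009

# The "lb" worker fails over between the AJP nodes automatically.
worker.lb.type=lb
worker.lb.balance_workers=node1,node2
```

Apache is then pointed at the balancer worker with something like `JkMount /* lb`.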

Simple method to provide HA to static websites (1)

this great guy (922511) | more than 5 years ago | (#27038645)

Has any /.er implemented the following ultra-simple solution to provide HA for websites serving static content: having the website DNS name resolve to 2 IP addresses pointing to 2 different servers, and simply duplicating the static content on the 2 servers? How do browsers behave when one of the servers goes down? Will they automatically try to re-resolve the DNS name and attempt to contact the 2nd IP? Or is the well-known DNS pinning security feature preventing them from falling back on the 2nd IP?

get a VPS (1)

Rythie (972757) | more than 5 years ago | (#27038653)

Why not get a small VPS system, and upgrade if/when you need more power?

You get redundant power/disk/networking, all for a much lower cost than a dedicated box. If a physical system dies (quite unlikely anyway) they can move your VPS to another machine and it should be up again pretty soon - which should be good enough for that many users.

You don't need high availability (3, Insightful)

sphealey (2855) | more than 5 years ago | (#27038693)

First, I suggest you read and think deeply about Moens Nogood's essay "So Few Really Need Uptime" [] .

Key quote:

===Typically, it takes 8-9 months to truly test and stabilise a RAC system. As I've said somewhere else, some people elect to spend all of those nine months before going production whereas others split it so that some of the time is spent before and, indeed, some of it after going production.

But that's not all: Even when the system has been stabilised and runs fine, it will a couple of times a year or more often go down and create problems that you never saw before.

It's then time to call in external experts, but instead of just fixing the current cause of your IT crisis, I'd like to suggest that you instead consider the situation as one where you need to spend a good deal of resources in stabilising your system again - until the next IT crisis shows up.

Your system will never be truly stable when it's complex. The amount of effort and money you'll need to spend on humans being able to react to problems, running the system day-to-day, and - very important - keep them on their toes by having realistic (terribly expensive) test systems, courses, drills on realistic gear, networks of people who can help right now, and so forth... is huge.

The ironic thing is this: If you decide that you can live with downtime, and therefor with a much less complex system - your uptime will increase. Of course. ===

And that corresponds pretty well to my experience: the more effort people make to duplicate hardware and build redundant failover environments the more failures and downtime they experience. Consider as well the concept of ETOPS and why the 777 has only two engines.


Some solutions (1)

subreality (157447) | more than 5 years ago | (#27038695)

Others have already covered the "1000 users isn't much" aspect. Benchmark, and verify what each server can handle of your anticipated load, but they're probably right.

Option 1: Don't do it yourself. Look into renting servers from a hosting company. They will often provide HA and load balancing for free if you get a couple servers. Also, having rented servers makes it much easier to scale. If you find that you have 100,000 uniques per day, you can order up a bunch more servers and meet the load within minutes to hours. If you overbought, you can scale back down just as fast.

Option 2: [] plus [] . You use LVS to load balance out to a cluster (including removing failed servers from the pool). You use HA so that two LVS machines can fail over to each other. Note that you can run LVS on the same machines as your load, for a small environment. This is much more DIY than the Windows setup, of course... But honestly, if the setup requirements of this scare you away, then you're not ready to run a fault-tolerant network, regardless of OS.

Option 3: [] . Less DIY, more money. Perhaps that's better for you.

Option 4: Buy a commercial solution. Every major network vendor sells a HA/LB product. I've used them from most of the big players... I'm not going to write a review here, but it'll suffice to say that while they each have their good and bad points, any of them will get the job you've outlined done.

As for the network: The general rule is to reduce your single points of failure to the minimum you can afford. Common ones are: the ISP (BGP is a pain); the routers (each ISP goes to its own router); the switches between (you need to full-mesh links from the two routers to two switches, down through the line as many layers as it goes; your switches need to run STP or be layer-3 switches running OSPF or another routing protocol; don't forget to plug the load balancers into different switches); the power (servers, switches, and routers on separate UPSes such that losing one will leave a fully functioning path); and, depending on how far you want to take this, the data center itself (in case of fire/meteor/EPO mishaps).

Note that all of this is required even for your Windows solution. Are you sure you don't want option 1? :)

As already stated : HAProxy (5, Informative)

amaura (1490179) | more than 5 years ago | (#27038711)

If you're looking for a lightweight open-source load balancer with a lot of features, go for HAProxy. In my company we work with F5 BIG-IPs, Alteon, and Cisco CSS, which are the leading load balancers in the industry; they are really expensive, and depending on the licence you buy, you won't have all the features (HTTP-level load balancing, cookie insertion/rewriting). We first used HAProxy for POCs and now we're installing it in production environments; it works like a charm on a Linux box (Debian and RHEL5) with around 600 users.
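A minimal haproxy.cfg for the HTTP case looks roughly like this (a sketch with assumed addresses and names; check the HAProxy configuration manual for your version):

```
# haproxy.cfg fragment -- backend names and addresses are made up.
frontend http-in
    bind *:80
    default_backend webfarm

backend webfarm
    balance roundrobin
    option httpchk GET /
    server web1 10.0.0.11:80 check
    server web2 10.0.0.12:80 check
```

The `check` keyword plus `option httpchk` gives you active health checks, so a dead backend is dropped from rotation automatically.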

One more thing. (4, Insightful)

OneSmartFellow (716217) | more than 5 years ago | (#27038745)

There is no way to be fully redundant unless you have independent power sources, which usually requires your backup systems to be geographically separated. In my experience, loss of power is the single most common reason for a system failure in a well designed system (after human error that is).

That's how Microsoft makes its money (1)

sphealey (2855) | more than 5 years ago | (#27038759)

=== Reading about Windows 2003 Clustering makes the whole process sounds easy, while Linux and FreeBSD just seem overly complicated. ===

Well, yes, that is how Microsoft makes its money: by releasing versions of complex technology that seem easy compared to the archaic legacy technology. Key word there is "seem", of course; when the chips are really down you will find out if (a) the Microsoft system was as good as, or even the equivalent of, the "archaic" version (b) your deep understanding of the problem you are facing, and ability to fix it, has been improved or disimproved by having the complexity hidden from you by a friendly interface.

YMMV. Obviously Microsoft shifts a lot of kit.


By the way, I would look at Contegix, Connectria, or a similar hosted-services provider serving small and medium-sized businesses. If you are unfamiliar with the technology, hand it over to someone who is, at a reasonable price.

You don't need ms cluster but load balancing (1)

fredc97 (963879) | more than 5 years ago | (#27038789)

Hi. For up to 10,000 users per day, one Windows server can easily handle the load. If you need higher availability, you can use the Windows Network Load Balancing service, which is available in the Standard edition of Windows. You still have to replicate all your data manually, but since each server has a local copy of pages and data, even when you patch your Windows server (once a month on Patch Tuesday) or just reboot, the second node will take over the shared IP address and your visitors will see minimal disruption of service.

The only problems you will have to deal with are user uploads and database sync, if you want each of your servers to have a local copy. Otherwise you can use a third server for database service, but that server would not be redundant. The only way to make an MS SQL server redundant is with the clustering service that comes with Windows Enterprise and SQL 2005 Standard, but watch out for the licensing costs. Ah, and you also need a SAN for your database storage. So in essence: two web servers with Windows Network Load Balancing = cheap; two MS SQL servers with the cluster service = very expensive.

My recommendations:
* Buy decent hardware with good support (any of the big three: IBM, Dell, HP), because when hardware fails you need that motherboard, power supply, HDD, or memory ASAP.
* Use RAID 1 or RAID 5 for ALL storage; you want high availability, after all. I prefer hot-plug drives: you don't want downtime just to swap an HDD, and HDDs are like consumables these days.
* Use Windows Network Load Balancing, if you can afford it, to maintain web site availability.
* Learn Linux if you want cheaper licensing.
* Consider all the costs associated with database clustering; it can easily run you into a $100,000 solution for an MS SQL setup.

It's a bit overkill (0)

Anonymous Coward | more than 5 years ago | (#27038813)

One of my clients recently had 100,000 unique visitors in an hour, on a single web server and a single database server.

You should be fine with decent shared hosting.
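The arithmetic behind claims like this is easy to check. A back-of-envelope sketch for the asker's numbers; the visitor and page counts are assumptions from the question, not measurements:

```python
# Convert daily unique visitors into a peak request rate, assuming
# some share of the day's traffic lands in a single peak window.
def peak_requests_per_second(daily_visitors, pages_per_visit,
                             peak_window_hours, peak_share=0.5):
    """Requests/sec if `peak_share` of the day's page views
    arrive within the peak window."""
    peak_requests = daily_visitors * pages_per_visit * peak_share
    return peak_requests / (peak_window_hours * 3600)

# 1,000 visitors/day, a generous 30 pages each, half of it in one hour:
rate = peak_requests_per_second(1_000, 30, 1, peak_share=0.5)
print(f"{rate:.1f} requests/sec")  # prints: 4.2 requests/sec
```

A few requests per second of mostly static content is well within what a single modest server handles.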

Do what's right for the customer (0)

Anonymous Coward | more than 5 years ago | (#27038819)

Seriously, if this is a non-profit then fiduciary responsibility is probably very important to them. I'm sure they are excited to have someone like you help them but don't use them to "play" enterprise admin. The numbers you have presented are miniscule and I doubt your data is so critical that it requires absolute 24x7 uptime. The amount you would cost them for 1 server would pay for web hosting for several years at a provider as well as greatly reduce the amount of administration.

If you want to be a sysadmin then remember the most important tenet. Always do right by the customer.

OpenBSD, of course. (0)

Anonymous Coward | more than 5 years ago | (#27038833)

OpenBSD, of course. It was just discussed how they're in the process of changing some of the relevant code to improve things even further.

As it is now, you can easily do exactly what you need with OpenBSD and CARP (and some other related tools in the base system) - for free, and securely!

Stonesoft solution (1)

HeraldMage (50053) | more than 5 years ago | (#27038837)

If high availability is your concern, then you need redundancy from end-to-end, not just in the servers. A cost-effective way to do that is use Stonesoft's firewall/VPN solution. It can load balance DSL, cable modem and other Internet connections, clusters the devices themselves, and perform back end server load balancing of your Web servers. The centralized management is very powerful as well. 30 day evaluations available off their Web site.

[full disclosure: I own no monkeys, but I do work for Stonesoft]

Clustering is a marketing concept (0)

Anonymous Coward | more than 5 years ago | (#27038859)

I do realise that clustering has its uses, but the truth is that most clustering and HA solutions are merely marketing tricks to sell consulting and expensive hardware to gullible IT managers with an overblown sense of self-importance. The more money you spend, the more important you are. Right?

How else are you going to justify your huge salary?

CentOS/HA (5, Informative)

digitalhermit (113459) | more than 5 years ago | (#27038867)

It's fairly trivial to install RedHat/CentOS based clusters, especially for web serving purposes.

There are a few components involved:
1) A heartbeat to let each node know if the other goes out.

2) Some form of shared storage if you need to write to the filesystem.

3) Some method of bringing up services when a node fails over.

A web server with a backend database is one of the canonical examples. You'd install the heartbeat service on both nodes. Next, install DRBD (Distributed Replicated Block Device). Finally, configure the services to bring up during a failover. The whole process takes about an hour following instructions on sites like HowtoForge.
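For the shared-storage piece, a hedged sketch of what a DRBD resource definition looks like; the hostnames, devices, and addresses are placeholders:

```
# Illustrative drbd.conf resource (DRBD 8-style syntax).
# Protocol C = synchronous replication: a write completes only
# after both nodes have it.
resource r0 {
    protocol C;
    on node1 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address;
        meta-disk internal;
    }
    on node2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address;
        meta-disk internal;
    }
}
```

You then put the filesystem on /dev/drbd0 and let the cluster manager mount it on whichever node is active.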

But 1000 visitors a day is not much. It's small enough that you could consider virtualizing the nodes and just using virtualization failover.

Hire a technical architect (2, Informative)

Anonymous Coward | more than 5 years ago | (#27038917)

There are way too many questions that need to be answered before a competent technical architect can help design the "just right" solution for you.

Most of the people here are experts on some small part of the solution and will spout "all you need is X" - and that's fine for free. I've worked on telecom - can never go down - systems for over 10 years as a technical architect, leading project teams from 1 to over 300 software developers, plus 20 others on the hardware side.
On the surface, FTP and web pages don't sound like the best solution to the problem as stated. Did you just learn HTML and want to use it?

Now, here's my $0.02 on your problem:
* 1,000 visitors a day can be run from my cell phone. That's "nothing" traffic for a network or an old desktop.
* Avoid clustering at the OS or application level unless you really, really need it. You probably don't. Almost nobody needs clustering.
* Use network load balancing. There are many, many solutions for this. The easiest is from F5 (buy through Dell), but free versions work fine too - I've been using `pound` for years myself. /. may still use pound for load balancing, so you know it scales.
* Backups are key. RAID is not backups. Verify that you can actually **recover** from bare metal using your backups. Don't pull a Ma.gnolia []
* Disaster Recovery is important. Often, you can solve both backup and recovery and DR at the same time.
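The "RAID is not backups - verify that you can actually recover" point can be sketched as a routine you run after every backup. This is a toy illustration (the paths and filenames are made up), not a full bare-metal recovery test, but the principle is the same: restore to a scratch location and compare against the source:

```python
# Back up a directory to a tar.gz, restore it elsewhere, and
# verify the restored tree matches the original (top level only,
# which is enough for this sketch).
import filecmp
import tarfile
import tempfile
from pathlib import Path

def backup_and_verify(src: Path, work: Path) -> bool:
    archive = work / "backup.tar.gz"
    restore = work / "restore"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(src, arcname=src.name)
    restore.mkdir()
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(restore)
    cmp = filecmp.dircmp(src, restore / src.name)
    return not (cmp.left_only or cmp.right_only or cmp.diff_files)

with tempfile.TemporaryDirectory() as tmp:
    site = Path(tmp) / "site"
    site.mkdir()
    (site / "index.html").write_text("<h1>hello</h1>")
    print("restore verified:", backup_and_verify(site, Path(tmp)))
```

A backup you have never restored is a hope, not a backup.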

If you are a non-profit doing something I believe in, I'll do the network, systems, backup-and-recovery, and DR designs and consult with you for free - an enterprise-class solution. My company looks at FOSS solutions first, before recommending commercial, costly ones. All our internal systems are FOSS, though we do have a lab with Microsoft servers, since that's what many customers demand/need.

Think of a good TA just like a CPA or lawyer. You pay us to prevent all the problems that could happen later and cost you huge amounts of money. After my CPA does my taxes, I sleep better at night.

Too Obvious ! (0)

Anonymous Coward | more than 5 years ago | (#27038953)

Your best solution:
    An ordinary PC with Centos (or equiv.) loaded.
You will have at the end of the day:
    1) A perfectly good solution for your application.
    2) Learned that Linux is not hard to learn and that the Linux community supports you better than M$.
    3) Your pride will be intact. More money for your non-profit, less for Steve Ballmer.

Microsoft is expensive (1)

lucm (889690) | more than 5 years ago | (#27038989)

With 1000 users, if you want SQL Server you need to purchase a per-processor license: $5k/CPU for Standard Edition, $25k/CPU for Enterprise (you only license physical CPUs, not cores or hyperthreading). Add the Windows license ($6k). And you have no hardware yet.

The "good news" is that with failover clustering (which is all you need, because 1000 users does not require load balancing), Microsoft requires licenses only for the active node. And the failover node can be cheaper hardware, as it will run only in abnormal situations and can offer lower performance (management is usually OK with that).

If you go with Linux + Postgres or MySQL, you pay no licenses. Those products are a bit less user-friendly, but they give you more control over your setup. Use database clustering and/or replication, and use either one of the many free load-balancing software or pay for a very good one (like Zeus).

Based on my experience, I would say: for a small intranet, use Microsoft (Windows, SQL Server, Sharepoint) because you can leverage on MS-Office and powerful groupware tools (project management, BI, reporting) and actually provide value to your end-users. But for a large intranet or for public-facing sites, where you don't control the end-users platform, use Linux, it's worth the learning curve.

Sun gives you easy web clustering (0)

Anonymous Coward | more than 5 years ago | (#27039005)

A few years ago we faced the need to host and maintain a Java web service. We started looking into common Java containers like Tomcat, JBoss and, naturally, GlassFish. The only problem we saw was that the application server had to function as a backend, so we would need the webserver to relay requests.

Eventually we stumbled upon the Java System Web Server 7 [] and it turned out to be much more than merely a webserver with a nice administrative interface. If you're used to administering Apache servers then it can be a bit tricky to get used to, since the server uses XML for all its configuration files (that is, if you choose not to use the admin interface). At first we focused entirely on the Java container, but eventually discovered that you could do a whole lot more with this critter.

Personally I think it really excels at clustering. If you make changes on one node, one command (or two clicks of the mouse) is enough to distribute those changes across the whole cluster. Next to that, it has excellent (online) documentation [] and is free to use, just like Apache. Oh, and before I forget: while it is aimed at Java usage, it's also perfectly capable of supporting other languages like PHP, either by using a PHP add-on [] or simply setting up PHP as a sort of "back end" (allowing the use of FastCGI, for example).

Considering the price and the ease of use (set up a cluster in approx. six steps [] ), I think this might be just what you want. And its extensive onboard statistics engine will let you see clearly whether the load on your server park is getting too high.

And yes, I agree with most other reactions that your load really doesn't need clustering. I'll add a little more to that: the service I mentioned above is currently still running on a single Web Server 7 instance and easily deals with more than that amount of traffic. We did tune the Java container to suit our needs, but apart from that, even an app server should be capable of handling this load. Having said that, I think you might find this webserver very useful nonetheless; the administrative interface especially might save you guys a lot of tweaking.

Citrix XenServer is good (1)

cyberspittle (519754) | more than 5 years ago | (#27039021)

Although Citrix XenServer is based on Linux, it has a Windows interface for management, which makes most tasks easy.

What is your boss's phone number? (1)

codepunk (167897) | more than 5 years ago | (#27039025)

Linux overly complicated... ha ha

I will sell him a system fully capable of handling ten times that traffic, with hot-standby failover, for 50 bucks a month, with DS3 bandwidth available to it.

Use CARP (2, Informative)

chrysalis (50680) | more than 5 years ago | (#27039051)

CARP is a protocol that does automatic load balancing and IP failover.

Install your application on two (or more) servers, give them the same virtual IP address using CARP, et voila. Nothing more to buy, and no need to install any load balancer.

CARP's reference implementation is in OpenBSD, where it ships by default. DragonFly BSD, NetBSD and FreeBSD ship with an older version.
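On OpenBSD the failover setup above can be sketched in the hostname.if(5) files; the interface name, vhid, password, and advskew below are made-up examples:

```
# /etc/hostname.carp0 on the primary (placeholders throughout):
inet vhid 1 carpdev em0 pass mysecret

# /etc/hostname.carp0 on the backup -- the higher advskew makes it
# advertise less aggressively, so it only takes over the shared IP
# when the primary stops advertising:
inet vhid 1 carpdev em0 pass mysecret advskew 100
```

Both hosts keep their own real addresses on em0; clients only ever see the shared CARP address.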
