Apache Bandwidth Limiting?
IOOOOOI asks: "I work at a high-traffic web hosting company and we're trying to find a simple, effective way to limit bandwidth hogs, some of whom we've clocked pulling over 4Gb/hr off our servers. We've tried mod_throttle and have looked into QoS/fair queuing as well as a couple of custom in-house solutions. None of these quite did the trick. Has anyone found an effective way to do this, one that can handle individual connection streams?"
Maybe I'm missing something... (Score:2)
This rectifies the disparity between flat rate pricing and incremental bandwidth costs.
When I went to find a solution for my web application, I chose to put in a DSL line and host it myself (because of the complexity of the app, this is cheaper than colocating my machines there), but I chose a DSL provider that doesn't offer "all you can eat" and instead charges for bandwidth.
The reason I chose this is the theory that the bandwidth hogs would go elsewhere and the latency at this ISP would be much lower. So far this has proven true, and I've yet to exceed the basic "free" bandwidth level.
If, on the other hand, you're talking about people who are downloading your customers' content at huge rates, then maybe you should charge your customers based on the service they are providing. If they're hosting lots of large files, they should probably be paying more...
Dunno if that's a viable solution-- but smart customers will prefer someone who charges "by the byte"... because the bytes are better quality.
Re:Maybe I'm missing something... (Score:3)
What Apache needs is something where you can say that someone visiting the site only gets x amount of bandwidth. And if someone tries to use too much by downloading too much, stop sending things to them.
Re:Maybe I'm missing something... (Score:1, Funny)
http://humor.student.utwente.nl/images/bil
Will put those fucking punts in their place.
Re:Maybe I'm missing something... (Score:1)
Re:Maybe I'm missing something... (Score:3, Insightful)
The issue is not our customers' bandwidth consumption and how much they can or can't use. It's about being able to provide services to all of their users without experiencing slowdowns because of the occasional hog.
Are your slowdowns bandwidth- or CPU-based? If you are serving lots of static content (like porn), then Apache is going to kill you, due to its process-per-connection model, which the developers refuse (read: are too lazy) to fix. Zeus doesn't have this problem. Neither do the open source boa or thttpd (but they unfortunately lack many important features, which may stop them from being used for commercial web hosting). Zeus will allow you to max out your network card (100Mbit) on a modest machine (P3/500 w/ 1GB RAM).
Re:Maybe I'm missing something... (Score:1)
C'mon, have you tried to tune Apache?
Process-per-connection is not a problem -- you just have to keep the process pool big enough. There are some other tricks, but you could saturate a 100Mbit network with a P3/500 and Apache as well.
Re:Maybe I'm missing something... (Score:2)
C'mon, have you tried to tune Apache? Process-per-connection is not a problem -- you just have to keep the process pool big enough. There are some other tricks, but you could saturate a 100Mbit network with a P3/500 and Apache as well.
I seriously doubt it, not in real world conditions. When you include things like mod_php and mod_perl, those Apache processes get big. Our hosting servers (running Zeus) get 15-20 thousand hits a minute. That's ~333 hits per second. Say each client is downloading 50k images at 2k per second. That means you have 300+ new connections opening per second, that stay open for 25 seconds. So you need to be handling 7500+ concurrent connections.
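The back-of-the-envelope numbers above check out; a quick sketch of the same arithmetic (figures taken straight from the comment, nothing measured):

```shell
# Sanity-check the concurrency arithmetic from the comment above
# (20k hits/minute, 50 KB objects, clients pulling 2 KB/s):
hits_per_sec=$((20000 / 60))             # ~333 requests/s at 20k hits/minute
secs_per_download=$((50 / 2))            # 25 s to fetch 50 KB at 2 KB/s
concurrent=$((300 * secs_per_download))  # at a conservative 300 new conns/s
echo "$hits_per_sec $secs_per_download $concurrent"
```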
Keep-alives and such will help with this, but a high-traffic HTTP server needs to handle at least 1000-2000 connections concurrently. Show me a P3/500 that is running 2000 Apache processes, processing scripts, etc., and isn't dying. It just won't happen. The process-switching overhead alone will kill you. Read this page [kegel.com], then tell me that Apache's I/O model doesn't suck.
Re:Maybe I'm missing something... (Score:1)
a few hypothetical situations:
* customer cannot afford to pay for bandwidth. customer leaves hosting company for another provider and hosting company has to eat the bill.
* customer is getting hammered so hard that they affect other customers, resulting in a bunch of cranky customers with slow websites.
it doesn't matter whether you offer unlimited bandwidth or charge per byte/mb/gb/whatever.. problems can still arise when someone's site gets slashdotted or someone leaks a password for a porn site..
altqd (Score:3, Informative)
Packeteer (Score:2, Informative)
Re:Packeteer (Score:1)
I'm glad you asked (Score:1)
One solution if you have mod_perl (Score:5, Informative)
mod_bandwidth (Score:2, Informative)
It works quite well and will throttle per-connection or per-virtualhost.
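For reference, a minimal per-virtualhost sketch of what that looks like. Directive names are from memory of the cohprog docs and the rates are made up, so verify against the module's own documentation before relying on this:

```apache
# Cap every client of this vhost at ~10 KB/s, with a 1 KB/s floor
# (mod_bandwidth rates are in bytes per second)
<VirtualHost 10.0.0.1>
    ServerName example.com
    BandWidthModule On
    BandWidth all 10240
    MinBandWidth all 1024
</VirtualHost>
```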
my experiences (Score:2)
I did play with mod_throttle, and all it actually did was allow all traffic until the limit was reached, and then deny the next new connections. Hmm, not too great, actually.
I'm planning to try out mod_bandwidth, but I don't know if it works differently.
Bad link (sorry, I don't feel like writing HTML right now):
http://www.cohprog.com/v3/bandwidth/doc-en
I tried playing with QoS on linux 2.4.
According to the documentation it's actually quite hard to get that working, because if you have a 10Mbit connection, it will shape the traffic relative to that. But 10Mbit is not always the same. If you have lots of lost packets it will behave differently than with a perfect connection.
In my experience I couldn't reliably limit the traffic on a 10Mbit connection down to 80kbit (almost 1% of the 10Mbit). My 16kbyte/s cable connection could still get choked.
Maybe I should just get a card of 1 Mbit and try again, the numbers might be better then.
Or hey, a card of 100 Kbit
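For anyone wanting to reproduce the experiment above, the 2.4-era shaping boils down to something like this minimal token-bucket sketch (assumes iproute2 is installed and you're root; the device name and rates are illustrative):

```shell
# Attach a simple token-bucket filter capping eth0 at 80 kbit/s
# (parameters per `man tc-tbf`; burst/latency values are guesses to tune)
tc qdisc add dev eth0 root tbf rate 80kbit burst 10kb latency 50ms
tc qdisc show dev eth0   # verify the qdisc took effect
```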
Re:my experiences (Score:2)
No, not really. I would have to type all the tags, which is quite annoying IMO. And for 10 or 12 lines of text I'd need to format it by hand.
Squid as accelerator (Score:2)
Derek
You're missing the point (Score:3, Insightful)
"Blocking" network appliances such as Packeteer can't handle these high rates, and even if they had gigabit interfaces, they would only be able to do 600-800mbps on them.
None of the kernel QoS/queueing options I've seen allow for anything other than classifying traffic or "fair" queueing. None of this seems to help someone that wants to limit all webserver connections to 2mbps - everything here is expecting an IP range, ports, or something to distinguish by. What if I don't want to?
Apache needs real per-connection, per-user, and per-IP rate limiting. mod_throttle and everything else I've seen has to starve connections after they perform too well. How about something that hard-limits connections to 2Mbit/s? I will pay for anything that can do that for Apache today...
Forgive me if I have overlooked the obvious...
Re:You're missing the point (Score:2, Interesting)
Apache needs real per-connection, per-user, and per-IP rate limiting. mod_throttle and everything else I've seen has to starve connections after they perform too well. How about something that hard-limits connections to 2Mbit/s? I will pay for anything that can do that for Apache today...
Then head for eBay, because a moderate-cost solution to your particular problem (limiting all web traffic to 2 megabits/s) is available for two bids and some cable work: buy two Ascend Pipeline 130s and run them back-to-back with a T1 cross-over cable. Another advantage of this solution is that your web server can be located near the webmaster, up to 5000 feet (without repeaters) from your network access point. Indeed, if you partition all of your services (mail, news, web server, ftp server) then no one service can completely swamp your connection.
Don't like using T1 routers? Then get a moderately powerful Intel computer, install enough Ethernet interfaces to satisfy your needs, load up a modern Linux distribution with a 2.4.18 kernel and IPTABLES, and set up rules that will traffic-limit the interface to which you connect your web server. If you are like a lot of people who run multiple servers on the same box, the rules can "customize" the throttling by service. Not only that, but you can throttle by direction as well: incoming HTTP could be limited to 30 kilobits/s while outbound HTTP could be limited to 3 megabits/s -- that takes care of some of the problems with DoS attempts on HTTP. The same can be done for other services, such as FTP, mail, and IRC. The amount of control that IPTABLES provides is, well, interesting.
(Yes, I know that the *BSD people have something similar, but I know the IPTABLES stuff better and have seen it work.)
C'mon, people, this isn't all that hard to do if you think and are willing to put a little money where your wishes are.
Re:You're missing the point (Score:2)
I am having the same problem, and I think you guys are missing the point. He said 4GB an hour, which means he probably has an OC-3, OC-12, or Gigabit Ethernet connection.
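A quick sanity check on that 4GB/hour figure as a sustained bit rate (decimal gigabytes assumed; binary gigabytes give roughly 9.5 Mbit/s instead):

```shell
# 4 GB/hour -> bits per second (decimal GB)
bps=$((4 * 1000000000 * 8 / 3600))
echo "$bps"   # just under 9 Mbit/s sustained
```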
That's only 9.1 mbps. T1 = 1.544, T3 = 44.736, OC1 = 51.84. OCx = OC1 * x.
heh... (Score:2)
Using ipfw and dummynet on FreeBSD is the way I have gone in a VERY high-traffic hosting and colo company.
You can not only simulate a link of a certain speed, but can also limit any IP that hits a certain destination to a max speed...
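A minimal sketch of that ipfw/dummynet setup (dummynet support must be compiled into the kernel; syntax per ipfw(8), and the rates and rule numbers here are illustrative):

```shell
# One 2 Mbit/s dummynet queue *per client IP* -- the mask makes
# dummynet keep a separate queue for each /32 rather than one shared pipe
ipfw pipe 1 config bw 2Mbit/s mask dst-ip 0xffffffff
# Send all outbound web traffic from this box through the pipe
ipfw add 100 pipe 1 tcp from me 80 to any out
```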
FreeBSD Traffic Shaping (Score:1)
Here [onlamp.com] is a story about a problem that sounds identical to yours. A hosting company (using a virtual host) has a customer who uses excessive bandwidth, and they wish to throttle it. After trying mod_throttle, they went with a better solution.
If you're not using FreeBSD, I am very surprised. Perhaps you should look into it.
D.
QoS with routers/switches can do the trick. (Score:1)
for a good tutorial on Traffic Policing and Traffic Shaping, two ways of doing what you require with Cisco hardware.
CAR in Cisco Routers (Score:1)
This is assuming that you're not running virtual hosting (multiple domains sharing one IP address), in which case all customers on that IP/physical port would be affected by the CAR limitations you would impose. It is possible with the amount of traffic you're talking about. Just make sure that the puppy has a good processor and plenty of RAM.
Use Zeus (Score:4, Insightful)
You want Zeus because it is high performance (it doesn't use the toy process-per-connection model). It comes with an easy to use, powerful web based GUI. The GUI doesn't just hold your hand. It lets you set everything, and then will show you the exact lines that are changing in the config files.
It doesn't use the extremely complex config-file format that Apache uses. A good comparison is BIND and djbdns. Do you want to deal with the incredibly complex BIND zone files, or the simple, one-record-per-line data files that djbdns uses? Zeus config files are one record per line, of the form "modules!throttle!enabled yes". It also comes with tools that let you do everything from scripts -- but only if you want to. Otherwise, use the GUI.
And speaking of throttling, Zeus does it correctly, unlike any other web server (at least any of the freely available UNIX ones, as that is all I am familiar with). It will let you set a limit on the number of users, or set a max number of bytes per second, at the virtual server or subserver level. It doesn't serve some people at max speed and then start dropping connections (mod_throttle), or set the throttle speed at the beginning of the request and then start dropping connections (thttpd).
Virtual servers in Zeus actually make sense. There is no master server configuration like in Apache. Instead, you create one or more virtual servers. As such, each virtual server has its own separate configuration. Virtual servers can serve a single website, or any number of websites, via subservers. Subservers all share the configuration of the virtual server (kind of like Apache's mass virtual hosting only much better). No more restarting the server to add a site. Simply create the directory, and it starts serving the site.
There are plenty of other reasons why Zeus is superior to Apache, but the ones I listed should be enough to start considering it. No, I don't work for Zeus or own stock (don't think they have any) or anything like that. I'm just a satisfied customer.
For some things, Apache works just fine. But for anything high-traffic that requires throttling or needs a flexible or scripted configuration, Zeus beats Apache hands down. It's worth every penny. Check it out. I doubt you'll be disappointed.
(subconscious message to Apache developers: stop being lazy and make Apache more like Zeus!)
Re:Use Zeus (Score:1)
Are the Slashdot readers this ignorant? Everyone else suggested QoS methods that would do nothing to help *per-user* connections. Are people really this obtuse? The first poster and I were very clear about what we wanted to do, and people came up with pretty lame stuff that was way off the mark.
The problem with the IT industry is that there are so many clueless people that have the "experience", and make good money. They dilute the talent and make it hard for a real wiz to make money anymore. How many people do you know that fall in the category: "Knows enough to be dangerous"?
Re:Use Zeus (Score:1)
Zeus really is great; it has some wonderful clustering features too, and admin for the whole cluster can be done from one place. At the very least it's worth taking a look at the 30-day trial version to get an idea of how much work it would be to port the scripts across.
On a large site, you'll quite likely save the license cost by the decreased use of resources.
(AOLServer [aolserver.com] is a good server too, though it doesn't have the nice admin of Zeus there's a lot it can do and is also very efficient. I'm not sure whether it can throttle bandwidth by itself though).
netstat gives the data (Score:1)