
Apache Bandwidth Limiting?

Cliff posted more than 12 years ago | from the whoa-cowboy! dept.

Apache

IOOOOOI asks: "I work at a high-traffic web hosting company and we're trying to find a simple, effective way to limit bandwidth hogs, some of whom we've clocked pulling over 4Gb/hr off our servers. We've tried mod_throttle and have looked into QoS/fair queuing, as well as a couple of custom in-house solutions. None of these quite did the trick. Has anyone found an effective way to do this, one that can handle individual connection streams?"


44 comments


PRIMVM POSTVM (-1, Flamebait)

Anonymous Coward | more than 12 years ago | (#3918307)

ipsvm dolorvm est ... primvm postvm, bitches

Re:PRIMVM POSTVM (0)

Anonymous Coward | more than 12 years ago | (#3919194)

If anything, this should be modded troll, not flamebait. It's just a typical FIRST POST, not anybody saying Microsoft is good and should be allowed to rule the world. Since when was a first post considered flamebait?

Maybe I'm missing something... (2)

BitGeek (19506) | more than 12 years ago | (#3918314)

But the solution seems easy to me. Simply charge your customers for their bandwidth.

This rectifies the disparity between flat rate pricing and incremental bandwidth costs.

When I went to find a solution for my web application, I chose to put in a DSL line and host it myself (because of the complexity of the app, this is cheaper than colocating my computers there)... but I chose a DSL provider that doesn't offer "all you can eat" service and instead charges for bandwidth.

The reason I chose this is the theory that the bandwidth hogs would go elsewhere and the latency at this ISP would be much lower. So far this has proven true, and I've yet to exceed the basic "free" bandwidth level.

If, on the other hand, you're talking about people who are downloading your customers' content at huge rates, then maybe you should charge your customers based on the service they are providing. If they're hosting lots of large files, they should probably be paying more...

Dunno if that's a viable solution-- but smart customers will prefer someone who charges "by the byte"... because the bytes are better quality.

Re:Maybe I'm missing something... (3)

Apreche (239272) | more than 12 years ago | (#3918361)

You are missing something. My friend runs a semi-large website and has had multiple hosts. His website holds a great many images (not porn), mostly GIF, JPEG, etc.; the rest is HTML and CGI. Most visitors to the site come in, look at a few pictures, download a couple and leave. Very recently someone wrote a spider program that went through his entire website and downloaded every single image. He stopped it before it finished, but he had a very hard time finding a way to prevent it from happening again.

What Apache needs is something that lets you say a visitor to the site only gets x amount of bandwidth, and if someone tries to use too much by downloading too much, stop sending things to them.

Re:Maybe I'm missing something... (0)

Anonymous Coward | more than 12 years ago | (#3918403)

Some porno sites have limits by account, and they run Apache.

Re:Maybe I'm missing something... (0)

Anonymous Coward | more than 12 years ago | (#3918493)

Very recently someone wrote a spider program that went through his entire website and downloaded every single image. He stopped it before they finished, but he had a very hard time finding a way to prevent it from happening again.

Dude, I don't know if you noticed, but that's what "THE WORLD WIDE WEB" is about. If you don't want people to download your images, you DON'T PUT THEM ON A WEB SITE. I bet you were one of those people who wanted the names and addresses of everyone who visited your "home page" too.

Re:Maybe I'm missing something... (0)

Anonymous Coward | more than 12 years ago | (#3928471)

Christ, I'm finally glad someone said this. This is the most retarded thing I've ever heard. Bandwidth throttling for web hosts is one thing, but throttling the requests? Nice concept... Try messing with the URL (name/value pairs) so it makes it a little bigger pain in the ass.

Re:Maybe I'm missing something... (1, Funny)

Anonymous Coward | more than 12 years ago | (#3918505)

Yes, a few bills like this:
http://humor.student.utwente.nl/images/bill-lullig.jpg

Will put those fucking punts in their place.

Re:Maybe I'm missing something... (1)

IOOOOOI (588306) | more than 12 years ago | (#3921278)

This is EXACTLY the goal we're trying to reach. We need to limit the rate of data streams so that the 10 or so users with fat pipes don't make it hard for the thousands of average users to get the content which they paid for.

Re:Maybe I'm missing something... (1)

MmmmJoel (26625) | more than 12 years ago | (#3933520)

Have you considered requiring registration?

Re:Maybe I'm missing something... (-1, Troll)

PhysicsGenius (565228) | more than 12 years ago | (#3918467)

What are you, some kind of fascist? Paying for stuff you use is an inherently unfair and biased methodology. Haven't you ever heard of Fair Use or the First Amendment?

I think the solution to this guy's problems is to use the GPL. If he'd open up his hosting company we could all mirror the content on a P2P, which would reduce his bandwidth needs.

Re:Maybe I'm missing something... (1)

IOOOOOI (588306) | more than 12 years ago | (#3919354)

The issue is not about our customers' bandwidth consumption and how much they can/can't use. It's about being able to provide services to all of their users without experiencing slowdowns because of the occasional hog.

Re:Maybe I'm missing something... (3, Insightful)

Electrum (94638) | more than 12 years ago | (#3922092)

The issue is not about our customers' bandwidth consumption and how much they can/can't use. It's about being able to provide services to all of their users without experiencing slowdowns because of the occasional hog.

Are your slowdowns bandwidth or CPU based? If you are serving lots of static content (like porn), then Apache is going to kill you, due to its process-per-connection model, which the developers refuse (read: are too lazy) to fix. Zeus doesn't have this problem. Neither do the open source boa or thttpd (though they unfortunately lack many important features, which may keep them out of commercial web hosting). Zeus will let you max out your network card (100 Mbit) on a modest machine (P3/500 with 1 GB RAM).

Re:Maybe I'm missing something... (1)

tigga (559880) | more than 12 years ago | (#3929317)

Heh,

C'mon, have you tried to tune Apache?
Process-per-connection is not a problem; you just have to keep the process pool big enough. There are some other tricks, but you could saturate a 100 Mbit network with a P3/500 and Apache as well.

Re:Maybe I'm missing something... (2)

Electrum (94638) | more than 12 years ago | (#3929367)

C'mon, have you tried to tune Apache? Process-per-connection is not a problem; you just have to keep the process pool big enough. There are some other tricks, but you could saturate a 100 Mbit network with a P3/500 and Apache as well.

I seriously doubt it, not in real-world conditions. When you include things like mod_php and mod_perl, those Apache processes get big. Our hosting servers (running Zeus) get 15-20 thousand hits a minute. That's ~333 hits per second. Say each client is downloading 50k images at 2k per second. That means you have 300+ new connections opening per second, each staying open for 25 seconds. So you need to be handling 7500+ concurrent connections.

Keep-alives and such will help with this, but a high-traffic HTTP server needs to handle at least 1000-2000 connections concurrently. Show me a P3/500 that is running 2000 Apache processes, and processing scripts, etc., and isn't dying. It just won't happen. The process-switching overhead alone will kill you. Read this page [kegel.com], then tell me that Apache's I/O model doesn't suck.
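The arithmetic above is an instance of Little's law (concurrent connections = arrival rate × time each connection stays open). A quick back-of-envelope check of the same figures:

```shell
#!/bin/sh
# Little's law: L = lambda * W
# lambda = new connections per second, W = seconds each transfer stays open
awk 'BEGIN {
  lambda = 20000 / 60     # 20,000 hits/minute -> ~333 connections/s
  w      = 50 / 2         # a 50 KB image at 2 KB/s takes 25 s
  printf "%d %d\n", lambda, lambda * w
}'
```

At 333 connections/s, each lasting 25 s, that works out to roughly 8,300 concurrent connections, in the same ballpark as the 7,500+ figure quoted above (which rounded the rate down to 300/s).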

Re:Maybe I'm missing something... (1)

dev0n (313063) | more than 12 years ago | (#3919682)

It may seem easy to just charge the customer... but as the technical support manager at a web hosting company, I can assure you that it isn't. :)

A few hypothetical situations:

* Customer cannot afford to pay for the bandwidth. Customer leaves for another provider and the hosting company has to eat the bill.

* Customer is getting hammered so hard that they affect other customers, resulting in a bunch of cranky customers with slow websites.

It doesn't matter whether you offer unlimited bandwidth or charge per byte/MB/GB/whatever; problems can still arise when someone's site gets slashdotted or someone leaks a password for a porn site. :)

Are you using.. (-1, Troll)

Anonymous Coward | more than 12 years ago | (#3918330)

.... a Beowolf cluster ?

Linux is dead. Long live Microsoft!


altqd (3, Informative)

schmaltz (70977) | more than 12 years ago | (#3918363)

Try altqd. I've only used it on OpenBSD, but with it you can selectively throttle bandwidth.
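For reference, a minimal altq.conf sketch of what this looks like with CBQ. The directive layout is from memory of altq.conf(5); the interface name and percentages are placeholders, so check the man page before use:

```
# Cap outbound HTTP at ~30% of a 10 Mbit link
interface fxp0 bandwidth 10M cbq
class cbq fxp0 root_class NULL pbandwidth 100
class cbq fxp0 def_class  root_class borrow pbandwidth 95 default
class cbq fxp0 http_class root_class pbandwidth 30
filter fxp0 http_class 0 0 0 80 6    # traffic sourced from TCP port 80
```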

Packeteer (2, Informative)

shave (16748) | more than 12 years ago | (#3918418)

Not an Apache-based solution, but check out Packeteer PacketShapers [packeteer.com], specifically the ISP models. They let you set SLAs by protocol, IP, etc., perform rate limiting, and do all other kinds of really cool stuff. Not exactly cheap, but extremely effective and simple to manage.

Re:Packeteer (1)

skodpc (38392) | more than 12 years ago | (#3932987)

It seems a little biased/uninformed to mention only the Packeteer product here, so I'll broaden the horizon a little. A solution that is comparably priced (still very expensive) and IMHO a better choice is the Allot NetEnforcer [allot.com]. There are also P-Cube [p-cube.com] and F5 [f5networks.com], but independent tests (and my own) make the Allot box the better bet. If you think Packeteer is easy to use, you should check them out yourself!

I'm glad you asked (1)

Blaze74 (523522) | more than 12 years ago | (#3918432)

I have used mod_bandwidth to a certain extent; it may have what you're looking for. I would love to hear about other solutions though.

do what the pros do (-1, Troll)

Anonymous Coward | more than 12 years ago | (#3918471)

Linux kernel talk [linux-kernel.tk] uses dynamic-response throttling to limit bandwidth on individual pages. This is probably more fine-grained control than you need, but it can be done at per-account levels.

Basically, you can specify a threshold for pipe saturation (% or #). You can set the niceness level of a page or domain, as well as a minimum QoS, so a popular site can be throttled back as the pipe gets saturated while allowing other sites to run unaffected.

One solution if you have mod_perl (5, Informative)

merlyn (9918) | more than 12 years ago | (#3918483)

The solution that the (defunct) etoys.com adopted for their site was based on code from one of my Perl columns [stonehenge.com] . My code is based on CPU throttling, but you can quickly change it to bytes sent using the same technology.

mod_bandwidth (2, Informative)

Gormless (30031) | more than 12 years ago | (#3918894)

I use mod_bandwidth [cohprog.com] at work to simulate 56k connections to the web server.

It works quite well and will throttle per-connection or per-virtualhost.
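As a sketch of what a per-virtualhost setup looks like (directive names as I remember them from the cohprog documentation for Apache 1.3; the rates are placeholders, so verify the exact semantics against the module's docs):

```
# httpd.conf, with mod_bandwidth loaded
BandWidthModule On
<VirtualHost 10.0.0.1>
    ServerName www.example.com
    BandWidth all 8192            # throttle matching clients to ~8 KB/s
    MinBandWidth all 2560         # but never starve a connection below 2.5 KB/s
    LargeFileLimit .mpg 500 4096  # .mpg files over 500 KB: 4 KB/s
</VirtualHost>
```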

my experiences (2)

LinuxGeek8 (184023) | more than 12 years ago | (#3919178)

I'm not very experienced with bandwidth limiting.
I did play with mod_throttle, and all it did was allow all traffic until the limit was reached, and then deny the next new connections. Hmm, not too great actually.

I'm planning to try out mod_bandwidth, but I dunno if it works differently.
Bad link (sorry, I don't feel for html now):
http://www.cohprog.com/v3/bandwidth/doc-en.html

I tried playing with QoS on Linux 2.4.
According to the documentation it's actually quite hard to get working reliably, because if you have a 10 Mbit connection, it will shape the traffic relative to that. But 10 Mbit is not always the same: with lots of lost packets it behaves differently than with a perfect connection.
In my experience I couldn't reliably limit the traffic on a 10 Mbit connection down to 80 kbit (almost 1% of the 10 Mbit); my cable connection of 16 kbyte/s could still get choked.
Maybe I should just get a 1 Mbit card and try again; the numbers might be better then.
Or hey, a 100 kbit card :-) should do perfectly.

Re:my experiences (-1, Offtopic)

pediddle (592795) | more than 12 years ago | (#3919501)

Bad link (sorry, I don't feel for html now):

The sad part is it took you longer to type this than to type <a href=""></a>.

Re:my experiences (2)

LinuxGeek8 (184023) | more than 12 years ago | (#3921619)

The sad part is it took you longer to type this than to type <a href=""></a>.

No, not really. I would have to type all the tags, which is quite annoying IMO.
And for 10 or 12 lines of text I'd need to format it by hand.

Squid as accelerator (2)

d-rock (113041) | more than 12 years ago | (#3919336)

You could look at using a combination of content acceleration and bandwidth pools in Squid [squid-cache.org]. I've used these features before, and they actually work pretty well for static content. You can tune the caching params to allow for large files, etc.

Derek
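The bandwidth-pool half of this lives in squid.conf as "delay pools." A minimal class-1 (single aggregate bucket) sketch, with placeholder domain and rates, per my reading of the Squid 2.x documentation:

```
# One class-1 pool: a single token bucket shared by all matched traffic
delay_pools 1
delay_class 1 1
acl throttled_sites dstdomain .example.com
delay_access 1 allow throttled_sites
delay_access 1 deny all
# restore rate / bucket size, both in bytes
delay_parameters 1 256000/512000
```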

You're missing the point (3, Insightful)

gibmichaels (465902) | more than 12 years ago | (#3919388)

I am having the same problem, and I think you guys are missing the point. He said 4GB an hour, which means he probably has an OC-3, OC-12, or Gigabit Ethernet connection.

"Blocking" network appliances such as Packeteer can't handle these high rates, and even if they had gigabit interfaces, they would only be able to do 600-800 Mbps on them.

None of the kernel QoS/queueing options I've seen allow for anything other than classifying traffic or "fair" queueing. None of this seems to help someone that wants to limit all webserver connections to 2mbps - everything here is expecting an IP range, ports, or something to distinguish by. What if I don't want to?

Apache needs real per-connection, per-user, and per-IP rate limiting. mod_throttle and everything else I've seen has to starve connections after they perform too well. How about something that hard-limits connections to 2 Mbps? I will pay for anything that can do that for Apache today...

Forgive me if I have overlooked the obvious...

Re:You're missing the point (2, Interesting)

satch89450 (186046) | more than 12 years ago | (#3920455)

Apache needs real per-connection, per-user, and per-IP rate limiting. mod_throttle and everything else I've seen has to starve connections after they perform too well. How about something that hard-limits connections to 2 Mbps? I will pay for anything that can do that for Apache today...

Then head for eBay, because a moderate-cost solution to your particular problem (limiting all web traffic to 2 megabits/s) is available for two bids and some cable work: buy two Ascend Pipeline 130s and run them back-to-back with a T1 cross-over cable. Another advantage of this solution is that your web server can be located near the webmaster, up to 5000 feet (without repeaters) from your network access point. Indeed, if you partition all of your services (mail, news, web server, ftp server) then no one service can completely swamp your connection.

Don't like using T1 routers? Then get a moderately powerful Intel computer, install enough Ethernet interfaces to satisfy your needs, load up a modern Linux distribution with a 2.4.18 kernel and IPTABLES, and set up rules that will traffic-limit the interface to which you connect your Web server. If you are like a lot of people who run multiple servers on the same box, the rules can "customize" the throttling by service. Not only that, but you can throttle by direction as well: incoming HTTP could be limited to 30 kilobits/s while outbound HTTP could be limited to 3 megabits/s -- that takes care of some of the problems with DoS attempts on HTTP. The same can be done for other services, such as FTP, mail, and IRC. The amount of control that IPTABLES provides is, well, interesting.

(Yes, I know that the *BSD people have something similar, but I know the IPTABLES stuff better and have seen it work.)

C'mon, people, this isn't all that hard to do if you think and are willing to put a little money where your wishes are.
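One caveat worth adding to the IPTABLES suggestion: the stock `limit` match counts packets, not bytes, so it is only a coarse proxy for bandwidth (at ~1500-byte packets, 100 packets/s is roughly 1.2 Mbit/s). A hedged sketch of the rule pair; the interface and rates are placeholders:

```shell
# Accept outbound HTTP up to a packet-rate budget, drop the excess.
iptables -A OUTPUT -o eth0 -p tcp --sport 80 \
         -m limit --limit 100/second --limit-burst 200 -j ACCEPT
iptables -A OUTPUT -o eth0 -p tcp --sport 80 -j DROP
```

For true byte-rate shaping on Linux, the `tc` traffic-control tools are the better fit.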

Re:You're missing the point (0)

Anonymous Coward | more than 12 years ago | (#3921096)

you want the TBF queue in linux QoS...
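Concretely, a token bucket filter on the outbound interface is a one-liner (classic LARTC-style example; the device and rates are placeholders):

```shell
# Cap egress on eth0 to 2 Mbit/s with a 32 KB burst
tc qdisc add dev eth0 root tbf rate 2mbit burst 32kb latency 400ms
```

Note this caps the whole interface; per-IP or per-connection limits need a classful qdisc (e.g. CBQ or HTB) plus filters.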

Re:You're missing the point (2)

Electrum (94638) | more than 12 years ago | (#3922109)

I am having the same problem, and I think you guys are missing the point. He said 4GB an hour, which means he probably has an OC-3, OC-12, or Gigabit Ethernet connection.

That's only 9.1 Mbps. T1 = 1.544 Mbps, T3 = 44.736 Mbps, OC-1 = 51.84 Mbps; OC-x = OC-1 * x.

heh... (2)

keepper (24317) | more than 12 years ago | (#3925372)

how much are you going to pay me then?

Using ipfw and dummynet on FreeBSD is the way I have gone at a VERY high-traffic hosting and colo company.

You can not only simulate a link of a certain speed, but also limit any IP that hits a certain destination to a max speed...

:-P
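For anyone curious, the dummynet setup is short (syntax per my recollection of ipfw(8); rates are placeholders). The `mask` option is what gives each client its own bucket:

```shell
# Send outbound HTTP through pipe 1
ipfw add pipe 1 tcp from any 80 to any out
# 2 Mbit/s per destination IP: the mask creates one dynamic pipe per client
ipfw pipe 1 config bw 2Mbit/s mask dst-ip 0xffffffff
```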

I'd love to help (0)

Anonymous Coward | more than 12 years ago | (#3919497)


You know I'd love to help... but I can't check out your server's particulars 'cos you didn't include a URL ;-)

FreeBSD Traffic Shaping (1)

DiSKiLLeR (17651) | more than 12 years ago | (#3919922)

Have you considered using FreeBSD Traffic Shaping? ("man ipfw").

Here [onlamp.com] is a story about a problem that sounds identical to yours: a hosting company (using a virtual host) has a customer who uses excessive bandwidth, and they wish to throttle it. After trying mod_throttle, they went with a better solution.

If you're not using FreeBSD, I am very surprised. Perhaps you should look into it.

D.

Use another web server? (0)

Anonymous Coward | more than 12 years ago | (#3920612)

thttpd does URL-based bandwidth throttling.

So, you could say "xyz.dom may only use 200kbps for *.mpg files".

thttpd is especially suited for serving static files; it is not an all-purpose machine like Apache.

You can find out more information at thttpd's homepage [acme.com] .
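The throttle table thttpd loads with `-t` is just pattern/rate pairs (format as I recall it from the thttpd man page; the patterns and rates here are examples):

```
# /etc/thttpd.throttle  --  url-pattern   max-bytes/sec
**.jpg|**.gif    50000
**.mpg           200000
```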

QoS with routers/switches can do the trick. (1)

Mordant (138460) | more than 12 years ago | (#3920657)

See http://www.cisco.com/warp/public/105/policevsshape.html for a good tutorial on traffic policing and traffic shaping, two ways of doing what you require with Cisco hardware.

CAR in Cisco Routers (1)

Sandman1971 (516283) | more than 12 years ago | (#3921042)

Cisco has a great IOS feature called CAR that can do exactly what you're asking for at the router level. You can rate-limit specific physical ports on the router (even using a schedule such as from 8am to 8pm, allow anything, from 8pm to 8am throttle to xxx kbytes/second).

This assumes you're not running virtual hosting (multiple domains sharing one IP address), in which case all customers on that IP/physical port would be affected by the CAR limitations you impose. It is possible with the amount of traffic you're talking about; just make sure that the puppy has a good processor and plenty of RAM.
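A representative CAR stanza, written from memory of the IOS rate-limit syntax (burst sizes are in bytes; all numbers and the interface are placeholders, so check the IOS docs before use):

```
access-list 101 permit tcp any eq www any
interface FastEthernet0/0
 rate-limit output access-group 101 2000000 375000 750000 conform-action transmit exceed-action drop
```

The schedule-based variant mentioned above would pair this with time-range ACLs.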

Use Zeus (4, Insightful)

Electrum (94638) | more than 12 years ago | (#3922070)

High traffic and Apache is almost an oxymoron. If you are running a high-traffic web hosting company, then you need to stop playing games and use Zeus. Apache has its strong points, like being free and open source, but that's about it. If Zeus were free, it wouldn't just be the best web server for UNIX platforms; it would also be the most popular.

You want Zeus because it is high performance (it doesn't use the toy process-per-connection model). It comes with an easy to use, powerful web based GUI. The GUI doesn't just hold your hand. It lets you set everything, and then will show you the exact lines that are changing in the config files.

It doesn't use the extremely complex config-file format that Apache uses. A good comparison is BIND and djbdns: do you want to deal with the incredibly complex BIND zone files, or the simple, one-record-per-line data files that djbdns uses? Zeus config files are one record per line, of the form "modules!throttle!enabled yes". It also comes with tools that let you do everything from scripts -- but only if you want to. Otherwise, use the GUI.

And speaking of throttling, Zeus does it correctly, unlike any other web server (at least any of the freely available UNIX ones, as that is all I am familiar with). It will let you set a limit on the number of users, or set a max number of bytes per second at the virtual server or subserver level. It doesn't serve some people at max speed and then start dropping connections (mod_throttle), or set the throttle speed at the beginning of the request and then start dropping connections (thttpd).

Virtual servers in Zeus actually make sense. There is no master server configuration like in Apache. Instead, you create one or more virtual servers. As such, each virtual server has its own separate configuration. Virtual servers can serve a single website, or any number of websites, via subservers. Subservers all share the configuration of the virtual server (kind of like Apache's mass virtual hosting only much better). No more restarting the server to add a site. Simply create the directory, and it starts serving the site.

There are plenty of other reasons why Zeus is superior to Apache, but the ones I listed should be enough to start considering it. No, I don't work for Zeus or own stock (don't think they have any) or anything like that. I'm just a satisfied customer.

For some things, Apache works just fine. But for anything high traffic that requires throttling or needs a flexible or scripted configuration, Zeus beats Apache hands down. It's worth every penny. Check it out. I doubt you'll be disappointed.

(subconscious message to Apache developers: stop being lazy and make Apache more like Zeus!)

Re:Use Zeus (1)

gibmichaels (465902) | more than 12 years ago | (#3925364)

Thanks for the one actually good reply to this. I wish there were a way to do it with Apache. I hope our developers' scripts mod well to work with Zeus ;) We have been looking at it for a long time, but we have to make a case for it at work.

Are the Slashdot readers this ignorant? Everyone else suggested QoS methods that would do nothing to help *per-user* connections. Are people really this obtuse? The first poster and I were very clear about what we wanted to do, and people came up with pretty lame stuff that was way off the mark.

The problem with the IT industry is that there are so many clueless people that have the "experience", and make good money. They dilute the talent and make it hard for a real wiz to make money anymore. How many people do you know that fall in the category: "Knows enough to be dangerous"?

Re:Use Zeus (1)

funky womble (518255) | more than 12 years ago | (#3929360)

The Squid [nlanr.net] + delay-pools [squid-cache.org] approach someone suggested may be viable as well (or there's Oops [paco.net], another web cache that can run in reverse mode and does bandwidth limitation; I usually prefer it over Squid but haven't tried pushing it particularly hard).

Zeus really is great; it has some wonderful clustering features too, and admin for the whole cluster can be done from one place. At the very least it's worth taking a look at the 30-day trial version to get an idea of how much work it would be to port the scripts across.

On a large site, you'll quite likely save the license cost by the decreased use of resources.

(AOLserver [aolserver.com] is a good server too; though it doesn't have the nice admin of Zeus, there's a lot it can do, and it is also very efficient. I'm not sure whether it can throttle bandwidth by itself, though.)

netstat gives the data (1)

thogard (43403) | more than 12 years ago | (#3926152)

How about a script that goes through the output of netstat every 5 minutes and adds entries to a table? If that table shows "interesting" traffic, nail it with something like ipfilter, or just set it to a null route. In the case of a dedicated hosted server, stick in another Ethernet card, route all the funny traffic to it, and let the switch or router set it to something slow. It's amazing what a perl script, a setuid wrapper for route, and a 10 Mb Ethernet card will do.
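A sketch of the counting step (sample input is inlined here for illustration; in practice you would pipe in `netstat -tn` and feed the heavy hitters to your null-route or ipfilter rule):

```shell
#!/bin/sh
# Count established connections per remote IP from netstat-style output,
# busiest first.
cat <<'EOF' | awk '$6 == "ESTABLISHED" { split($5, a, ":"); n[a[1]]++ }
                   END { for (ip in n) print n[ip], ip }' | sort -rn
tcp  0  0  10.0.0.1:80  192.168.1.5:3311  ESTABLISHED
tcp  0  0  10.0.0.1:80  192.168.1.5:3312  ESTABLISHED
tcp  0  0  10.0.0.1:80  192.168.1.9:4000  ESTABLISHED
EOF
```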

use ipfw then? (0)

Anonymous Coward | more than 12 years ago | (#3995966)

I would use ipfw to limit bandwidth on that port. Everything can be done with the pipe option.