
New Apache Module For Fending Off DoS Attacks

Hemos posted more than 11 years ago | from the beat-them-down dept.

The Internet

Network Dweebs Corporation writes "A new Apache DoS mod, called mod_dosevasive (short for DoS evasive maneuvers), is now available for Apache 1.3. This new module gives Apache the ability to deny (403) web page retrieval from clients requesting more than one or two pages per second, and helps protect bandwidth and system resources in the event of a single-system or distributed request-based DoS attack. This freely distributable, open-source mod can be found at http://www.networkdweebs.com/stuff/security.html"
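For reference, configuration would presumably look something like the sketch below; the directive names are taken from later mod_evasive releases and the thresholds described in the comments, so treat them as illustrative rather than the exact 1.x spelling.

# Illustrative httpd.conf excerpt -- directive names follow later
# mod_evasive releases and may differ in this early 1.x version
<IfModule mod_dosevasive.c>
DOSPageCount 2        # same-URI requests tolerated per interval
DOSPageInterval 1     # interval, in seconds
DOSBlockingPeriod 10  # how long to answer 403 once blacklisted
</IfModule>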

62 comments

just one question... (0, Troll)

pgilman (96092) | more than 11 years ago | (#4563349)


how's this going to affect my porn wgets? ;-)

Wget and pageview rate throttles (1)

yerricde (125198) | more than 11 years ago | (#4570009)

how's this going to affect my porn wgets?

From the GNU Wget help page:

GNU Wget 1.5.3.1, a non-interactive network retriever.

Usage: wget [OPTION]... [URL]...

Download:
-w, --wait=SECONDS : wait SECONDS between retrievals.

Thus, you can still wget as many images as you want. You'll just have to specify the -w option and (so you don't waste any online time) possibly read Slashdot while the image download proceeds.
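For instance (URL illustrative), something like this stays safely under any one-request-per-second throttle:

wget -w 2 -r http://www.example.com/pics/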

Re:just one question... (2)

macdaddy (38372) | more than 11 years ago | (#4578029)

This was exactly what I was thinking. How is this going to affect w3mir, WebWhacker (on Windows and Mac), or WebDevil (on Mac)?

DSO? (-1, Troll)

roly (576035) | more than 11 years ago | (#4563358)

It only has guides on compiling it into Apache statically; what about DSO?
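(For reference, the usual Apache 1.3 DSO route would be a single apxs call, assuming the module's source builds that way at all:

apxs -c -i -a mod_dosevasive.c

but the docs don't say whether that's supported.)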

Re:DSO? (0)

Anonymous Coward | more than 11 years ago | (#4576229)

Moderation Totals: Troll=1, Total=1.

How is this a troll? Clueless, maybe.

See you in M2.

Bandwidth still being used (2, Insightful)

Green Light (32766) | more than 11 years ago | (#4563378)

Handling all of those requests still takes processing time and bandwidth. What is needed is some type of hardware "filter" out front that can recognize a DoS attack and throw packets away.

Re:Bandwidth still being used (2, Insightful)

Gadzinka (256729) | more than 11 years ago | (#4563509)

Problem is, this approach doesn't solve any problems, creates new ones, and is a great DoS tool in itself.

This is the same problem as with all filters that automagically cut off all requests from a given IP/netblock after spotting some abuse.

Think of a big LAN behind a masquerading firewall, or a caching proxy for a large organization, where one person using it can block access to the site for everyone through these automatic defenses.

Funny thing is that this broken-by-design solution has been known for years, its flaws have been known for years, and yet every once in a while we see another tool using this scheme.

Robert

Re:Bandwidth still being used (2, Insightful)

Gadzinka (256729) | more than 11 years ago | (#4563570)

(yeah,
  1. write
  2. preview
  3. post
  4. think
  5. reply to you own post
;)

Think of a big LAN behind a masquerading firewall, or a caching proxy for a large organization, where one person using it can block access to the site for everyone through these automatic defenses.

Or think impostor sending requests with forged source IP.

What? TCP sequence numbers? Impossible to impersonate TCP session?

Think [bindview.com] again [coredump.cx].

Robert

Re:Bandwidth still being used (2, Informative)

Anonymous Coward | more than 11 years ago | (#4563663)

The website says: "Obviously, this module will not fend off attacks consuming all available bandwidth or more resources than are available to send 403's, but is very successful in typical flood attacks or cgi flood attacks."

This tool wasn't designed as an end-all be-all solution; it was designed as a starting point for cutting off extraneous requests (so you don't have a few thousand CGIs running on your server, or a few thousand page sends) and to provide a means of detection. You could easily take this code and have it talk to your firewalls to shut down the IP addresses being blacklisted. If you don't have decentralized content or at the very least a distributed design, you're going to be DoS'd regardless, but this tool can at least make it take more power to do it.

Re:Bandwidth still being used (1)

Kindaian (577374) | more than 11 years ago | (#4631828)

It is all a question of scale...

The hardware devices that you propose already exist. And they work to some extent.

The problem is bigger than most would think. What differentiates an attack from legitimate access? How do you detect an attack and start to counter it? Do you even have the bandwidth to sink the attacking packets into a bit bucket?

And finally, how much money are you investing in DoS protection...

The Apache module has, as usual, a very interesting cost/effectiveness ratio... [even if there are other, more efficient solutions to the DoS problem, they are also very expensive].

Cheers...

Re:Bandwidth still being used (0)

Anonymous Coward | more than 11 years ago | (#4641836)

But this is easy to install and free - hardware wouldn't be - and we need as many web admins as possible running this kind of thing to squash DoS and DDoS attacks.

Re:Bandwidth still being used (0)

Anonymous Coward | more than 11 years ago | (#4648672)

Yet those packets are still getting to this "filter" and are still eating bandwidth. This is a fundamental problem of the Internet and the only way to stop it is to block upstream. But again, bandwidth is still being used up to the point before the block.


How clever is it? (2, Insightful)

cilix (538057) | more than 11 years ago | (#4563486)

Does anyone know how clever it is? There are several things that I suppose you could do to make sure that this doesn't get in the way of normal browsing, but still catches DoS attacks. What sort of things does this module include to work intelligently? How tunable is it?

One thing that jumps to mind is that you could have some kind of ratio between images and HTML which has to be adhered to for any x-second period. This would hopefully mean that going to webpages with lots of images (which are all requested really quickly) wouldn't cause any problems. Also, more than one request can be made in a single HTTP session (I think - I don't really know anything about this), so I guess you could make use of that to assess whether the traffic fits the normal profile of a websurfer for that particular site.

Also, is there anything you can do to ensure that several people behind a NATing firewall all surfing to the same site don't trip the anti-DOS features?

Just thinking while I type really...

Re:How clever is it? (4, Insightful)

The Whinger (255233) | more than 11 years ago | (#4564006)

"Also, is there anything you can do to ensure that several people behind a NATing firewall all surfing to the same site don't trip the anti-DOS features?"

Whilst not totally impossible ... the chances of this are SMALL. Same URI same minute ... possible, same URI same second ... rare I guess ...

Re:How clever is it? (0)

Anonymous Coward | more than 11 years ago | (#4599275)

All 25 million AOL users go through one giant NAT. Now think again.

Re:How clever is it? (0)

Anonymous Coward | more than 11 years ago | (#4614929)

Um, Fuck AOL?

Re:How clever is it? (1)

rilian4 (591569) | more than 11 years ago | (#4629583)

here's one realistic scenario that could be seen incorrectly as a DoS attack...

Setup: You are teaching classes to a lab full (let's say 30, for the sake of discussion) of kids in a school setting (gee, ya wonder where I work?). Let's say you instruct all your kids to go to some site with material for the astronomy class you teach. Let's assume that all the kids do as they are told and all immediately type in the URL you gave them and request a page.

Let's assume your school district is behind a firewall that also uses a NAT/Proxy setup. Therefore all the requests are coming en masse from one "real" IP. Wouldn't this possibly be deemed as a DoS attack by this plugin?

.....

Not necessarily (0)

Anonymous Coward | more than 11 years ago | (#4631692)

It wouldn't be a problem if the proxy was a caching proxy, i.e. only the first hit would propagate through the proxy.

Re:How clever is it? (1)

Luke-Jr (574047) | more than 11 years ago | (#4640141)

Not if the site is linked to from Slashdot... But then again, the site will be /.ed soon enough so it probably doesn't matter if it appears to happen a few seconds early....

Re:How clever is it? (2)

elvum (9344) | more than 11 years ago | (#4626911)

One thing that jumps to mind is that you could have some kind of ratio between images and html which has to be adhered to for any x second period.

lynx users wouldn't be too impressed.

The "why" behind this.. (5, Informative)

GigsVT (208848) | more than 11 years ago | (#4563583)

On the securityfocus incidents list, there was a guy that ran a little web site that was being DoSed by a competitor in a strange way. The much higher traffic competitor had a bunch of 1 pixel by 1 pixel frames and each one loaded a copy of the little guy's site. The effect was he was using his own users to DoS his competition.

People suggested a JavaScript popup telling those users the truth about what was going on, or an HTTP redirect to a very large file on the big guy's site, but Jonathan A. Zdziarski at the site linked above decided to write this patch as an ad-hoc solution.

I'd be very careful with this patch in production, as it is ad-hoc and not tested very much at all.

Re:The "why" behind this.. (2, Interesting)

dondelelcaro (81997) | more than 11 years ago | (#4565033)

The much higher traffic competitor had a bunch of 1 pixel by 1 pixel frames and each one loaded a copy of the little guy's site. The effect was he was using his own users to DoS his competition.
One wonders why he didn't just use some javascript to break out of the frame jail, and then explain that users had been redirected to foo because bar was loading foo's pages? [Granted, it would have been caught eventually, but for the time being, legitimate traffic might win you a few customers...]

Re:The "why" behind this.. (3, Insightful)

HiredMan (5546) | more than 11 years ago | (#4568448)

One wonders why he didn't just use some javascript to break out of the frame jail, and then explain that users had been redirected to foo because bar was loading foo's pages?


Or break out and redirect to a goatse-esque page or something similar... Since they're viewing his competitor's site it would appear to be his content right?


=tkk

Referer check revenge? (1)

phorm (591458) | more than 11 years ago | (#4573548)

How about just something with a referer check? If the referer is the other guy's site, do a: window.open("http://www.somedirtypornsite.com", "_top");

Re:The "why" behind this.. (1)

ignorant_newbie (104175) | more than 11 years ago | (#4567014)

I'll go read the SecurityFocus list, but I'm wondering why he didn't fix this by checking the Referer tag?

Re:The "why" behind this.. (1)

GigsVT (208848) | more than 11 years ago | (#4567180)

That was one suggestion, but it would still cause the web server to have to handle the requests.

Re:The "why" behind this.. (1)

ignorant_newbie (104175) | more than 11 years ago | (#4567241)

Right, but since this functionality is already there, I thought it might be lighter than the new mod - which has to maintain a list of requests (either in memory or on the fs) and then check this list every time a request comes in... I wonder what kind of I/O is involved here. But that question is better answered in the source code, so off I go...

Re:The "why" behind this.. (1)

GigsVT (208848) | more than 11 years ago | (#4567628)

Well, this patch is a little more generalized too; it throttles any IP that accesses the site too quickly... Something like this would probably have throttled Nimda to some extent also, and misbehaved robots that really slam your site.

Re:The "why" behind this.. (0)

Anonymous Coward | more than 11 years ago | (#4579996)

Of course, the best solution to this is to put whatever it is he's trying to load into the 1x1 frame behind password protection, so the web user just gets a bunch of user/pass pop-ups.

If it's to his homepage, maybe he can redirect based on referrer (not perfect, but it might help).

simple (2, Interesting)

krappie (172561) | more than 11 years ago | (#4629950)

I work as tech support for a webhosting company. I see things like this all the time. People tend to think it's impossible to block because it's not from any one specific IP address; the requests are coming from all over. People need to learn the awesome power of mod_rewrite.

RewriteEngine on
RewriteCond %{HTTP_REFERER} ^http://(.+\.)*bigguysite.com/ [NC]
RewriteRule /* - [F]

I've also seen people who had bad domain names pointed at their IPs, where you can check HTTP_HOST. I've seen recursive download programs totally crush webservers; mod_rewrite can check HTTP_USER_AGENT for that. Of course, download programs could always change the specified user agent, which is, I guess, where this Apache module could come in handy. Good idea..
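For the user-agent case, a sketch along the same lines (the agent strings are only placeholders for whatever tool is hammering you):

RewriteEngine on
# refuse known mirroring tools by User-Agent; strings are illustrative
RewriteCond %{HTTP_USER_AGENT} (Wget|WebZIP|Teleport) [NC]
RewriteRule /* - [F]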

Re:simple (1)

krappie (172561) | more than 11 years ago | (#4630008)

Other examples.. I've seen one random picture on a guy's server get linked to from thehun.net. It ended up getting over 2 million requests a day and totally killed his server.

I also like to keep any interesting multimedia files up on a shared directory accessible from apache running on my home computer. Just so any of my friends can browse through and such. Eventually, I got listed on some warez search engines...

RewriteEngine on
RewriteCond %{HTTP_REFERER} ^http://(.+\.)*warezsite.com/ [NC]
RewriteRule /* http://goatse.cx/ [L,R]

Teehee. I got removed pretty quickly.

In the case of the 1x1 frames on every page... I wonder what would happen if you redirected them back to the original page, which would have a frame that would redirect them back to the original page... I guess browsers probably protect against recursive frames.

You could at least redirect their browsers back to the most resource intensive page or script on the big guy's site, at least doubling his resources while barely using yours. Ah.. sweet justice.

I like someone else's suggestion about frame-busting JavaScript; that'd be pretty interesting and would definitely get that frame removed right away. I sometimes wish my websites got these kinds of attacks, I'd have so much fun :D

Re:simple (1)

GigsVT (208848) | more than 11 years ago | (#4630050)

I guess browsers probably protect against recursive frames.

Sorta, though not deliberately; they are limited to something between 4-6 levels of nesting, I believe... Same with nested tables.

Too slow/too fast. (3, Insightful)

perlyking (198166) | more than 11 years ago | (#4563732)

"This new module gives Apache the ability to deny (403) web page retrieval from clients requesting more than one or two pages per second."

I can easily request a couple of pages a second if I'm spawning off links to read in the background. On the other hand, wouldn't an automated attack be requesting much faster than 2 per second?

Re:Too slow/too fast. (1)

The Whinger (255233) | more than 11 years ago | (#4563954)

"I can easily request a couple of pages a second, if i'm spawning off links to read in the background. On the other hand wouldnt an automated attack be requesting much faster than 2 per second?"

Why would you spawn off links to the same page? Do you read the same content more than once? The key to the article is "the SAME page in the 2 second period".

Re:Too slow/too fast. (2)

GoRK (10018) | more than 11 years ago | (#4565249)

Yeah, if the page is a script that gives out different content based on some parameter, you could easily do this. I would imagine that the module lets you *configure it*.. Gee, imagine being able to change a parameter?!?!

~GoRK

Re:Too slow/too fast. (-1, Flamebait)

Anonymous Coward | more than 11 years ago | (#4591676)

Gee, also imagine being able to change the code since it is open source.

A possible problem? (3, Interesting)

n-baxley (103975) | more than 11 years ago | (#4563793)

I'm sure they've thought of this, but will this affect frame pages, where the browser requests multiple pages at the same time? How about scripting and stylesheet includes, which are made as separate requests, usually right on the heels of the original page? I hope they've handled this. It seems like the number should be set higher. Maybe 10 requests a second is a better point. That's probably adjustable though. I suppose I should RTFM.

Re:A possible problem? (1, Informative)

Anonymous Coward | more than 11 years ago | (#4563875)

It's not based on the # of requests; it's based on the # of requests to the same URI. It'll only blacklist you if you request the same file more than twice per second. Once you're blacklisted you can't retrieve ANY files for 10 seconds (or longer if you keep trying to retrieve files), but the only way you're going to get on the blacklist is if all those frames were for the same page or script.

Re:A possible problem? (2, Insightful)

spacefight (577141) | more than 11 years ago | (#4577015)

if all those frames were for the same page or script.
Some silly designers use multiple frames that all load the same blank page, e.g. blank.html. These would all be busted. I do not think that you should use this new module in production, do you?

Misunderstanding about Module (5, Informative)

NetworkDweebs (621769) | more than 11 years ago | (#4563987)

Hi there,

Just wanted to clear up a bit of misunderstanding about this module. First off, please forgive me for screwing up the story submission. What it *should* have said was "...This new module gives Apache the ability to deny (403) web page retrieval from clients requesting THE SAME FILES more than once or twice per second...". That's the way this tool works; if you request the same file more than once or twice per second, it adds you to a blacklist which prevents you from getting any web pages for 10 seconds; if you try and request more pages, it adds to that 10 seconds.

Second, I'd like to address the idea that we designed this as the "ultimate solution to DoSes". This tool should help in the event of your average DoS attack; however, to be successful against heavy distributed attacks, you'll need an infrastructure capable of handling such an attack. A web server can only handle so many 403's before it'll stop servicing valid requests (but the # of 403's it can handle is greater than the # of full web page or script retrievals). It's our hope that anyone serious enough about circumventing a DoS attack will also have a distributed model and decentralized content, along with a network built for resisting DoS attacks.

This tool is not only useful for providing some initial frontline defense, but can (and should) also be adapted to talk directly to a company's border routers or firewalls so that the blacklisted IPs can be handled before any more requests get to the server; i.e. it's a great detection tool for web-based DoS attacks.

Anyhow, please enjoy the tool, and I'd be very interested in hearing what kind of private adaptations people have made to it to talk to other equipment on the network.

Re:Misunderstanding about Module (1, Interesting)

hfastedge (542013) | more than 11 years ago | (#4565340)

Here's a simple hack to defeat your service: simply pick 10 or so files on the server, and use your scripts to randomly fetch all 10... or 100, or 1000.

Re:Misunderstanding about Module (3, Informative)

NetworkDweebs (621769) | more than 11 years ago | (#4567375)

Funny you should mention that. We released version 1.3 on the site, which now has a separate threshold for total hits per child per second. The default is 50 objects per child per second. Even if you have a large site and a fast client connection, a browser is going to open up four or more concurrent connections, splitting the total number of objects up. Nevertheless, if 50 is still too low you can always adjust it.

What about wget-style attacks? (2)

hearingaid (216439) | more than 11 years ago | (#4579021)

Run a wget -r type of attack (only dump the resulting files into /dev/null). This module would seem to have no effect.
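(Roughly, swapping the /dev/null trick for wget's own --delete-after flag, which discards each file once fetched - target URL illustrative:

wget -r --delete-after http://victim.example.com/

each URI is requested only once, so the same-URI threshold never trips; the per-child total-hits threshold added in version 1.3 is the counter to this.)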

What I'd like to see... (1)

myowntrueself (607117) | more than 11 years ago | (#4597114)

is blocking anyone who requests Nimda/CodeRed-related URLs.

I currently use a scheme where I created the appropriate directories in my web document tree (/scripts, for example) and then set up 'deny to all' rules for them.

This way, the Apache server doesn't even bother with a filesystem seek to tell that the file isn't there; it just denies it.
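The 'deny to all' rule itself is ordinary Apache access control; one way to express it (path illustrative):

<Location /scripts>
Order deny,allow
Deny from all
</Location>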

Dropping packets would be even better.

Re:Misunderstanding about Module (1)

SEWilco (27983) | more than 11 years ago | (#4630952)

"...deny (403) web page retrieval from clients requesting THE SAME FILES more than once or twice per second."

If your logo is at the top and the bottom of the page, that's two references within a second. But if the browser is caching images, there will only be one request to the web server. So in practice that shouldn't be a problem...unless the browser checks if the image file changed for the second reference?

Border Router Blocking (1)

rtp (49744) | more than 11 years ago | (#4648186)

If you're looking for an easy way to automate blocking at the border router, take a look at:

http://www.ipblocker.org [ipblocker.org]

With a simple command line call to a Perl script you can have the ACL on a Cisco router updated to deny traffic from the offending user.

But thats not the real problem right? (1)

mary_will_grow (466638) | more than 11 years ago | (#4564225)

Now, I'm going to start off by admitting I have never taken any classes on TCP/IP and only have a user's level of understanding. I can see that an attack which makes a web server dump its data to you so often that it can't keep up with everyone else would be effective if the server doesn't have any sort of client balancing. But I thought that a DoS proper involved attacking the connection at a lower level, where you fill the TCP handler's queue with requests that never get past a certain point, so the server has a ton of socket connections waiting to be completed, handshaked, whatever happens (so many, in fact, that its queue is completely full and it cannot open a socket connection to any more users even to give the "403" error message). That's why it's called "Denial of Service": valid clients don't even get a SLOW response from the server; they get nothing, because their TCP/IP connections are never even opened. Isn't that right?

Re:But thats not the real problem right? (2, Informative)

NetworkDweebs (621769) | more than 11 years ago | (#4564350)

There are many different types of DoS attacks, and the kind you're describing has other methods of circumvention. The type of DoS this module was designed to fight/detect is a request-based attack in which a website is flooded with requests to increase bandwidth usage and system load.

Re:But thats not the real problem right? (0)

Anonymous Coward | more than 11 years ago | (#4582614)

This sounds like a SYN attack that leaves incoming TCP setups half-open.

Some (most?) operating systems have methods to counteract the effects of half-open connections.

FreeBSD 4.4+ also has an HTTP accept filter (accf_http) that makes the kernel handle the TCP connection, and can wait for a full HTTP request to be made before passing it on to user-space Apache.

This helps prevent a half-open attack from chewing up process table entries and other valuable resources.

Re:But thats not the real problem right? (2)

jonadab (583620) | more than 11 years ago | (#4627511)

To stop a non-bandwidth bogus-request attack, you just turn on syncookies and that's that. This module is designed to stop a different kind of attack, wherein the clients are completing entire transactions too many times and thus consuming your bandwidth. There are other types of DOS attacks too -- reflection attacks (where you get a ton of ACK packets from all over the internet, using up all your bandwidth), for example, have to be stopped at the router level upstream, which prevents the server from completing any transactions as a client (over the internet; it can still get through over the LAN, of course).
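For reference, on Linux syncookies are a one-line kernel setting (e.g. in /etc/sysctl.conf); other OSes have their own equivalents:

net.ipv4.tcp_syncookies = 1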

Terrible Idea... What about NAT? (0)

Anonymous Coward | more than 11 years ago | (#4565399)

What if 3-4 people behind the same NAT wanted to surf to a site???

This would deny them.

Re:Terrible Idea... What about NAT? (1)

NetworkDweebs (621769) | more than 11 years ago | (#4565973)

Remember, these rules are instantiated on a per-listener basis, so you have to have not only the same IP requesting the same page but the same client, since just about any current browser uses keep-alive. So if three people hit the server simultaneously, they would get 3 different listeners. The only way it'd deny them is if they hit the server one after the other, after each client disconnected, and each happened to hit the same child. This is extremely unlikely.

Re:Terrible Idea... What about NAT? (1)

NetworkDweebs (621769) | more than 11 years ago | (#4566008)

Oh, and that would have to happen within 1 second's time. Apache's keep-alive defaults to 15 seconds, so unless each browser is requesting "Connection: close", it's going to be impossible.
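For reference, the defaults in question, as shipped in a stock Apache 1.3 httpd.conf:

KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 15

A well-behaved browser therefore holds one child for its whole page load, which is what makes the per-listener counting described above workable.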

Re:Terrible Idea... What about NAT? (0)

Anonymous Coward | more than 11 years ago | (#4626445)

But if the intent is to do a DoS, then of course the DoS-ing client is going to close the connection after each request.

After all, the whole point of keepalive is to improve performance. A DoSer doesn't want improved performance.

Speaking of Security-related Apache Modules (1, Interesting)

Anonymous Coward | more than 11 years ago | (#4569812)

A while back I wrote an Apache module similar to this one (mod_antihak), but it protected against CodeRed bandwidth consumption. It also had a slightly more brutal method of blocking offenders: ipchains :) There are inherent problems with that though; the 403 would be the way I would go too if I did it all again.

i sure hope (0)

Anonymous Coward | more than 11 years ago | (#4571844)

that it allows HTTP_REFERER values of /. through

At last!!! (0)

Anonymous Coward | more than 11 years ago | (#4575583)

The end of frames, since frames would violate the one-or-two-pages-per-second limit.

This is cool, but, mod_bandwidth already does it (2)

hillct (230132) | more than 11 years ago | (#4578398)

I'm not sure how this is any different from the feature of mod_bandwidth that limits the number of requests per user per second. I'm definitely going to test it out, but it's unclear how this is any different, except that it doesn't carry all the other overhead functionality of mod_bandwidth.

--CTH

mod_slashdot? (1)

squidinkcalligraphy (558677) | more than 11 years ago | (#4636070)

Perhaps one of these is needed to ward off the /. effect? I suppose it would be damn easy to do; it just needs to be in the config by default.

DOS and Design of Websites (2)

hackus (159037) | more than 11 years ago | (#4639846)

If you design web sites pay attention.

So many designers that I ran into in my travels still don't understand that when you put Flash animations (which I can't stand 99% of the time), large PNG files, or complex front pages on a site, especially public pages, you increase your bandwidth costs.

Seems very simple to most. I am still surprised how many companies redesign sites with gaudy graphics all over the place, and then find, ALL OF A SUDDEN after deployment, that their website goes down.

I can remember many customers I used to deal with, who had fixed contracts for hosting yet maintained their own content, calling up and claiming our server was slow, or down, or experiencing technical difficulties.

I would usually say: "OH REALLY, I don't see any problems with the server per se. Did you happen to modify anything lately on the site?"

"Yes" they would reply: "We just put a flash movie movie on the front page..."

Immediately I knew what the problem is, they blew thier bandwidth budget. At times I would see companies quadruple the size of thier front pages, which reduces by about a quarter the number of users they can support at quality page download times. Especially if they are close to thier bandwidth limit as IS without the new pages.

The bigger the pages, the better the DOS or the easier the DOS is too perform.

In my design philosophy for my companies site, you can't get access to big pages without signing in first. If you sign in a zillion times at one or more pages, obviously that isn't normal behavior, and the software on my site is intelligent enough to figure that pout and disables the login, which then points you to a 2K Error page.

In any case, if you are trying to protect your website and you don't want to resort to highly technical and esoteric methods to minimize DoS attacks, you might want to start with the design of the website content.

The lighter the weight of the pages, the harder it is for an individual to amass enough machines to prevent legitimate users from using your site.

IMHO, Flash plugins, applets, and other such features should be available only to registered users, and logins strictly controlled.

Hack