
Botnet Targets Web Sites With Junk SSL Connections

kdawson posted more than 4 years ago | from the look-over-there dept.

Security 64

angry tapir writes "More than 300 Web sites are being pestered by infected computers that are part of the Pushdo botnet. The FBI, Twitter, and PayPal are among the sites being hit, although it doesn't appear the attacks are designed to knock the sites offline. Pushdo appears to have been recently updated to cause computers infected with it to make SSL connections to various Web sites — the bots start to create an SSL connection, disconnect, and then repeat." SecureWorks's Joe Stewart theorizes that this behavior is designed to obscure Pushdo's command and control in a flurry of bogus SSL traffic.
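
For admins wondering whether their own site is on the receiving end, a rough sketch of how the aborted handshakes might show up on the wire (this assumes a Linux box with tshark available; 'eth0' is a placeholder, and the ssl.* field names and -R flag match tshark builds of this era, with newer builds using tls.* and -Y):

# Count ClientHello-ish handshake attempts to port 443 per source address.
tshark -i eth0 -f "tcp port 443" -R "ssl.handshake.type == 1" -T fields -e ip.src \
  | sort | uniq -c | sort -rn | head -20
# Sources showing many hellos but no application-data records afterwards are
# candidates for the junk traffic; truly malformed hellos may not decode at all,
# in which case counting bare SYNs to 443 with tcpdump is the cruder fallback.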

oh shit! (-1, Offtopic)

Anonymous Coward | more than 4 years ago | (#30991010)

I need to digg this fast! If only there was a digg button!!!!

Re:oh shit! (-1, Redundant)

Anonymous Coward | more than 4 years ago | (#30992002)

If you RTFA, you can Digg that directly.

Adblock to the Rescue (-1)

Anonymous Coward | more than 4 years ago | (#30992164)

i know, right! i mean shit, if only there was a twitter button, and maybe a facebook button too. Got to be trendy after all! I need those fast too. Oh shit, that's right, I Adblocked them!!

"we don't really like the PAINFUL FUCKING TRUTH, so you must wait a little bit before using this resource; please try again later."

nginx to the rescue? (-1, Troll)

Anonymous Coward | more than 4 years ago | (#30991042)

Sounds like they need to use a web server that can easily handle such a load, even if they're junk requests. What is that web server? Why, it's nginx [nginx.org] of course!

Re:nginx to the rescue? (1, Insightful)

JWSmythe (446288) | more than 4 years ago | (#30991336)

    It sounds like some pretty old-fashioned DoS/DDoS attacks. What's so fancy about initiating multiple requests and leaving them hanging? Folks have been tuning up their http servers to handle this for years. Why can't they tune up their https side too, other than the admins being lazy or inept?

Re:nginx to the rescue? (2, Insightful)

lastomega7 (1060398) | more than 4 years ago | (#30991356)

I don't think the point is denial of service. If all the nodes on the botnet send out requests that are indistinguishable from a command from the botnet controller, it makes for a nice cloaking shield for the command center.

Re:nginx to the rescue? (4, Insightful)

JWSmythe (446288) | more than 4 years ago | (#30991490)

    Not really.

    I've had to parse logs for similar things. Thousands of requests hit a particular exploitable web page, but only one or two IPs are sending further information. It's easy to trim down the list of candidates and find who the real problem is.

    That's what the feds do in any investigation. They have a broad list of suspects. They eliminate folks until they have their persons of interest, and then down to the guy who they'll be convicting.

ftfy (1, Insightful)

Anonymous Coward | more than 4 years ago | (#30991582)

    Not really.

    I've had to parse logs for similar things. Thousands of requests hit a particular exploitable web page, but only one or two IPs are sending further information. It's easy to trim down the list of candidates and find who the real problem is.

    That's what the feds do in any investigation. They have a broad list of suspects. They eliminate folks until they have their persons of interest, and then down to the guy who they'll be charging.

Re:ftfy (1)

JWSmythe (446288) | more than 4 years ago | (#30991626)

    Ideally, if they have all the evidence, and they did their jobs right (proper investigative techniques, properly mirandizing the suspect(s), properly requesting subpoenas, and proper handling of the evidence), it will lead to a conviction. I was just leaving out all that pesky middle dialogue. I skipped over plenty of other steps too.

Re:ftfy (1)

pjt33 (739471) | more than 4 years ago | (#30993514)

It's a fairly important distinction though that it's a jury of peers who do the convicting, not the feds.

Re:ftfy (1)

JWSmythe (446288) | more than 4 years ago | (#30994270)

    Well, the feds decide who to prosecute, and therefore who they intend to get a conviction against. It usually isn't "Let's charge a bunch of people and see who we can get a conviction on." At least not for a real investigative organization. Folks like the RIAA use the blanket "let's sue everyone, and let the courts sort them out" method. The RIAA is not a federal or law enforcement agency, nor are they looking for a conviction, just a judgement against someone (or everyone).

    [Note: I just use the RIAA as an example. There was no malice intended towards any organization in particular. There are plenty of organizations that do that method in civil court daily.]

Re:ftfy (1)

Runaway1956 (1322357) | more than 4 years ago | (#30995086)

You're right, and you're not quite right. Sit in any courtroom in America, and watch the proceedings for a day or six. Many, many people WAIVE their rights to a trial by jury, and accept a plea bargain.

In some cases - that's bad, because the guy is not guilty, but is either intimidated into accepting the plea, or just can't afford good representation.

In a lot more cases, it's bad, because real dirt bags get sweet deals, and they'll be back on the streets real soon, victimizing more people.

Some cases, I guess it's good. Some people get busted, they KNOW they have done wrong, and decide to accept the plea - and it saves us a lot of money.

But, for the most part, when the feds get hold of you, you're pretty sure of a conviction, whether by jury, or by plea. We just don't see a LOT of people walking after the FBI charges them.

Re:ftfy (0)

Anonymous Coward | more than 4 years ago | (#30992258)

89% of all US Federal defendants are convicted. In fact, the majority of them plead guilty without ever going to trial, 'encouraged' down this path by the Feds' tactic of grossly piling on charges ("Oh, you DDOS'ed a web site? That'll be 9 counts of Unauthorized Use of a Computer, 3 counts of Defrauding a Router, 17 counts of Packet Forgery, etc, etc, ad nauseam -- enough to put you away for several lifetimes. But if you plead guilty, we'll let you off with only 10 years.")

From TFA (0)

JoshuaZ (1134087) | more than 4 years ago | (#30991092)

The strange traffic targeting the Web sites--including sites for the CIA, FBI, PayPal, Yahoo, and Twitter, according to a list at the Shadow Server Foundation--was not enough to cause any outages or slowdowns, said Joe Stewart, director of malware research at SecureWorks.

So this isn't a really big deal. I'm almost tempted to praise the botnet creators for coming up with a good solution to the problem of obscuring command and control. It is a good solution to a difficult problem. (Good here being used in the sense of a good solution to a puzzle or engineering problem.)

Re:From TFA (1)

biryokumaru (822262) | more than 4 years ago | (#30991316)

Good here being used in the sense of good solution to a puzzle or engineering problem

Don't worry. If it doesn't have anything to do with patent laws or copyright, most Slashdotters have fairly lax moral standards. Especially when it comes to computers.

Re:From TFA (3, Funny)

siloko (1133863) | more than 4 years ago | (#30991560)

most Slashdotters have fairly lax moral standards. Especially when it comes to computers.

Yes, essentially we are all evil . . . now where's that kitten? Er, sorry, I meant robotic, remote-controllable kitten with embedded linux firmware!?

Re:From TFA (2, Insightful)

Runaway1956 (1322357) | more than 4 years ago | (#30995152)

Moral standards? What are those? God, I hate obscure standards!!

Oh, wait - didn't Microsoft Embrace, Extend, and Extinguish moral standards years back? It's hard to remember . . .

Re:From TFA (3, Insightful)

Mr. Freeman (933986) | more than 4 years ago | (#30991558)

Why is this such a good solution? Have people forgotten how to parse logs? Shouldn't be that difficult to differentiate a connect/disconnect from a connect, send real data, disconnect.

Re:From TFA (4, Informative)

scdeimos (632778) | more than 4 years ago | (#30991656)

I tend to agree with you that this sort of thing should still be relatively easy to pick up in logs - on proxies as well as the border routers. A lot of people are probably forgetting that SSL through proxies still needs a CONNECT originserver:443 HTTP/1.x request, which gets logged, even if all of the traffic on the tunnel after that is encrypted.
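
As a rough illustration of that point about proxy logs (a sketch; field positions assume Squid's default native access.log format, and the byte threshold is arbitrary), the near-empty tunnels stand out:

# List clients whose CONNECT tunnels moved fewer than 1 KB -- handshake-and-hang-up behavior.
awk '$6 == "CONNECT" && $5 < 1024 {print $3}' /var/log/squid/access.log \
  | sort | uniq -c | sort -rn | head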

Re:From TFA (2, Insightful)

Anonymous Coward | more than 4 years ago | (#30992790)

Dude, like maybe it doesn't NEED to send anything.
Maybe like, the connections themselves ARE the data.
Whoooaaa.

Re:From TFA (1)

EasyTarget (43516) | more than 4 years ago | (#30993532)

the connections themselves ARE the data

Very good point.

Re:From TFA (4, Interesting)

fm6 (162816) | more than 4 years ago | (#30991812)

Some of the malware I've encountered lately (I've got one system unusable until I get around to reinstalling the OS) is very sophisticated indeed. I would admire the designers, if I didn't so badly want them dead.

Does anybody else miss script kiddies?

What it probably is? (3, Interesting)

Anonymous Coward | more than 4 years ago | (#30991104)

Probably one of a few things
1) They are looking for a particular vuln to make their bot bigger.
2) They are just testing a DOS.
3) They are actually conducting a DOS.
4) They are trying to make some sort of name for themselves.
5) Combination of the above.

My money is mostly on 1, and some sort of bug in the program causing it to spam the same boxes over and over.

SSL traffic (2, Interesting)

shird (566377) | more than 4 years ago | (#30991134)

Do they realise that SSL traffic causes a higher load on the server than a regular request? This would be an indication it is trying to bring the site down.

I don't see how sending packets to 'major websites' disguises the real communications in any way. Just filter those requests. The more 'major' the web site for the garbage packets, the easier it is to distinguish them from the real packets.

Re:SSL traffic (4, Informative)

drinkypoo (153816) | more than 4 years ago | (#30991238)

Do they realise that SSL traffic causes a higher load on the server than a regular request? This would be an indication it is trying to bring the site down.

Requesting an SSL connection and then never making it takes a lot less load than actually retrieving a page. It doesn't really suggest a takedown attempt, for which there are superior strategies.

Re:SSL traffic (5, Interesting)

girlintraining (1395911) | more than 4 years ago | (#30991408)

Do they realise that SSL traffic causes a higher load on the server than a regular request? This would be an indication it is trying to bring the site down.

Yes, they do. They also don't care. Most botnet authors are self-taught, or only college educated, and are not experienced developers. They don't know how to obscure their creation's activity, because they lack a full understanding of network security. Which is understandable: that isn't in the SDK documentation and example code. Because they lack the skillset necessary to create a protocol resistant to traffic analysis, they go the other way: flood all the connections and hope those analyzing the logs decide it's not worth the effort to find the needle in the haystack. They know it can be tracked -- they just don't feel it's worth the effort to learn how to do it right, when doing it wrong gets them to payday faster and with only a minute amount of additional risk.

Re:SSL traffic (3, Interesting)

Anonymous Coward | more than 4 years ago | (#30991690)

[Citation needed] The guy that took over Torpig has some very nice things to say about the quality of the logging info, which suggests the complete opposite: botnet developers are damn good and produce a better product than most code-monkeys.

Re:SSL traffic (3, Funny)

JamesP (688957) | more than 4 years ago | (#30993306)

I would bet on them not being CMMI certified and not writing their viruses in Java...

Re:SSL traffic (1)

Rich0 (548339) | more than 4 years ago | (#30995250)

I dunno - what strategy could they possibly employ? They seem rather clever with their attacks IMHO.

Until we have DRM-enabled hardware in everybody's home, they have to work with conventional PCs. That means that an inspection of a PC will turn up the binary code to the virus, and its operation can be fully studied. Anybody attacking a bot can eavesdrop on every aspect of its activity client-side, and can probably trace the network traffic pretty far (with government assistance, all the way to the endpoint). In such an environment, the only thing you can do is make the job of tracking down the control node harder - you can't ever obscure it completely (at best you can just make every endpoint a control node of some kind and mixmaster all your traffic, which is only a delaying tactic in itself).

Now, once ordinary citizens no longer own their PCs, that might be a botnet's dream. Just imagine trusted code running over trusted network connections protected by trusted routers! DRM suffers from fundamental limitations, but the next gen of bot hunters might find themselves having to tear apart CPUs and examining them with SEMs to try to figure out what they're doing...

Re:SSL traffic (1)

Lord Ender (156273) | more than 4 years ago | (#30996126)

There's a lot of half-truth in your post. Botnet authors have wide ranges of experience and education. Sure, there are self-taught teenagers. But there are also professionals running botnets (on the payroll of the Ukrainian mafia, for example). Cybercrime is not a kid's game. Now that there's real money to be made, real money is being invested.

Any statement you make about all botnet authors is wrong.

Re:SSL traffic (0)

Anonymous Coward | more than 4 years ago | (#30998640)

Please mod parent Troll. Self-taught + college educated + job is actually far better than someone who just goes to College and gets a job because they heard CS is where the money's at.

Re:SSL traffic (3, Interesting)

JWSmythe (446288) | more than 4 years ago | (#30991458)

    I can honestly say, with experience, that https only takes a trivial amount more CPU time than a http request.

    The honest references you will find showing that https was so much heavier than http date from when the blazing-fast webservers were 133MHz.

    You're in more danger of the DDoS filling up your pipe than of it bringing a server to its knees. Bringing the server down could be accomplished just as easily against the http server. That is, unless some genius decided they needed an entire server farm for http but only one or two machines for https, which would definitely qualify it as "weak".

    The folks running the servers should be able to deploy countermeasures of some sort. If a number over some acceptable threshold are illegitimate requests, automatically block them. It's easy enough on a *nix box. I'm not talking about anything in the webserver itself either. The webserver should be able to initiate something as simple as an iptables/ipfilter rule. It's amazing how useful those can be, and if the threshold is calculated appropriately, it won't even bother legitimate traffic.
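
As a minimal sketch of that kind of threshold (the numbers are placeholders to tune against real traffic, and this uses iptables' 'recent' match rather than anything driven by the webserver itself):

# Remember each source that opens a new connection to 443, and drop any source
# that opens more than 20 in 60 seconds; ordinary browsers stay well under that.
iptables -A INPUT -p tcp --dport 443 --syn -m recent --name sslflood --set
iptables -A INPUT -p tcp --dport 443 --syn -m recent --name sslflood \
  --update --seconds 60 --hitcount 20 -j DROP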

    You are right though, I don't see how these would disguise anything. If you have a list of places that are targets, that makes it more noticeable, not less, even if it is the CnC machine, or a drone.

Re:SSL traffic (1)

FooBarWidget (556006) | more than 4 years ago | (#30994688)

Is that so? I recently had to make a hardware budget for an SSL site, so I ran a few benchmarks on Apache and Nginx running on my MacBook Pro. Both of them handled only *50* requests/sec when serving static assets through SSL, as opposed to 3000+ when not serving through SSL. After some investigation it turns out that the SSL handshake is particularly expensive; once the handshake is done, the server can serve several thousand static assets per second. However not all browsers/routers/proxies support/allow HTTP keep-alive, and keep-alive is almost unused if most of your visitors only visit a few pages or if there's so much time between two requests that the browser disconnects the keep-alive connection. This makes SSL *a lot* more expensive than regular HTTP.

That said, if you have data to prove me wrong then that'd be good news for me.
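
One way to isolate the handshake cost being described here (a sketch; it assumes OpenSSL's s_time tool is available, and the hostname is a placeholder) is to compare full handshakes against resumed sessions:

# Every connection performs a full handshake:
openssl s_time -connect www.example.com:443 -new -time 10
# Sessions are resumed, so the expensive key exchange is mostly skipped:
openssl s_time -connect www.example.com:443 -reuse -time 10
# The gap in connections/second between the two runs is roughly the handshake
# overhead that keep-alive-less HTTPS traffic pays on every request.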

Re:SSL traffic (1)

JWSmythe (446288) | more than 4 years ago | (#30998700)

    Well, not to argue, but this is a test I just did on my desktop at work. I used the 'ab' test program, that comes with Apache (full name is Apache Bench).

    800MHz, 1.7GB RAM available (256MB shared to video)

    Slackware Linux (x86_64) 13.0.0.0.0

    Linux evil2 2.6.31.6 #8 SMP PREEMPT Thu Dec 3 14:44:04 EST 2009 x86_64 AMD Athlon(tm) Processor 2650e AuthenticAMD GNU/Linux

    It is running Apache 1.3.41 with mod_ssl

    The test was run as:

ab -n 10000 -c 100 http://localhost/ [localhost]

http:
Requests per second: 828.54 [#/sec] (mean)
https:
Requests per second: 56.65 [#/sec] (mean)

    You could say that it reflects your numbers, but......

    I also had top running during the tests. During the http test, the highest %CPU for ab was no more than 15%. During the https test, it took up all the spare CPU time there was (approx 40% steady). It wasn't contending with Apache, but with things like X and Firefox. The web server itself stayed pretty much idle the whole time. This would be an issue with the testing application, not the server being tested. This test would have been better run from another machine, towards a dedicated webserver, but I don't have anything to test on right now.
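
A follow-up sketch on the keep-alive point raised above (assuming ab was built with SSL support and is run from a separate client machine so ab itself isn't the bottleneck): with keep-alive the handshake is paid once per connection rather than once per request, so the https numbers should move back toward the plain-http figures.

# New SSL connection (and handshake) for every request:
ab -n 10000 -c 100 https://webserver.example/
# Keep-alive: many requests ride on each handshake:
ab -k -n 10000 -c 100 https://webserver.example/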

Re:SSL traffic (1)

asifyoucare (302582) | more than 4 years ago | (#30991580)

.. I don't see how sending packets to 'major websites' disguises the real communications in any way. Just filter those requests. The more 'major' the web site for the garbage packets, the easier it is to distinguish them from the real packets ..

I agree. There's no entry in Stewart's blog, but darkreading.com [darkreading.com] quotes him as follows:

'By adding the initial header of an SSL conversation, they may be attempting to avoid closer scrutiny by less vigilant inspection devices," Stewart says. "And by sending a flurry of these connections to a number of legit 'decoy' sites, it helps the Pushdo C&C [command and control] traffic blend in and remain undetected in some cases," he says.'

So, he isn't saying it helps except in less well scrutinised networks. Still, it seems pretty weak to me.

Up to something? (5, Interesting)

toleshei (749993) | more than 4 years ago | (#30991346)

"Site owners "would just see weird connections that don't seem to make sense," he said. "They look like they're trying to start an SSL handshake, but it comes in malformed and doesn't ever send anything after that first handshake attempt."" Is it possible that they've found a flaw in a specific Systems handling of SSL and are trying to see if the flaw exists elsewhere in an attempt to produce an exploit? I'm not really a security guy, but it seems like they're up to something specific. Otherwise why use SSL exclusively? wouldn't they want to diversify their requests?

Is it an attempt to break in? (1)

joeyadams (1724334) | more than 4 years ago | (#30991374)

I wonder if it's an attempt to hack into the servers to steal private keys and whatnot (that is, to torture-test the SSL implementations on those servers).

Re:Is it an attempt to break in? (1)

httptech (5553) | more than 4 years ago | (#31010078)

It's not. There's no exploit code sent, just random bytes and the replies are discarded.
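
For the curious, a rough way to reproduce what the server sees (purely illustrative, against a test box you own; the hostname is a placeholder):

# Push a couple hundred random bytes at the SSL port and hang up; the server logs
# a connection that never completes a valid handshake, much like the Pushdo traffic.
head -c 200 /dev/urandom | nc -w 2 testbox.example 443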

How to stop bot nets (1)

Max_W (812974) | more than 4 years ago | (#30991392)

All it takes is to install an anti-virus and run a full scan on your mom's and dad's PC next time.

Re:How to stop bot nets (5, Insightful)

Anonymous Coward | more than 4 years ago | (#30991564)

is that because the antivirus program makes the computer crawl to a halt so the bot program has no CPU resources left to run?

The FBI has already apprehended the culprits (2, Funny)

The FBI (1717712) | more than 4 years ago | (#30991430)

The FBI has apprehended the individuals responsible for the Pushdo botnet, but because the said individuals are minors, we have decided to file no charges if the said individuals apologized to everyone who had been negatively affected by the Pushdo botnet. Unfortunately, due to a typo, the said individuals issued a botnet command that is causing the botnet computers to keep trying to POST the following apology to the SSL port:

POST / HTTP/1.0
Referer: http://ir902.detention.fbi.gov/ [fbi.gov]
User-Agent: PushDo/1.0.1
Accept: */*
Content-type: application/x-www-form-urlencoded
Content-length: 1337

apology=We+apologize+for+any+inconvenience+our+childish+Pushdo+botnet+experiment+may+have+caused+you.+Sincerely,+Billy+Pushman+and+Jimmy+Doe.

And they say obfuscation isn't a good defense (1, Interesting)

SlappyBastard (961143) | more than 4 years ago | (#30991532)

But, it does apparently make a very good smoke screen for a good offense.

Re:And they say obfuscation isn't a good defense (3, Funny)

fm6 (162816) | more than 4 years ago | (#30991588)

Obfuscation isn't good security. But, as any politician will tell you, it's excellent defense.

Junk SSL Connection (0, Troll)

cormander (1273812) | more than 4 years ago | (#30991550)

What exactly is a "Junk SSL Connection"? Please tell me it has nothing to do with the slang for a man's "area". The thoughts of "the goods" being attacked... oof.

Entropy depletion (5, Interesting)

xenocide2 (231786) | more than 4 years ago | (#30991664)

SSL/TLS at its core generates "session keys" for communication: strings of random characters. It's possible they're trying to deplete the SSL servers of true entropy for some undisclosed attack; a PRNG attack, for example.
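
If that were the goal, a site admin could at least watch the kernel's entropy pool while one of these bursts is underway (a Linux-specific sketch; whether the SSL stack even draws from the blocking pool depends on how it's configured):

# Sample available entropy once a second; a sustained slide toward zero during
# the connection flood would lend some weight to the depletion theory.
while true; do
  echo "$(date +%T) $(cat /proc/sys/kernel/random/entropy_avail)"
  sleep 1
done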

Re:Entropy depletion (2, Interesting)

Anonymous Coward | more than 4 years ago | (#30991708)

Replying anon because I voted it up. Anyways, it's amusing to me (a security geek) that so far only one person has gotten this; it's a pretty obvious reason (assuming of course the SSL attack is deliberate and actually aimed against these sites). I'd be curious to know if the attacker is collecting data and perhaps running Randomness Tests [wikipedia.org] against the results to see if this connection flooding is having any effect.

Re:Entropy depletion (0)

Anonymous Coward | more than 4 years ago | (#30992916)

For goodness' sake, you don't have to apologize for posting anonymously. Stop that.

Re:Entropy depletion (1)

theskipper (461997) | more than 4 years ago | (#30994480)

You're right, sorry about that.

Re:Entropy depletion (5, Interesting)

bobstreo (1320787) | more than 4 years ago | (#30991870)

Don't think it's that complex. From June 2009:
http://isc.sans.org/diary.html?storyid=6601 [sans.org]

Yesterday an interesting HTTP DoS tool has been released. The tool performs a Denial of Service attack on Apache (and some other, see below) servers by exhausting available connections. While there are a lot of DoS tools available today, this one is particularly interesting because it holds the connection open while sending incomplete HTTP requests to the server.

In this case, the server will open the connection and wait for the complete header to be received. However, the client (the DoS tool) will not send it and will instead keep sending bogus header lines which will keep the connection allocated.
The initial part of the HTTP request is completely legitimate:

GET / HTTP/1.1\r\n
Host: host\r\n
User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.503l3; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; MSOffice 12)\r\n
Content-Length: 42\r\n

After sending this the client waits for a certain time – notice that it is missing one CRLF to finish the header, which is otherwise completely legitimate. The bogus header line the tool sends is currently:

X-a: b\r\n

Which obviously doesn't mean anything to the server so it keeps waiting for the rest of the header to arrive. Of course, this all can be changed so if you plan to create IDS signatures keep that in mind.

According to the web site where the tool was posted, Apache 1.x and 2.x are affected as well as Squid, so the potential impact of this tool could be quite high considering that it doesn't need to send a lot of traffic to exhaust available connections on a server (meaning, even a user on a slower line could possibly attack a fast server). Good news for Microsoft users is that IIS 6.0 or 7.0 are not affected.

At the moment I'm not sure what can be done in Apache's configuration to prevent this attack – increasing MaxClients will just increase requirements for the attacker as well but will not protect the server completely. One of our readers, Tomasz Miklas said that he was able to prevent the attack by using a reverse proxy called Perlbal in front of an Apache server.

We'll keep an eye on this, of course, and will post future diaries or update this one depending on what's happening. It will be interesting to see how/if other web servers as well as load balancers are resistant to this attack.
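
For admins wanting to check their own boxes against the partial-header trick the diary describes, a minimal sketch (run it only against a server you operate; hostname and timings are placeholders): open a connection, send a legitimate-looking but unterminated header, and hold it. A vulnerable server ties up a worker for its full header timeout.

( printf 'GET / HTTP/1.1\r\nHost: yourserver\r\nX-a: b\r\n'; sleep 300 ) \
  | nc -w 400 yourserver.example 80

Later Apache releases shipped mod_reqtimeout to bound how long a client may dribble out request headers, which closes exactly this hole.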

Re:Entropy depletion (0)

Anonymous Coward | more than 4 years ago | (#30993190)

Don't the packets sent/received make up for the lost entropy?

Re:Entropy depletion (1)

NonSequor (230139) | more than 4 years ago | (#30993622)

Don't the packets sent/received make up for the lost entropy?

I don't know very much about cryptography, but I'm thinking the same thing.

Intuitively, I'd expect the number of requests not controlled by the attacker to serve as an implicit entropy source for a PRNG, at least relative to that attacker. Intuitively, I'd expect the number of requests not controlled by the attacker to follow a Poisson distribution with lambda equal to the traffic frequency times the interval of time since the last request where the attacker was able to determine the state of the PRNG (no mean feat in itself).

For that attacker, the entropy rate of that Poisson process of other people's requests serves as a true entropy source. This would only be a viable attack vector if the attacker controls a large chunk of the server's traffic. I'd think that they'd be more likely to DDOS the server before managing to accomplish anything sneakier.

Re:Entropy depletion (1)

NonSequor (230139) | more than 4 years ago | (#30993648)

Intuitively, I'd expect the number of requests not controlled by the attacker to serve as an implicit entropy source for a PRNG, at least relative to that attacker. Intuitively, I'd expect the number of requests not controlled by the attacker to follow a Poisson distribution with lambda equal to the traffic frequency times the interval of time since the last request where the attacker was able to determine the state of the PRNG (no mean feat in itself).

EDIT FAIL!

I guess I liked that phrase so much that I just had to use it twice in a row.

Re:Entropy depletion (3, Interesting)

crypticwun (1735798) | more than 4 years ago | (#30995586)


1) The code function does NOTHING with any data returned by the server.
2) This version of pushdo is using SSLv3 to phone home (HTTP over SSL) to its C2 (Command & Control).
3) When looking purely at netflow records or using tcpdump/wireshark, you will see 30+ SSL connections taking place at once. Only 1-2 of those connections are to the C2.
3.5) Many admins don't set up matching PTR records in DNS, so you won't easily be able to map the IPs back to the "common"/well-known hostnames (a lookup sketch follows below).
4) ... ?
5) profit!
The idea is to make it HARD, not impossible, to identify the C2 systems. Note well that the C2s might never connect back to the botnet client systems. Instead, another tier of slightly more disposable hosts is likely to perform that function.
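
To illustrate points 3 and 3.5, a sketch of the kind of tallying an investigator might do over a capture taken during one of the bursts (the capture file name is a placeholder; older tshark builds use -R for display filters where newer ones use -Y). The decoys tend to resolve to recognizable names when PTR records exist, leaving the C2 candidates as the odd ones out:

# Count SSL destinations in the capture, most-contacted first,
# and annotate each with its PTR record, if any.
tshark -r burst.pcap -R "tcp.dstport == 443" -T fields -e ip.dst \
  | sort | uniq -c | sort -rn \
  | while read count ip; do
      echo "$count $ip $(dig +short -x "$ip")"
    done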

Re:Entropy depletion (1)

httptech (5553) | more than 4 years ago | (#31009922)

They're not. The connections are far too infrequent (15 connections, then sleep for 30 hours).

time for a bayesian protocol filter? (1)

StripedCow (776465) | more than 4 years ago | (#30993238)

Roughly the same techniques used to identify spam can be used to identify abuse of a protocol. For example, there exist Bayesian intrusion detection algorithms.

Maybe it is time for people to start using those techniques and figure out that something is wrong almost from the get-go.

Re:time for a bayesian protocol filter? (2, Insightful)

zix619 (802964) | more than 4 years ago | (#30995182)

The problem is that it takes so much CPU load and time for training; and in addition, what do you do with the false positives?

infected computers (2, Informative)

viralMeme (1461143) | more than 4 years ago | (#30993296)

What desktop Operating System does this Pushdo botnet require to operate?

"Once executed the malware first tests to see if it's currently running as the hardcoded value "rs32net.exe" in the system folder (C:\Windows\System32 [trendmicro.com] by default)"

Huh? (3, Interesting)

guyminuslife (1349809) | more than 4 years ago | (#30993430)

I don't get it. Could someone please explain this to me?

If they're trying to disguise their traffic to the command-and-control center, how does this help? If you get a lot of malformed requests from a particular host, then if you're an investigator, it's like the infected computers are advertising themselves as zombies. And if they're sending these requests to major web sites, how does this disguise the requests they're making to the (presumably non-major-website) control center? Couldn't you just say, "Well, this computer made 300 malformed SSL requests to Facebook, Twitter, et cetera, and one malformed request to [some random host] -- let's find that guy!"

I'm seriously confused.

Re:Huh? (1)

httptech (5553) | more than 4 years ago | (#31009968)

I think they're attempting to evade brain-dead automated protocol inspection, not trying to fool a human.

$employer is on the target list of pushdo drones (2, Informative)

nfsilkey (652484) | more than 4 years ago | (#30995464)

According to our graphs, our targeted frontend is taking the drones' trashy SSL requests like a champ (reverse-proxies are humming as expected, no inordinate load, etc).

You too can see if you are on the hitlist: http://www.shadowserver.org/wiki/pmwiki.php/Calendar/20100129 [shadowserver.org]
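
A quick way to do that from the command line (a sketch; it assumes your domain appears as plain text in that page's HTML, which may change):

curl -s "http://www.shadowserver.org/wiki/pmwiki.php/Calendar/20100129" | grep -i "yourdomain.example"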

Over the last 24 hours add more to the list! (2, Informative)

NSN A392-99-964-5927 (1559367) | more than 4 years ago | (#31006890)

Apple, Customs and Excise, UK Inland Revenue, and Greater Manchester Police. A friend of mine is a dev and net admin at PayPal/eBay, although he shall remain nameless for his privacy. In his own words, they are a bunch of lazy fat-cat bastards. Sorry for swearing, but he has been a guru in IT for the past 30 years and a top programmer. He said he is trying to undo and secure systems where security is very lax indeed, and that it is like banging his head against a brick wall with some very senior management.