
Thwarting New JavaScript Malware Obfuscation

kdawson posted more than 5 years ago | from the drm-never-works-give-it-up dept.

Security 76

I Don't Believe in Imaginary Property writes "Malware writers have been obfuscating their JavaScript exploit code for a long time now and SANS is reporting that they've come up with some new tricks. While early obfuscations were easy enough to undo by changing eval() to alert(), they soon shifted to clever use of arguments.callee() in a simple cipher to block it. Worse, now they're using document.referrer, document.location, and location.href to make site-specific versions, too. But SANS managed to stop all that with an 8-line patch to SpiderMonkey that prints out any arguments to eval() before executing them. It seems that malware writers still haven't internalized the lesson of DRM — if my computer can access something in plaintext, I can too."

76 comments

SANS (1, Funny)

Anonymous Coward | more than 5 years ago | (#24201135)

I'm still not sure what I think of them.
I mean, it's a great idea. But they update their diary every day, which means for the most part, it's totally boring crap. Today's entry is a little different.

I still think of SANS as a bunch of old guys all sort of pontificating about the most mundane things. Wind back 5 years and I think they had a valid part to play, especially with the amount of viruses and worms flying around. These days, not so much. Security is so much higher on everyone's radar that they're a bit old in the tooth now.

This is still good work though and I do appreciate it. I just wonder if they need "handlers" and daily updates anymore; the Internet just isn't that risky anymore.

Tim

Re:SANS (2, Insightful)

sm62704 (957197) | more than 5 years ago | (#24201931)

But they update their diary every day, which means for the most part, it's totally boring crap.

Welcome to my slashdot journal (NSFW)

they're a bit old in the tooth now

Piece of cake, easy as pie. The saying is "long in the tooth", comrade.

the Internet just isn't that risky anymore.

You're not paying attention.

Re:SANS (0)

Anonymous Coward | more than 5 years ago | (#24202145)

Welcome to my slashdot journal (NSFW)

Thank you for the NSFW tag - I often find your journal amusing, but once got burned because some of the language tripped a filter, inspired IT to read the page I was looking at (NSFW), and then inspired IT to look and see just how much time I actually spend on /. Ouch.

the Internet just isn't that risky anymore.

You're not paying attention.

I actually agree with GPP. People are still getting infected, but only because they're downloading infected warez or leaving unpatched systems hooked up to teh interwebs. "Normal" use with a patched system (MS or Linux) is pretty safe these days.

Cheers.

A case of "duh" (2, Interesting)

Annymouse Cowherd (1037080) | more than 5 years ago | (#24201175)

Is it just me, or is this way of getting around it mind-blowingly obvious?

The techniques the malware writers are using are quite interesting, though; I've never heard of arguments.callee.

Re:A case of "duh" (3, Interesting)

kesuki (321456) | more than 5 years ago | (#24204449)

"Is it just me or is this way of getting around it mind-blowingly obvious."

Even more mind-blowingly obvious is NoScript. It's pretty hard for Java/JavaScript-based malware to infect you when the browser won't automatically run JavaScript or Java.

DRM and copy protection schemes (3, Informative)

spion666 (922711) | more than 5 years ago | (#24201229)

It seems that malware writers still haven't internalized the lesson of DRM -- if my computer can access something in plaintext, I can too.

In fact, that's the lesson from any digital copy protection scheme, some of which predate DRM (or at least the term DRM).

Re:DRM and copy protection schemes (4, Interesting)

panaceaa (205396) | more than 5 years ago | (#24202109)

I'm glad you highlighted that line of the summary. The point of the obfuscation was to slow down analysis of the code and require special tools (SpiderMonkey) that average web users don't have. Here the malware author clearly won. The article author spent hours figuring out a new obfuscation technique and writing an article about it. If there are malware detectors, they have to be updated to detect the new obfuscations.

This is not the traditional DRM argument. No one's trying to decode a video or music file they have legal rights to access. This is a malware arms race: the point IS to hide what's going on, not to lock things down. What's more interesting here, and not even discussed, is the parallel between JavaScript malware development and computer viruses. The technique the author uncovered is an adaptation of polymorphic virus concepts into web malware. And while the technique is something many developers could come up with, I haven't heard of it being used in practice yet, so it's likely a noteworthy step in the arms race.

Re:DRM and copy protection schemes (1)

Jherek Carnelian (831679) | more than 5 years ago | (#24203821)

The point IS to hide what's going on, not to lock things down.

Right, this is not security through obscurity folks.
It is obscurity through obscurity!

Re:DRM and copy protection schemes (1)

nahdude812 (88157) | more than 5 years ago | (#24210241)

Plus Firebug 1.2 already does what their patch does. If you want to see what the final execution result is, click the dropdown in the Scripts window to see the text of all eval() calls.

How long until they do setTimeout("final code", 1) instead of eval(), and how long until they do document.write("<div id='foo' onclick='malware.code;'></div>"); document.getElementById('foo').onclick(); etc? As gp said, it's a malware arms race, they're changing their obfuscation techniques to bypass the current market tools.

There are a lot of ways to generate on-the-fly code to execute in JavaScript; automated tools are going to have a hard time coping with all the possible variants.
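A rough sketch of what such a tool has to do (an assumed setup, not Firebug's or SANS's actual implementation): wrap every string-executing sink you know about and record the code before running it. One caveat: reassigning eval turns later calls into indirect eval, which loses access to the caller's local scope, and scope-sensitive malware could notice.

```javascript
// Hook the string-executing sinks so any code a script tries to run
// gets recorded first. Uses window in a browser, globalThis elsewhere.
var g = typeof window !== "undefined" ? window : globalThis;
var logged = []; // everything a script tried to execute as a string

var realEval = g.eval;
g.eval = function (code) {
  logged.push(String(code)); // record before executing
  return realEval(code);     // note: indirect eval semantics
};

var realSetTimeout = g.setTimeout;
g.setTimeout = function (fn, ms) {
  if (typeof fn === "string") logged.push(fn); // string timers are eval in disguise
  return realSetTimeout(fn, ms);
};
```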

I'm not sure it's more than a speedbump (1, Interesting)

Anonymous Coward | more than 5 years ago | (#24203945)

> I'm glad you highlighted that line of the summary. The point of the obfuscation was to slow down analysis of the code and require special tools (SpiderMonkey) that average web users don't have. Here the malware author clearly won. The article author spent hours figuring out a new obfuscation technique and writing an article about it.

Actually, it looks like they spent more time writing about it than solving it. That 8-line patch isn't exactly complex, and it appears that it'll kill most techniques they can use. Their only hope now is to rely on various variables that won't be present in the interpreter, but there's a limit to how complex those can be.

Also there's the fact that the guy's script was buggy thanks to case sensitivity (he named a file .HTM instead of .htm). But you have a point about them only needing to slow folks down, even though I don't think they're getting slowed down very much compared to all the work going into those obfuscation techniques.

Re:DRM and copy protection schemes (1)

kesuki (321456) | more than 5 years ago | (#24204527)

"The point of the obfuscation was to slow down analysis of the code and require special tools (SpiderMonkey) that average web users don't have."

NoScript is an easy Firefox add-on. If you're advising "normal" people how to use the internet safely, you'd tell them to use NoScript and only allow the really trusted sites they can't live without.

Personally, I don't trust any site that much. Besides, with NoScript, Slashdot tells you to use the old-style layout instead of the new one.

Re:DRM and copy protection schemes (1)

CodeBuster (516420) | more than 5 years ago | (#24205865)

Here the malware author clearly won

No more than the RIAA has "won" by imposing DRM on their music downloads and on iTunes. In that case it only takes one cracked version to leak out before everyone else gets the benefit of the original cracker's work ad infinitum. Likewise, once the detection is worked into a JavaScript blocking and filtering tool, such as NoScript [mozilla.org], everyone using NoScript benefits from the original analysis (it doesn't have to be cracked afresh each time it comes up). So the malware author is really only inconveniencing uninformed users, not those who understand the attack and take steps to protect their hosts.

Re:DRM and copy protection schemes (0)

Anonymous Coward | more than 5 years ago | (#24205947)

Yeah the javascript is obfuscated so "average web users" can't read it... riiight.

Re:DRM and copy protection schemes (0)

Anonymous Coward | more than 5 years ago | (#24206337)

I think a lot of people are trying to decode something they have legal rights to access. See DRMed music whose authentication servers are no longer functioning, or any game whose anti-piracy measures are a hindrance to general operation or use of the game.

Re:DRM and copy protection schemes (1)

asrail (946132) | more than 5 years ago | (#24208089)

SpiderMonkey is Mozilla's JavaScript engine, written in C. They also have one written in Java, called Rhino, and one written in JavaScript whose name I don't remember.

SpiderMonkey is used in the Firefox browser, so millions of people already have it handy.

It's fairly easy to make a graphical interface that lets any end user copy or save the data to a file.

Everywhere, you have to update your techniques when new malware appears.
The better your heuristic, the less often you have to update.

Threat levels? (0, Troll)

martinw89 (1229324) | more than 5 years ago | (#24201241)

I guess I'm new to this whole Internet thing; I haven't been to SANS ICS before. But what's up with the color coded threat level indicator? Doesn't that seem a little pointless and similar to the ridiculed DHS Threat Level? I don't know, I respect their opinion, it's just harder to trust someone with an "internet threat level" indicator.

Re:Threat levels? (4, Insightful)

liquidpele (663430) | more than 5 years ago | (#24201395)

Everyone [iss.net] does [symantec.com] it [bitdefender.com]

Re:Threat levels? (2, Insightful)

martinw89 (1229324) | more than 5 years ago | (#24201471)

Ouch, I didn't realize how common this was. Feel free to moderate the grandparent post into oblivion.

Re:Threat levels? (1)

Jherek Carnelian (831679) | more than 5 years ago | (#24203771)

Ouch, I didn't realize how common this was. Feel free to moderate the grandparent post into oblivion.

No, don't. Just because 'everyone' does it doesn't make it any less stupid.
It's like the terrorism threat levels: they don't have any correspondence to what is likely to happen to you as an individual. Even if SANS says the "internet threat level is green" you can still get DDOSed, or get infected by a virus, or browse to the wrong website and get suckered by a cross-site scripting attack, or any number of other things.

The internet is just as dangerous to individuals regardless of what the 'threat level' is.

Re:Threat levels? (1)

liquidpele (663430) | more than 5 years ago | (#24206597)

The threat levels are elevated for things like exploits in the wild for remote vulnerabilities with no patches, or other significant factors that businesses take into account. The pretty colors are there because other websites and companies partner with the creators of the threat levels and display them for their management or customers. It's purely marketing, but the colors do serve a real purpose: they grab your attention so you will look at the details of whatever caused the level to rise.

Re:Threat levels? (1)

atraintocry (1183485) | more than 5 years ago | (#24208097)

You were right the first time... it's marketing. There are plenty of security sites out there aimed at IT folks that discuss these things rationally and aren't trying to scare you. Similarly, I've dealt with levelheaded IT guys and I've dealt with the ones who bust through the door talking about obscure exploits, hoping to catch someone who can't smell the BS and will pay for fake peace of mind.

Re:Threat levels? (0)

Anonymous Coward | more than 5 years ago | (#24239231)

Those are all general links. I haven't found any analysis of this particular threat.

Funding (0, Flamebait)

Joebert (946227) | more than 5 years ago | (#24201261)

Who pays these freakin people ?!

I could sit here all day coming up with ways to put ghosts in the machine!

Re:Funding (1)

CatBegemot (1326539) | more than 5 years ago | (#24201669)

Sorry, your job application has been denied. If you don't know where we are you don't belong here. :)

Re:Funding (1)

Joebert (946227) | more than 5 years ago | (#24201835)

Apparently I don't belong on either side of the fence. I suppose I'll just have to burn it down with my bait of flame. :)

first post? (2, Funny)

hesaigo999ca (786966) | more than 5 years ago | (#24201269)

This is too much. Now we'll all have to download a pre-validator for JavaScript to view the code ("what does this code do? I can't read this, I am an 80-year-old grandmother...") before going to the webpage and viewing it... sucks to go on the web these days!

Re:first post? (0)

Anonymous Coward | more than 5 years ago | (#24201475)

Or you could turn off javascript.

Re:first post? (1)

loftwyr (36717) | more than 5 years ago | (#24201993)

No, no... it prints it out to itself and then executes it. You'll see no more than you did before.

Unless you're a hacker, and then it will stick its tongue out at you first.

Baby & Bathwater? (4, Informative)

XanC (644172) | more than 5 years ago | (#24201275)

There are certainly legitimate uses of eval, and legitimate reasons to "obfuscate". Like to compress the script that you send to each & every client. The savings in bandwidth for you (and for them, especially if they're on dialup) can add up. For example: http://www.javascriptcompressor.com [javascriptcompressor.com]

Re:Baby & Bathwater? (1)

larry bagina (561269) | more than 5 years ago | (#24201359)

gzip compression is just as effective.

Re:Baby & Bathwater? (1)

XanC (644172) | more than 5 years ago | (#24201483)

Well, if you leave off the base62 encode option, you get a file that's "prepped" for better gzipping. But of course that doesn't require an eval, which was the point of this whole thread, so you're right about that.

I've also noticed, though, that IE will barf on long Javascript files, so doing the base62 compression on the Javascript even with gzip is a workaround for that.

Re:Baby & Bathwater? (1)

TLLOTS (827806) | more than 5 years ago | (#24203879)

Simply minifying your scripts by stripping out comments and unneeded whitespace will do almost as good a job as compressing the data with eval. Add in gzipping and the difference is negligible; plus, there's no additional delay on the client side from the script being decompressed on each and every page that uses it.

Re:Baby & Bathwater? (0)

Anonymous Coward | more than 5 years ago | (#24206179)

Why don't you run YOUR code on YOUR equipment and keep that exploitable ineptitude off my machines. If it aint HTML, keep it on the server and off my lawn.

Save yourself some headache, just gzip it (2, Informative)

patio11 (857072) | more than 5 years ago | (#24208273)

It isn't an either/or choice, but programs with verbose variable names (which is typically one of the first targets of javascript compression: "replace timeSinceLastUpdate with r") compress disgustingly well. You may find that the gzip compression is effective enough that the obfuscation isn't worth the various attendant headaches (maintaining two versions of the code, etc).

So how does this solve anything in the real world? (0)

Anonymous Coward | more than 5 years ago | (#24201293)

This might sound like a troll, but it's just my limited understanding of this issue. So if we can detect the eval statements using this new technique, can we differentiate between what is malware and what is not? Or do legitimate scripts use eval too?

Re:So how does this solve anything in the real wor (1)

larry bagina (561269) | more than 5 years ago | (#24201405)

if you're using JSON (fairly common now), there's a good chance you're using eval(). Generally, it's overused and can be replaced with other techniques in a modern browser/js interpreter

Re:So how does this solve anything in the real wor (0)

vrmlguy (120854) | more than 5 years ago | (#24201621)

if you're using JSON (fairly common now), there's a good chance you're using eval().

Jeez, I hope not. I thought that by now everyone knew that using eval() is setting yourself up for failure.

Re:So how does this solve anything in the real wor (2, Interesting)

cparker15 (779546) | more than 5 years ago | (#24203395)

See http://www.json.org/js.html [json.org]

If you're using JSON, you're using eval(). Sure, there are some workarounds that avoid calling the eval() function directly, but in the end, they all eval-uate remote code.

JSON parsers use eval() after checking the JSON string to make sure it's actually a JSON string.

curl -s http://www.json.org/json2.js | grep 'eval('
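The validate-then-eval pattern looks roughly like this (a simplified sketch modeled on json2.js, not the real library): the replacements reduce any legal JSON text to pure punctuation, so if anything else survives, the text contained code and is rejected before eval() ever runs.

```javascript
// Simplified json2.js-style parser: check, then eval.
function safeParse(text) {
  var stripped = String(text)
    // 1. neutralize escape sequences inside strings
    .replace(/\\(?:["\\\/bfnrt]|u[0-9a-fA-F]{4})/g, "@")
    // 2. collapse string, number, and keyword literals to "]"
    .replace(/"[^"\\\n\r]*"|true|false|null|-?\d+(?:\.\d*)?(?:[eE][+\-]?\d+)?/g, "]")
    // 3. drop array openers that follow the start, ":" or ","
    .replace(/(?:^|:|,)(?:\s*\[)+/g, "");
  // Legal JSON now contains only brackets, braces, commas, colons, whitespace.
  if (!/^[\],:{}\s]*$/.test(stripped)) {
    throw new SyntaxError("safeParse: input is not JSON");
  }
  // Only reached for JSON-shaped text, so eval can only build data.
  return eval("(" + text + ")");
}
```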

Re:So how does this solve anything in the real wor (1)

vrmlguy (120854) | more than 5 years ago | (#24223095)

If you're using JSON, you're using eval(). Sure, there are some workarounds that avoid calling the eval() function directly, but in the end, they all eval-uate remote code.

Did you even bother to read that page that you directed me to?

To defend against this, a JSON parser should be used. A JSON parser will recognize only JSON text, rejecting all scripts. In browsers that provide native JSON support, JSON parsers are also much faster than eval.

So, you're claiming that a JSON parser, which is faster than eval, checks its input and then calls eval. I think I see a contradiction.

Or are you talking about legacy browsers that don't yet provide JSON support? In that case, I hope that you aren't invoking eval either directly or from some home-grown function library, but are using that json2.js that you point to; yes, it sometimes (not always!) uses eval, but only after checking that the browser doesn't provide a native JSON object. Furthermore, if you eliminate comments and blank lines from json2.js, you're left with 174 lines of code, of which one line invokes eval, and most of the rest make sure that there isn't anything bad hidden inside the text. I suspect that those 173 lines of code are better than anything that you or I could whip out on short notice.

Re:So how does this solve anything in the real wor (1)

cparker15 (779546) | more than 5 years ago | (#24369667)

Was there even a point to your tirade?

My comment was in reply to your sweeping generalization that “everyone” knew using eval() is just setting oneself up for failure. json2.js is proof that, with adequate attention given to security, eval() usage isn't a problem.

To the best of my knowledge, as of this writing, the only browser that supports native JSON is Firefox 3/Mozilla 1.9: http://developer.mozilla.org/en/docs/nsIJSON [mozilla.org] -- this still excludes most people, however.

The rest all require an external parser, such as Crockford's, which eval()s JSON code for everyone else. If you personally feel like writing a JSON parser based entirely on a combination of regex and String.substring()/String.indexOf(), for the sole purpose of avoiding evil eval(), be our guest.

Re:So how does this solve anything in the real wor (1)

vrmlguy (120854) | more than 5 years ago | (#24370095)

My point was that programmers shouldn't use eval() in Javascript, just like programmers shouldn't use goto statements. That doesn't mean that you can't use tools that use those features (I use yacc and lex many times a year); it just means that using them is very hazardous and best avoided. And yes, I realize that this means there are exceptions to the rule, since it is programmers who write those tools, but those exceptions are very, very limited. Thinking that you can whip out some code that uses eval() just because json.js uses it is a supreme act of hubris. I wouldn't let anyone I work with use an eval(), no matter what the circumstances, because circumstances change and it's unlikely that the code will change in lockstep. Nor would I use some random library that someone found somewhere. I'd want code that had been looked at by a lot more eyes than mine and tested in the field in a lot of deployments. Crockford's stuff meets those criteria. Anything you might come up with is unlikely to.

Re:So how does this solve anything in the real wor (1)

QuestionsNotAnswers (723120) | more than 5 years ago | (#24203781)

If someone can modify your JSON messages, they can inject script into any .js files requested. If your JSON generation is unsafe, then go home.

Re:So how does this solve anything in the real wor (4, Informative)

Anders (395) | more than 5 years ago | (#24201407)

This is not a detection method. It is merely an aid in reverse engineering, once you have found some malware that you want to analyze.

document.referrer (4, Interesting)

Anders (395) | more than 5 years ago | (#24201299)

I turn off referrer headers for privacy (set network.http.sendRefererHeader to 0 in about:config in Firefox). Now it seems that it can also save me from malware :-). Why would you want it enabled, anyway?

Re:document.referrer (2, Informative)

ypctx (1324269) | more than 5 years ago | (#24201373)

Many sites won't work without it, mainly to prevent "hotlinking".

Re:document.referrer (2, Informative)

Anders (395) | more than 5 years ago | (#24201477)

Many sites won't work without it, mainly to prevent "hotlinking".

That is about as effective as User-Agent sniffing.

This Firefox addon [mozilla.org] gives you arbitrary Referer headers on a per-site basis.

Re:document.referrer (0)

Anonymous Coward | more than 5 years ago | (#24201583)

Actually, it is effective because it wants to deter sites that hotlink, not users that happen to stumble upon a site that hotlinks.

But if the site breaks with no referrer, it's bad. It should allow the 'correct' referrer as well as an empty one, and reject all others.

Re:document.referrer (2, Informative)

ArcticFlood (863255) | more than 5 years ago | (#24201607)

Most "hotlinking prevention" methods (either in a .htaccess or in PHP) that I've seen allow an empty referrer, since no referrer usually means it was a bookmark or a URL entered by hand. Since this also allows people to copy and paste links to a site, these methods are generally pointless unless there is a real problem.

Re:document.referrer (1)

ypctx (1324269) | more than 5 years ago | (#24201979)

If you allow no referrer for a web page, you will usually get no traffic from outside, like from search engines and other pages that might link to you.
If you allow any referrer for an image, you are allowing anybody to embed that image into their page, thus stealing your bandwidth. To prevent that, you only allow your own pages to refer to your own images. Of course this can be spoofed manually by the client, but that's too complicated for most people.
A funny thing is that the "HTTP_REFERER" header name is misspelled, and it made it into the HTTP RFC that way.

Re:document.referrer (2, Informative)

ArcticFlood (863255) | more than 5 years ago | (#24202845)

I was unclear. I meant an empty referrer, which occurs when you weren't referred by a URL (such as typing the URL manually or clicking a bookmark). If you prevent the use of an empty referrer, your page cannot be bookmarked or manually typed in the address bar, which is why it is allowed.

Re:document.referrer (0)

Anonymous Coward | more than 5 years ago | (#24210039)

I'd dispute the "many" part of your statement, anyway, but... even those that do - the ones I've come across, anyway - only check that your referrer header doesn't indicate you're coming from *another* site; if there's no referrer header, they'll still give you whatever you were asking for.

Re:document.referrer (3, Interesting)

geminidomino (614729) | more than 5 years ago | (#24201381)

I turn off referrer headers for privacy (set network.http.sendRefererHeader to 0 in about:config in Firefox). Now it seems that it can also save me from malware :-).

Why would you want it enabled, anyway?

Silly websites that check it as some sort of "security." Easily foiled by sending the site's own URL as the referer though.

Re:document.referrer (1)

vrmlguy (120854) | more than 5 years ago | (#24201737)

I turn off referrer headers for privacy (set network.http.sendRefererHeader to 0 in about:config in Firefox). Now it seems that it can also save me from malware :-).

Why would you want it enabled, anyway?

Silly websites that check it as some sort of "security." Easily foiled by sending the site's own URL as the referer though.

Of course, that might revive any Javascript malware.

Re:document.referrer (1)

VGPowerlord (621254) | more than 5 years ago | (#24202057)

I turn off referrer headers for privacy (set network.http.sendRefererHeader to 0 in about:config in Firefox). Now it seems that it can also save me from malware :-).

Why would you want it enabled, anyway?

Silly websites that check it as some sort of "security." Easily foiled by sending the site's own URL as the referer though.

Even that doesn't work for all sites. Newegg [newegg.com], for example, won't let you finish checking out if you forge the referrer. I had to add an exception for it to RefControl.

P.S. I have RefControl set to Forge by default, which sends the site's base URL as the referrer.

Re:document.referrer (2, Interesting)

Nos. (179609) | more than 5 years ago | (#24203681)

I check the referrer header for images on some sites, not for security, but to cut down on bandwidth thieves hotlinking. On more than one occasion folks have linked to my images from busy forum sites, which costs me bandwidth. Checking that the referrer is either the local site or blank reduces that bandwidth waste to virtually zero. Yes, some will still get through, but the few minutes it takes to add this to the virtual host configuration in Apache is well worth it.

Re:document.referrer (1)

atraintocry (1183485) | more than 5 years ago | (#24208057)

I agree with your method, but "bandwidth thief" is misplaced. Nothing wrong with a referrer check, and I don't hotlink on forums...it's rude. But that's it. Rude, at best...not thievery. You posted the image, and it's your hosting bill. So if you only want it served in a specific context (certain referrer, certain browser, who knows) it's your responsibility to host it that way. Otherwise, people's browsers are asking for the image, and you're serving it. If you don't like giving out Halloween candy, don't answer the door.

Re:document.referrer (1)

Nos. (179609) | more than 5 years ago | (#24212679)

If you don't like giving out Halloween candy, don't answer the door.

It's more like my neighbour handing out the candy I bought. He gets the "credit" while I paid for the goodies.

Re:document.referrer (1)

Snaller (147050) | more than 5 years ago | (#24205027)

"Why would you want it enabled, anyway?"

To access the thousands of sites which check it to make sure nobody is "stealing" their bandwidth.

Re:document.referrer (1)

Bisqwit (180954) | more than 5 years ago | (#24207885)

Because website authors can use the referrer field to improve their services, by figuring out which access patterns are most common, and which links should be made more or less prominent.
By hiding that information, you are depriving them of that possibility, and you are therefore depriving the Internet of one means of becoming better.

Re:document.referrer (1)

IdeaMan (216340) | more than 5 years ago | (#24230981)

I thought this was Slashdot, and privacy trumped all?
Any time we give information away it gets used against us. Thanks to one of the previous posters as of today I now use RefControl.

Re:document.referrer (1)

abigsmurf (919188) | more than 5 years ago | (#24209563)

Some sites could potentially use it to aid in navigation. It's not a great option to use it but it can be better than using back options, especially if there are lots of forms used in the site.

I've never actually used it like that (I prefer to store that kind of thing in session variables if I'm forced to), but I could see someone doing so.

DRM Lesson (2, Insightful)

MyLongNickName (822545) | more than 5 years ago | (#24201329)

It seems that malware writers still haven't internalized the lesson of DRM -- if my computer can access something in plaintext, I can too.

The malware writers don't need a 100% success rate. They are simply trying to get their software onto enough machines to build a nice bot empire.

Re:DRM Lesson (1)

Urd.Yggdrasil (1127899) | more than 5 years ago | (#24202771)

Exactly, the idea behind javascript obfuscation is to get past automated tools (antivirus engines) not flesh and blood analysts, and it does the job very well. It really isn't the same thing as DRM.

stop (5, Funny)

ypctx (1324269) | more than 5 years ago | (#24201335)

stop all that with an 8-line patch to SpiderMonkey

Cool, and now malware engineers will lose their jobs, you insensitive clods! Internet Explorer to the rescue!

Re:stop (0)

Anonymous Coward | more than 5 years ago | (#24326075)

Or you can just de-obfuscate the IE specific code as well:

http://refrequelate.blogspot.com/2008/07/pwning-javascript-malware-soup-to-nuts.html

old school protection (1)

Threni (635302) | more than 5 years ago | (#24201487)

Let's hope they don't try stuff like this:

http://en.wikipedia.org/wiki/Rob_Northen_copylock [wikipedia.org]

It was hard. I guess there's no way to use special CPU modes, but you could still knock up a large amount (megs) of seemingly random data which contains code that you decrypt a few bytes at a time, re-encrypting the 'code' you just executed, and hide the actual guts of your code within thousands of jumps, loops and other messed-up logic.

It's not obfuscation (5, Informative)

Anonymous Coward | more than 5 years ago | (#24201879)

Sure, it may look like the attacker is cleverly trying to obfuscate their malware from prying eyes, but usually they couldn't care less about that. By the time you go reversing their code, they've already gotten the bulk of their victims anyway.

Rather, they're most often using it to make the code easy to replicate elsewhere. A lot of places they'll host it will inadvertently hiccup on certain characters in the code and change them, like < to &lt;, or + to space, or newline characters ending the string. Using an encoder that converts everything to alphanumeric makes it much easier to guarantee successful propagation.

Especially true for XSS worms
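The encoding idea described above can be sketched in a few lines (hypothetical helper names, not any particular worm's code): hex survives HTML-entity rewriting, '+'-to-space mangling, and newline truncation because the wire form contains only the characters 0-9 and a-f.

```javascript
// Encode a payload as lowercase hex so hostile transports can't mangle it.
// Only handles char codes below 256, which covers ASCII payloads.
function toHex(s) {
  var out = "";
  for (var i = 0; i < s.length; i++) {
    out += ("0" + s.charCodeAt(i).toString(16)).slice(-2);
  }
  return out;
}

// Decode the hex back to the original string, two digits per character.
function fromHex(h) {
  var out = "";
  for (var i = 0; i < h.length; i += 2) {
    out += String.fromCharCode(parseInt(h.slice(i, i + 2), 16));
  }
  return out;
}
```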

eval() (1)

smoker2 (750216) | more than 5 years ago | (#24202131)

Surely, that should read -
evil()
?
It is, after all, a built-in function of JavaScript.

Re:eval() (0)

Anonymous Coward | more than 5 years ago | (#24202407)

No, it should read review(). As in, "Tomorrow we'll go over the reviewuation of your report on Medireview [wikipedia.org] death rates."
