
Secretly Monopolizing the CPU Without Being Root

CmdrTaco posted more than 6 years ago | from the because-you-shouldn't dept.

Security 250

An anonymous reader writes "This year's Usenix security symposium includes a paper that implements a "cheat" utility, which allows any non-privileged user to run his/her program, e.g., like so: 'cheat 99% program', thereby ensuring that the program gets 99% of the CPU cycles, regardless of any other applications in the system, and in some cases (like Linux) in a way that keeps the program invisible to CPU monitoring tools (like 'top'). The utility uses only standard interfaces and can be implemented trivially by any non-privileged beginner programmer. Recent efforts to improve support for multimedia applications make systems more susceptible to the attack. All prevalent operating systems but Mac OS X are vulnerable, though according to this kerneltrap story, the new CFS Linux scheduler attempts to address the problems raised by the paper."


What does this mean? (0)

ajs (35943) | more than 6 years ago | (#19825129)

... allows any non-privileged user to run his/her program, e.g., like so 'cheat 99% program' thereby insuring ...
What?! I'm really not sure what's being said here. I understand the idea behind this, but the wording of the Slashdot piece is difficult to penetrate, even by Slashdot standards.

I'm assuming that we're saying that this application can get 99% of the time-slices on an otherwise occupied system, starving other tasks for resources. I'd be interested in hearing how this plays with the latest scheduler for the Linux kernel, which is supposed to favor the most needy applications.

Re:What does this mean? (0, Offtopic)

ajs (35943) | more than 6 years ago | (#19825151)

I missed the last sentence of the blurb, which does address CFS in the latest Linux kernel...sorry about that. ... of course, Slashdot doesn't let you post a retraction right away.... grrr! ... Still waiting... ... this is getting old ...

Re:What does this mean? (4, Insightful)

SatanicPuppy (611928) | more than 6 years ago | (#19825441)

I don't know. I think retractions would screw with everything else. If you make a boneheaded statement (and I've done it more than once myself), it should stand. Otherwise, everyone who responds to correct your misstatement will look insane, and it'd be hard to metamod, because the comments wouldn't necessarily fit the context anymore, etc.

Re:What does this mean? (5, Interesting)

networkBoy (774728) | more than 6 years ago | (#19825547)

Why not leave the post but allow a "retracted" tickbox? That way the owner of the comment can effectively say "I was wrong, boneheaded, whatever" without having to post another comment and wait two minutes to do it, and all that shows up is a one-liner under the comment:
This comment has been retracted by its poster

-nB

Re:What does this mean? (4, Insightful)

SatanicPuppy (611928) | more than 6 years ago | (#19825669)

That'd be fine, or even cool. It'd deflect the inevitable storm of 500 people saying, "Wrong n00b!" and not reading down far enough to see that you admitted it already, and let the whole discussion move on to more productive things.

Re:What does this mean? (1)

ajs (35943) | more than 6 years ago | (#19825679)

By "retraction", I meant it in the sense that newspapers use the term: the publication of a statement that retracts a previously published statement (e.g. my post in response to my initial post). The fact that Slashdot won't let me reply to my own post for a minute means that I sit there hitting "submit" on my one-line "oops, I meant..." post for a minute. It's just annoying.

The ability to edit one's comments would be nice, but I'd only want to see that kind of feature if you could actually review the edit HISTORY of a comment, which would be a pretty serious change for Slash (the engine that runs Slashdot).

Re:What does this mean? (5, Informative)

pauljlucas (529435) | more than 6 years ago | (#19825335)

What?! I'm really not sure what's being said here. I understand the idea behind this, but the wording of the Slashdot piece is difficult to penetrate, even by Slashdot standards.
I had a hard time reading it as well, but then I saw it (kind of like when you suddenly "see" the picture in a stereogram). Proper punctuation, whitespace, formatting, and font changes help a lot. It should have been:

... allows any non-privileged user to run his/her program, like so:

cheat 99% program

thereby ensuring ...

where cheat is the name of the compiled utility that lets you "cheat", 99% is an argument to cheat, and program is the name of some other program that you want to run at 99% of the CPU. I.e., the command line syntax resembles renice.

Re:What does this mean? (5, Funny)

Da Fokka (94074) | more than 6 years ago | (#19825403)

If you reply, do so only to what I explicitly wrote. If I didn't write it, don't assume or infer it.


You gun-toting marxist redneck zealot astroturfers make me sick!

Inevitable reply (4, Funny)

lilomar (1072448) | more than 6 years ago | (#19825713)

My mother is a gun-toting marxist redneck zealot astroturfer, you insensitive clod!

A Useful Tool (4, Funny)

Bios_Hakr (68586) | more than 6 years ago | (#19825147)

I run several websites off of a single host. If I need to log in to do maintenance during peak hours, I'm slowed by Apache and MySQL. This would be a nice utility if it were wrapped into sudo.

Re:A Useful Tool (3, Informative)

CastrTroy (595695) | more than 6 years ago | (#19825199)

You could always renice apache and mysql down to a lower priority, possibly in a log-on/log-off script which would change the priorities and then reset them when you log out.

Re:A Useful Tool (4, Insightful)

cichlid (463444) | more than 6 years ago | (#19825331)

"you could always renice apache and mysql down to a lower priority. Possibly in a log-on/log-off script which would change the priorities and then reset them when you log out."

Much easier to just renice your root shell automatically at login

Re:A Useful Tool (2, Insightful)

oglueck (235089) | more than 6 years ago | (#19825981)

Still, thread creation can kill you. Renicing a fork bomb won't give you more cycles for your shell.

Re:A Useful Tool (0)

Anonymous Coward | more than 6 years ago | (#19825207)

The 'nice' command might be something for you...

Re:A Useful Tool (1)

vsavkin (136167) | more than 6 years ago | (#19826139)

If you use sudo, you can just renice your root shell to a higher priority, or even make it a real-time process.

So, is vista security good enough.... (0)

Anonymous Coward | more than 6 years ago | (#19825149)

that others are starting to look to the *nix world for weaknesses? Once Windows is equal to or better than *nix in terms of security, then all the security and malware people will start looking at us.

Re:So, is vista security good enough.... (1, Informative)

Anonymous Coward | more than 6 years ago | (#19825191)

People have been looking for and exploiting *nix vulnerabilities long before Windows was on the scene.

Re:So, is vista security good enough.... (2, Interesting)

dbIII (701233) | more than 6 years ago | (#19825371)

Once windows is equal or better than *nix in terms of security

That isn't likely to happen without a change in attitude, due to both starting further behind and progressing more slowly. The current malware situation looks like bad SF, and like a morality tale about what happens when you allow really stupid things (e.g. letting arbitrary code embedded in images run; hopefully that person was dismissed from Microsoft).

Re:So, is vista security good enough.... (1)

KingMotley (944240) | more than 6 years ago | (#19826715)

You should pick better examples. That particular problem was caused by Microsoft using a very well known OPEN SOURCE library for handling image functions. It affected many applications (including ones on Linux). Now that you know that, are you still advocating that Microsoft should stop having anything to do with open source software? Didn't think so.

Google-cache article (3, Informative)

Anonymous Coward | more than 6 years ago | (#19825153)

For those harboring poisonous grudges against PDFs, the Googlerised HTML version is here [72.14.235.104].

Re:Google-cache article (5, Informative)

brunascle (994197) | more than 6 years ago | (#19825447)

And for those who don't have the time to read the paper...

It works by avoiding running during the exact moment of a clock tick (which is the moment when per-process CPU usage is checked). Starting to run immediately after a clock tick is (apparently) easy, but stopping before the next tick is harder. The paper suggests using some kind of get_cycles assembly instruction to count how many CPU cycles there are per clock tick, and using that number to gauge when the next clock tick will occur by counting how many cycles have elapsed.

Re:Google-cache article (4, Funny)

Bobb Sledd (307434) | more than 6 years ago | (#19826413)

And for those who don't have the time to read the paper...

It works by avoiding running during the exact moment of a clock tick (which is the moment when CPU usage...


--Uhm... (looks at watch...) Say, I really don't have time for wordy summaries... could you maybe cut this down into about 10 words or less? Hurry it up! I ain't got all day!

Re:Google-cache article (2, Insightful)

Anonymous Coward | more than 6 years ago | (#19826641)

Kind of like Alt-Tabbing off Slashdot when the PHB strolls by?

Security! (1)

wal9001 (1041058) | more than 6 years ago | (#19825157)

Ha! I told you Mac OS was more secure. What? Of course I'm not a fanboy! What gave you that idea! Jeez.

Re:Security! (1)

TheRaven64 (641858) | more than 6 years ago | (#19825483)

If you just want to DoS the box as a local user (which is all this lets you do, from a security standpoint), then there are much easier ways of doing this on OS X via the VM subsystem. So easy that I've managed to do it with my own code a couple of times purely by accident and had to power cycle the box to stop the process (the same code runs fine on FreeBSD, by the way, it just chews up a lot of memory).

Re:Security! (1)

pasamio (737659) | more than 6 years ago | (#19825917)

I've noticed a similar thing: more than anything, heavy disk I/O kills the system. Especially when I've been experimenting with virtual machine implementations on the platform, I've noticed heavy lag issues that I don't see on equivalent systems. Locking up an OS X box isn't too hard.

Re:Security! (0)

Anonymous Coward | more than 6 years ago | (#19825571)

At least this paper should help dispel that old "Mac OS X is BSD with eye candy" meme. While reading it, it's hard not to realize that XNU (the OS X kernel) and the BSD kernel are completely different beasts. Figure 1 in particular drives the point home: it shows that with respect to the timing model used, you have OS X and RTOSs on one side, and FreeBSD, Linux, Windows etc. on the other.

What the?! (4, Funny)

Rik Sweeney (471717) | more than 6 years ago | (#19825183)

Using up 99% of the CPU's easy!

int main(int argc, char *argv[])
{
    while (1) {}

    return 0;
}

Re:What the?! (1)

CaptainPatent (1087643) | more than 6 years ago | (#19825251)

But YOU have the privilege to eat all of your system resources. The point of the article is that an unprivileged user can while-lock your system and your OS will have no idea.

Re:What the?! (2, Informative)

AKAImBatman (238306) | more than 6 years ago | (#19825285)

This is a bit different. It's a way to convince the OS to give you more time slices than you'd normally be allocated. e.g. If you ran that program of yours twice at the same priority level, both instances should get ~50% of the CPU time. If one of the instances implemented this privilege boosting scheme however, it would get to hog all the CPU time while your other spinlocked program starved.

Re:What the?! (0)

Anonymous Coward | more than 6 years ago | (#19825949)

what if we ran two of these "cheat 99%" programs together??

Re:What the?! (0)

Anonymous Coward | more than 6 years ago | (#19826759)

Then the authors would get another grant and start working on the "metacheat" program.

Re:What the?! (0)

Anonymous Coward | more than 6 years ago | (#19825299)

I hate it when people omit random nouns from sentences. Using up 99% of the CPU's what? Monthly download allowance? Precious, nonrenewable natural resources? Digestive enzymes? What??

Re:What the?! (0)

Anonymous Coward | more than 6 years ago | (#19825427)

I'm sorry that bothers .

would you like a ?

Re:What the?! (1)

jshriverWVU (810740) | more than 6 years ago | (#19825843)

processing cycles. CPU's don't download or do anything other than compute finite math in cycles so a description really isn't needed.

Re:What the?! (1)

dgatwood (11270) | more than 6 years ago | (#19826815)

x...<-joke
o
+...<-you
/\

The plural of CPU is CPUs, not CPU's. CPU's is the possessive form of CPU.

Re:What the?! (0)

Anonymous Coward | more than 6 years ago | (#19826931)

WTF, grammar police? Who cares. I dont get paid to care about an ' or not. It's the concept of the message that matters. This is a tech forum not an english discussion troll.

Re:What the?! (1)

woodchip (611770) | more than 6 years ago | (#19825321)

This is better... #include #include int main(int argc, char *argv[]) { while (1) {} { fork(); } return 0; }

Re:What the?! (1)

eneville (745111) | more than 6 years ago | (#19825685)

> This is better...
> #include #include
> int main(int argc, char *argv[]) {
> while (1) {} { fork(); } return 0; }

No, no, that is not better; not at all, none whatsoever.

What you meant, I think, is:

int i;
while(1) {
    i = fork();
    if( i == 0 ) { /* only the child spins */
        while(1) {}
    }
}

In your loop the parent just spins: the empty while (1) {} never exits, so fork() is never even reached.

Re:What the?! (2, Insightful)

MajinBlayze (942250) | more than 6 years ago | (#19825693)

or, just

$ :(){ :|:& };:

But that really isn't the point here. This lets you run any arbitrary program using maximum resources (despite scheduling), AND hide the fact that the process is using *any* resources.

Syntax failure. (1)

Valdrax (32670) | more than 6 years ago | (#19826115)

Someone didn't preview and doesn't know how to use &lt; and &gt;.
Also, what's the deal with that empty block in between the "while (1)" and the "{fork ();}"?
Geez, if you're going to critique someone else's code, do a double-check on your own first.

Re:What the?! (0)

Anonymous Coward | more than 6 years ago | (#19825577)

For a time, my university's Linux lab had similar problems to this.

Each machine on the network allowed SSH access, and people would playfully log in remotely to each others' machines and execute something like

$ perl -e "fork while fork"

This would render the machine unusable... until about a year back, when something changed, which leads me to suspect that the kernel has had protection against this sort of thing for a while now.

Re:What the?! (2, Informative)

Random832 (694525) | more than 6 years ago | (#19825809)

"fork while fork" won't have an exponential effect, since fork returns 0 (false) in the child process, terminating the loop and making growth only linear. You'd need fork while true.

Per-user process limits (2, Insightful)

Valdrax (32670) | more than 6 years ago | (#19826185)

Besides the syntax comment the other poster said, it could've also been that the school implemented per-user process limits on the machine. Linux has had this capability for years and years; most people just don't bother setting it, but universities hosting machines for programming students pretty much have to set it for exactly this sort of thing, whether it be accidental or malicious.

Re:What the?! (2, Insightful)

francium de neobie (590783) | more than 6 years ago | (#19826211)

This would render the machine unusable... until about a year back, when something changed, which leads me to suspect that the kernel has had protection against this sort of thing for a while now.

I guess they just put an nproc limit on each user. It's a trivial security measure against simple fork bombs. Assuming your Linux system uses PAM (most modern distros do), take a look at /etc/security/limits.conf.
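For example, a fragment of such a limits.conf (the group name is illustrative) capping process counts:

```
# /etc/security/limits.conf
# Hard cap of 100 processes per user in the "students" group,
# and a soft default of 200 for everyone else.
@students    hard    nproc    100
*            soft    nproc    200
```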

Old news (3, Informative)

Edward Kmett (123105) | more than 6 years ago | (#19825221)

Not quite sure what justifies a paper out of this.

If you check the Linux kernel mailing list for Vassili Karpov, you should find test cases that demonstrate this behavior and tools for monitoring actual CPU usage on a variety of platforms, though I notice no mention of any of that in the paper.

See http://www.boblycat.org/~malc/apc/ [boblycat.org] for the tool and 'invisible CPU hog' test case.

Re:Old news (5, Informative)

Anonymous Coward | more than 6 years ago | (#19825465)

Publishing papers takes a lot of time, as anybody who has ever done it knows... For example, the post you mention is from Feb 2007; by then, according to the usenix-security call for papers, the paper had already been submitted. Also, googling "cheat" turned up this technical report: http://leibniz.cs.huji.ac.il/anon?View=1&num=1&pid%5B1%5D=870&abstract=1 [huji.ac.il] (seemingly the initial version of the paper), which is dated May 2006.

ok (3, Interesting)

nomadic (141991) | more than 6 years ago | (#19825223)

Back in my day we called it renice.







Yes, I'm kidding. Please don't post a long reply explaining how renice differs from this cheat thing. It isn't necessary.

Re:ok (1)

Gazzonyx (982402) | more than 6 years ago | (#19825445)

Back in my day we called it renice. Yes, I'm kidding. Please don't post a long reply explaining how renice differs from this cheat thing. It isn't necessary.
My good sir, you take all of the fun out of trolling slashdot while at work! Now I have no excuse to avoid working on the dbase (Access and VBA, ugh). Jerk.

Re:ok (1)

nomadic (141991) | more than 6 years ago | (#19825739)

Now I have no excuse to avoid working on the dbase (Access and VBA, ugh).

Eesh, you have fun with that.

Re:ok (1)

MajinBlayze (942250) | more than 6 years ago | (#19825935)

Now I have no excuse to avoid working on the dbase (Access and VBA, ugh)
I was an "Access Developer" for a while, even a consultant doing the same (yes, I would sell my soul for a dollar)
Now I have a *Real* job as part of a programming team working with a *mostly* real RDBMS.

you will be in my prayers, brother-in-arms.

The "sue" command (1, Insightful)

Anonymous Coward | more than 6 years ago | (#19825303)

Finally, the "sue" command of PC UNIX has been implemented.

Re:The "sue" command (4, Funny)

db32 (862117) | more than 6 years ago | (#19825811)

This is an outrage. You cannot 'sue' without lawyerd! What about the required functionality of 'sue --counter' and 'appeal'?!

It was news,... in 1980 (1)

Ancient_Hacker (751168) | more than 6 years ago | (#19825353)

I seem to recall usenet discussions about this circa the time of !uucp!newsglop!..... It seemed the Unix scheduler would let certain I/O operations hog the CPU, and if you somehow installed your app as an I/O driver or I/O completion routine, your app could hog the CPU. Similarly, since day one of Windows sound cards you could set your app to realtime_priority and everything else would suffer. Not exactly smokin' hot off the press.

Re:It was news,... in 1980 (1)

phasm42 (588479) | more than 6 years ago | (#19825721)

That's not what the paper talks about. The vulnerability is that the scheduler gathers statistics (used to make scheduling decisions) by checking who is running at every clock tick. By running only between clock ticks and never running at the time of a clock tick, your process can use a lot of CPU without the scheduler knowing.

Re:It was news,... in 1980 (1)

MajinBlayze (942250) | more than 6 years ago | (#19825773)

Yes, but did that let the program hide the fact that it was the process using up resources?
No. That's what makes this interesting. That, and the fact that the new multimedia-friendly schedulers are what make this possible.

First announced exploit.. (1, Funny)

SuperBanana (662181) | more than 6 years ago | (#19825383)

This year's Usenix security symposium includes a paper that implements a "cheat" utility, which allows any non-privileged user to run his/her program, e.g., like so 'cheat 99% program' thereby insuring that the programs would get 99% of the CPU cycles, regardless of the presence of any other applications in the system, and in some cases (like Linux), in a way that keeps the program invisible from CPU monitoring tools (like 'top').

Next up, a virus which senses bad grammar and punishes you by using 99% of your CPU. Seriously, somewhere a middle school English teacher is crying, and doesn't know why.

Sounds great in some respects. (1)

jshriverWVU (810740) | more than 6 years ago | (#19825401)

I've even gone as far as compiling a minimal Linux distribution for one of my test machines so my CPU-intensive application can squeeze out every last drop of performance, beyond the normal renice -20.

Curious how this works.

Re:Sounds great in some respects. (2, Informative)

cnettel (836611) | more than 6 years ago | (#19825651)

It works by sleeping at the right point in time. You really hack up the timeslices and decrease the overall efficiency (more context switches), so it's only good if you want to steal cycles where you are not really allowed to.

Talk about a fair share scheduler ! (5, Insightful)

ivan_w (1115485) | more than 6 years ago | (#19825423)

I wasn't aware the schedulers for those systems were so deficient!

In my days (yes, I'm an old fart) the schedulers had basic principles:

- Voluntary yielding still got you accounted for the time you spent running.
- You could stay in the interactive queue for only a certain amount of time. After some amount of time had passed (a few secs) you were either bumped to non-interactive if you were running (with longer time slices but lower priority) or removed from the scheduler list for good (if the time spent there was idle). They had a special 'idle but interactive' (not eligible for dispatching) queue for that.
- Scheduling a new task started a new time slice.

That particular scheduler even had a 3-queue system, so that if you got accidentally bumped into the non-interactive queue, or if your process was semi-interactive, you had a better chance of regaining interactive status. And they had a 'really not interactive' queue for those CPU-hogging processes.

Of course this requires the hardware to have a precise timing feature (with a granularity finer than the scheduling time slice, and ideally on the order of instruction execution). And this scheduler didn't use time sampling and time quanta, but something more like the OS X timer-on-demand paradigm.

--Ivan

How It Works (5, Informative)

Shimmer (3036) | more than 6 years ago | (#19825453)

The cheat program hogs the CPU by using it when the host OS isn't looking. As a result, it avoids the scrutiny of the OS's scheduler and is actually given a priority boost by some schedulers because of its good behavior.

This is accomplished by sleeping for a fixed amount of time between OS clock ticks. The timeline looks like this:
  1. Hardware is set to generate a "tick" event every N milliseconds.
  2. Tick event occurs, which is handled by the OS.
  3. OS notes which process is currently running on the CPU and bills it for this tick.
  4. OS wakes up cheating process, which is currently sleeping, and allows it to run.
  5. Cheating process runs for M (< N) milliseconds, then requests to go to sleep for 0 milliseconds. This causes the cheating process to sleep until just after the next tick.
  6. Repeat from step 2 above.

Tickless? (1)

Azuma Hazuki (955769) | more than 6 years ago | (#19825873)

I recently saw a "tickless" option in the kernel config. Would using that solve this problem? I'm not a kernel hacker by any means; knowing enough to run a clean Gentoo with no issues doesn't necessarily imply programming talent.

Re:How It Works (0)

Anonymous Coward | more than 6 years ago | (#19826709)

I wonder why this works at all.

First of all, the best solution would be to measure the time (obviously not in ticks, but something smaller) a user process has taken since the last tick.

If that is not possible, because a tick is so damn short, then simply charge all user processes one tick. A normal (non-cheating) process should not be activated and pre-empted more than a few times, so the few ticks it "loses" won't hurt a bit.

Back at NYIT we hacked the "nice" command... (2, Funny)

Thagg (9904) | more than 6 years ago | (#19825485)

We had a user who insisted on abusing the "nice" command, to run his jobs at a higher priority. Pleading and cajoling didn't work, so we decided to get creative.

We changed nice so that whenever this particular user ran it, it lowered his priority by exactly as much as he was attempting to raise it.

He stopped coming to work soon after that. I suppose he had the last laugh though -- NYIT continued to pay him for another six months.

Thad

Re:Back at NYIT we hacked the "nice" command... (1)

Random832 (694525) | more than 6 years ago | (#19825865)

What system is this that allows "nice" to raise priority for users other than root?

And, you do realize that "nice" with a positive argument lowers priority.

sweet! (1)

SolusSD (680489) | more than 6 years ago | (#19825489)

Does it work on Solaris? If so, I can run my sparse distributed memory simulator on the comp sci dept's main server without waiting hours to get results!

Re:sweet! (1, Informative)

Anonymous Coward | more than 6 years ago | (#19825911)

The article seems to indicate that the cheat gets more throughput than non-cheating threads on Solaris 10. However, it appears that it would be trivial to reveal such a cheat with the dtrace sched provider and one of its probes, such as remain-cpu:

http://docs.sun.com/app/docs/doc/817-6223/6mlkidll8?a=view [sun.com]

*BSD? (1)

KlaymenDK (713149) | more than 6 years ago | (#19825501)

All prevalent operating systems but Mac OS X are vulnerable
How does this reflect on the BSDs? (FreeBSD for being the closest relative, and OpenBSD for its goal of trying "to be the #1 most secure operating system")

Re:*BSD? (0)

Anonymous Coward | more than 6 years ago | (#19825551)

It's not a BSD kernel, only a BSD userland, so the BSD kernel probably is vulnerable, or else not prevalent ;)

Re:*BSD? (0)

Anonymous Coward | more than 6 years ago | (#19825629)

FreeBSD for being the closest relative

Mac OS is not FreeBSD. It's got a Mach kernel. It just uses lots of bits from FreeBSD, but not the ones in question.

and OpenBSD for its goal of trying "to be the #1 most secure operating system"

This looks like an efficiency issue, not a security issue.

Re:*BSD? (1)

KlaymenDK (713149) | more than 6 years ago | (#19825785)

FreeBSD for being the closest relative
Mac OS is not FreeBSD. It's got a Mach kernel.
I know, but in broad terms (especially in people's minds) it still seems to be "the closest" (just as "the best" Lotus Notes database need not be "a good" database ;-p ).

and OpenBSD for its goal of trying "to be the #1 most secure operating system"
This looks like an efficiency issue, not a security issue.
And yet, hogging the CPU might be indistinguishable from a DoS, at least from the perspective of other users.

Re:*BSD? (0)

Anonymous Coward | more than 6 years ago | (#19825631)

The paper shows how they fooled FreeBSD, which was somewhat harder.

Here's the difference (2, Informative)

KlaymenDK (713149) | more than 6 years ago | (#19825893)

(reply to self after RTFA)

What 'saved' the Mac OS was its different use of timing triggers. "All" other OSes use one common, steadily ticking clock as the dealer of time slots. This allows the cheat to "skip to the start of the line (queue)" every time it's had its turn.

OTOH, the Mac uses a stack of alarms set for specific points in the future, processed in order as they occur. So the difference on Mac OS is that there's no skipping the queue; rather, "there is no queue, we'll call you when it's your turn".

I don't know the details of the OpenBSD scheduler, but it very likely uses the same clock-tick method as the rest of the susceptible OSes.

Linux 2.6.21 is probably immune too (5, Informative)

Wyzard (110714) | more than 6 years ago | (#19825553)

According to the paper, the reason Mac OS X is not vulnerable is that it uses one-shot timers scheduled for exactly when the next event needs to occur, rather than periodic "ticks" with a fixed interval between them. The "tickless idle" feature introduced in Linux 2.6.21 (currently only on x86, I believe) takes the same approach, and very possibly makes Linux immune too.

(Ironically, immediately after discussing OSX's ticklessness, the paper mentions that "the Linux 2.6.16 kernel source tree contains 8,997 occurrences of the tick frequency HZ macro, spanning 3,199 files", to illustrate how difficult it is to take a tick-based kernel and make it tickless. But those kernel hackers went and did it anyway.)

The tickless feature isn't yet implemented on all architectures that Linux supports, though. I think AMD64 support for it is supposed to come in 2.6.23, along with the new CFS scheduler.

Re:Linux 2.6.21 is probably immune: RDTSC? (2, Informative)

redelm (54142) | more than 6 years ago | (#19826177)

A very cheap, simple patch would be to add RDTSC reads at process resume and at each blocking syscall, to count the cycles actually used. That way the extensive tick code doesn't need to be modified.

Wait a minute (0)

Anonymous Coward | more than 6 years ago | (#19825555)

If this program consumes CPU cycles, but it doesn't leave any indication that it does, how do I know that it works?

Re:Wait a minute (0)

Anonymous Coward | more than 6 years ago | (#19825827)

Measure the time it takes until the program ends, with and without cheating.

How long till the patch is in place? (0)

Anonymous Coward | more than 6 years ago | (#19825557)

How likely is it that cheat has already been in the wild for a while?

I have noticed that some tools show ~5-minute bursts of 100% CPU usage, but top, sorted by CPU usage, peaks at ~5% and shows 90% idle.

One Clock to Rule Them All (1)

deweycheetham (1124655) | more than 6 years ago | (#19825699)

Tick-Based Accounting vs. Time-Sliced/Sample-Based Billing

(Reminds me of some zombie processes I have seen in the past.)

Fixed recently in Linux (4, Informative)

iabervon (1971) | more than 6 years ago | (#19825705)

They took too long to publish this. Linux 2.6.21 (released in April) added support for using one-shot timers instead of a periodic tick, so it avoids the problem like OS X does. In addition to resolving this issue, tickless is important for saving power (because the processor can stay in a low-power state for long enough to get substantial benefits compared to the power cost of starting and stopping) and for virtual hosting (where the combined load of the guest OS scheduler ticks is significant on a system with a large number of idle guests). As a side effect, while the accounting didn't change at that point, the pattern a task has to use to fool the accounting became impossible to guess.

CFS additionally removes the interactivity boost: instead of extra time, interactive tasks just get quick access to the time they're entitled to, which is what they really benefit from.

Re:Fixed recently in Linux (0)

Anonymous Coward | more than 6 years ago | (#19826073)

From reading the CFS documentation, I suspect Ingo read (or at least heard of) this paper, which according to one of the comments above has been available online for more than a year. This is probably what Ingo means by saying "the CFS scheduler is not prone to any of the 'attacks' that exist today"; see http://kerneltrap.org/node/8059 [kerneltrap.org]

How To Defend Against This Attack (1)

Shimmer (3036) | more than 6 years ago | (#19825723)

The crux of the problem is that the OS uses statistical sampling to account for CPU usage by user processes. Since the sampling occurs at regular intervals, it can be avoided by a cheating program. I can see two possible defenses against this:
  1. Modify the sampling mechanism so that it occurs at irregular intervals. This makes it difficult (but probably not impossible) for the cheater to avoid the sampler. (Apparently, the Mac OS uses this technique, although not for security reasons.)
  2. Modify the accounting algorithm so that it is not statistical. Since the OS is responsible for waking/sleeping all processes, it can know exactly how much CPU time each one is using. This would completely eliminate the problem.
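Defense 1 is easy to demonstrate with a toy simulation (my own sketch; the 80% duty cycle and window sizes are invented numbers). A cheater that is busy everywhere except in a small window around each tick boundary completely fools a fixed-interval sampler, but a randomly-timed sampler sees its true usage:

```python
import random

TICK = 1.0    # sampling period (arbitrary units)
DUTY = 0.8    # cheater is busy 80% of each tick...

def cheater_busy(t):
    # ...but always asleep in a small window around each tick boundary,
    # which is exactly when a fixed-interval sampler looks.
    phase = t % TICK
    return 0.05 < phase < 0.05 + DUTY * TICK

random.seed(0)
n = 100_000
fixed = sum(cheater_busy(i * TICK) for i in range(n)) / n
rand = sum(cheater_busy(random.uniform(0, n * TICK)) for _ in range(n)) / n

print(f"fixed-interval sampler sees {fixed:.0%} busy")  # prints 0%
print(f"random sampler sees {rand:.0%} busy")           # ~80%
```

The cheater can dodge any sampling schedule it can predict; randomizing the schedule removes the predictability.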

Re:How To Defend Against This Attack (0)

Anonymous Coward | more than 6 years ago | (#19826915)

Not at all. The point is that Mac OS X does NO sampling (or ticking) at all, not that it samples "irregularly". Since it does not use ticks, it cannot sample, and instead uses accurate measurement.
It's true, however, that they don't do this for security reasons, but for the other benefits one-shot timing provides.

Hmm... (0)

Anonymous Coward | more than 6 years ago | (#19825745)

Sounds like somebody's discovered Java!

The sysadmin's best defense isn't a new scheduler (1, Funny)

Anonymous Coward | more than 6 years ago | (#19825853)

It's a baseball bat.

It doesn't even matter if these CPU-hogging processes can hide from "top" - you should already be making regular rounds of your users, even the ones you haven't caught doing anything wrong. Nobody questions it when you tell them, "You know what you did." Not when you're the one with the bat.

Summary and Questions (5, Informative)

Aaron Isotton (958761) | more than 6 years ago | (#19826011)

The paper is quite long, so here's a summary (take this with a grain of salt; anyone who wants accurate information should still RTFP):

Most OSes (Linux, Solaris, Windows but not Mac OS X) are tick-based. This means that the kernel is called from hardware periodically (this is the "HZ" value you set in the Linux kernel). Some of them (Linux) simply check which process is running at each tick and compute statistics based on that ("sample-based statistics"). This means that the process running when the tick happens is billed for the entire period of the tick.

Since ticks are relatively long (typically 1-10 ms on Linux), more than one process may run during a tick period. In other words, this approach leads to inaccuracies in process billing. If all programs "play by the rules", though, it works quite well on average.

Next thing: the classic schedulers typically maintain some sort of "priority" value for each process, which decreases whenever the process is running and increases when it's not. This means that a process runs for some time, its priority decreases, and then another process (which hasn't been running for some time) takes over.

You can exploit this by always sleeping when a tick happens and running only in between ticks. This makes the kernel think that your process is never running, so it gives it a high priority. When your process wakes up just after a tick, it will have a higher priority than most other processes and be given the CPU. If it goes back to sleep just before the next tick, its priority will not be decreased. Your process will (almost) always run when it wants to, while the kernel thinks it's (almost) never running and keeps its priority high. You win!
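In rough user-space terms the pattern looks like this (an illustrative sketch only, not the paper's code; it assumes a 10 ms tick, i.e. HZ=100, and uses Python for brevity where the real attack would target the actual kernel HZ):

```python
import time

TICK = 0.010   # assumed scheduler tick: 10 ms (HZ=100)
WORK = 0.8     # fraction of each tick spent computing

def cheat(n_ticks):
    """Burn CPU for most of each tick, but sleep across the tick
    boundary so a tick-based sampler never catches us running."""
    busy = 0.0
    for _ in range(n_ticks):
        start = time.monotonic()
        while time.monotonic() - start < TICK * WORK:
            pass  # "useful" work goes here
        busy += time.monotonic() - start
        # go to sleep just before the next tick fires
        remaining = TICK - (time.monotonic() - start)
        if remaining > 0:
            time.sleep(remaining)
    return busy

used = cheat(50)
print(f"spent ~{used:.3f}s busy in ~0.5s of wall time")
```

(The busy figure is measured with the wall clock, so it only approximates CPU time on an otherwise idle machine; the real paper synchronizes to the actual tick phase rather than assuming it.)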

Another aspect is that modern kernels (at least Linux and Windows) distinguish between "interactive" (e.g. media players) and "non-interactive" processes. They do so by looking at how often a process goes to sleep voluntarily. An interactive program (such as a media player) will sleep voluntarily many times (e.g. in between displaying frames), while a non-interactive program (e.g. a compiler or some number-crunching program) will likely never go to sleep voluntarily. The scheduler gives interactive programs an additional priority boost.

Since the cheating programs go to sleep very often (at every tick), the kernel considers them "very interactive", which makes the situation worse.
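The heuristic can be caricatured like this (my own toy model with invented constants, not the real 2.6 scheduler arithmetic):

```python
# Toy model of a sleep-based interactivity bonus: the more often a task
# sleeps voluntarily, the bigger its priority boost (lower number =
# higher priority, as in Linux).
def effective_priority(base_prio, voluntary_sleeps):
    bonus = min(10, voluntary_sleeps // 100)  # capped boost
    return base_prio - bonus

cruncher = effective_priority(120, voluntary_sleeps=0)    # no boost
cheater = effective_priority(120, voluntary_sleeps=1000)  # max boost
print(cruncher, cheater)  # 120 110
```

A cheater that sleeps at every tick racks up voluntary sleeps by construction, so it gets the maximum boost despite hogging the CPU.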

Some of the analyzed OSes, even if tick-based, keep accurate per-process statistics in the kernel but still use sample-based statistics for scheduling decisions. So the kernel sees that a process is taking more CPU than it should, yet keeps on scheduling it.

Mac OS X is not affected because it has a tickless kernel (i.e. no periodic interrupts). Because of that, sample-based statistics don't work, and it has to use accurate statistics, which makes it unaffected by the attack.
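The difference between sampled and accurate accounting is visible even from user space (a quick illustration using Python's portable per-process CPU clock, nothing OS X-specific): a process that sleeps half the time accrues CPU time at only half the wall-clock rate.

```python
import time

wall0, cpu0 = time.monotonic(), time.process_time()

deadline = time.monotonic() + 0.1
while time.monotonic() < deadline:  # ~0.1 s of actual CPU work
    pass
time.sleep(0.1)                     # ~0.1 s asleep: no CPU consumed

wall = time.monotonic() - wall0
cpu = time.process_time() - cpu0
print(f"wall: {wall:.2f}s  cpu: {cpu:.2f}s")  # cpu is roughly half of wall
```

An accurate accounting like this cannot be dodged by sleeping at the right moments, because nothing is sampled in the first place.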

This bug can be exploited to (at least)

- get more CPU than you're supposed to
- hinder other programs in their normal work
- hide malicious programs (such as rootkits) which do work in the background

Here's a list with the OSes (this USED TO BE a nicely formatted table, but the darned Slashdot "lameness filter" forced me to remove much of the nice lines and the "ecode" tag collapses whitespace).

OS           | Process statistics | Scheduler decisions | Interactive/non-interactive decision | Affected
Linux        | sample             | sample              | yes                                  | yes
Solaris      | accurate           | sample              | ?                                    | yes
FreeBSD 4BSD | ?                  | sample              | no?                                  | yes
FreeBSD ULE  | ?                  | sample              | yes                                  | yes
Windows      | accurate           | sample              | yes                                  | yes
Mac OS X     | accurate           | accurate            | not needed?                          | no

I guess that Mac OS X doesn't need an interactive/non-interactive distinction because of its different (tickless) approach. I assume that interactive applications can be recognized as such (implicitly or explicitly) in a different way. Does anyone have more information on that?

How does tickless Linux compare? What about CFS? The paper is about 2.6.16.

RANT: The slashdot lameness filter sucks. And the fact that whitespace inside an ecode tag is collapsed sucks even more. This should be turned off for registered users (or maybe just registered users with a positive karma).

Re: MOD PARENT UP! (0)

Anonymous Coward | more than 6 years ago | (#19826415)

n/t

Clever but what loss? (2, Insightful)

redelm (54142) | more than 6 years ago | (#19826093)

Yield()ing just before the timer tick is a neat trick to grab cycles, but what use are cycles? This might have been interesting on time-sharing machines 20 years ago, but now cycles are in gross surplus on most machines, and processes are carefully controlled on loaded machines. Until this piggy can be remotely deployed, it isn't much of a hazard.

A very simple patch is to issue RDTSC instructions at process restart and at each blocking syscall to count the cycles actually used. That way the extensive tick code doesn't need to be modified.

Malware (1)

Wesley Felter (138342) | more than 6 years ago | (#19826171)

The point of the paper is that you could have some malware using 99% of your CPU and it wouldn't even show up in top.

Re:Malware (1)

redelm (54142) | more than 6 years ago | (#19826253)

I think it shows up in `top` as sleeping. And what malware needs cycles? Mostly malware wants ports (especially outbound SMTP on port 25) or perhaps disk (searching). Protect the resources that need protecting!

Re:Clever but what loss? (0)

Anonymous Coward | more than 6 years ago | (#19826605)

Firstly, cycles are valuable when e.g. users share a cluster. Secondly, RDTSC has legitimate use scenarios, and many applications that need finer timing resolution than gettimeofday provides depend on it. You can't just block it...

Re:Clever but what loss? (1)

m50d (797211) | more than 6 years ago | (#19826701)

As others pointed out, this could be very useful on shared hosting.

Way back in the '90s (2, Interesting)

kithrup (778358) | more than 6 years ago | (#19826629)

Chris Torek gave a presentation at USENIX about how a constant quantum could result in a process's CPU usage going unaccounted.

His solution was to use a randomized quantum. Not unique per process, but randomized each time the kernel starts running a process. That gave you a better accounting of the CPU time (statistics, doncha know :)), but also made this kind of attack much, much harder.

I'm somewhat disappointed that I did not see Chris and Steven's paper referenced in this one. (I believe that the title of that paper was "Randomized Sampling Clock for CPU Utilization Estimation and Code Profiling," for those who care to find it.)
