
How Well Do Most OSes Handle Resource Management?

Cliff posted more than 13 years ago | from the running-a-tight-and-efficient-system dept.

Unix 10

schlika asks: "After running into some trouble with a highly loaded Web server running Linux 2.2.12, I read some information about its max thread/process limitation and 'not-so-great' virtual memory management. Could some of you comment on such issues with other OSes such as FreeBSD (which everyone says is better for that task, without ever giving explanations), OpenBSD, NetBSD, Solaris, Linux 2.4, and others? Please give real-world examples/comparisons if possible."


Re:FreeBSD is nice (1)

Chaostrophy (925) | more than 13 years ago | (#517538)

Linus has said the 2.4 kernel will be vastly better under high load. He reported recently that he did some testing to see how responsive it was under heavy load, and it did much better than the 2.2 series. This was a perception test of how the system felt, so no hard numbers are available.

The BSDs do their thing well.

But I'm happy with Linux.

Re:Unix (1)

whydna (9312) | more than 13 years ago | (#517539)

Linux has the same setup: specifying the maximum amount of RAM a user can use, and so on. It works, but I don't think it's what the original poster needs.

Re:Unix - Solaris - Workload Manager (1)

AtariDatacenter (31657) | more than 13 years ago | (#517540)

FWIW, Solaris has an (extra-purchase) workload manager that'll do exactly that. You divide the system up into shares and say which groups get what share of resources (memory, CPU, I/O). Then you can drill down further to individual users. And because you're using shares and not percentages, idle resources in one group can be used by other groups.
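Later Solaris releases expose the same shares idea through the Fair Share Scheduler and /etc/project; a rough sketch (the project names, IDs, and share counts here are made up for illustration):

```
# /etc/project: give "web" twice the CPU shares of "batch" under FSS.
# Format: name:projid:comment:users:groups:attributes
web:100:web servers:::project.cpu-shares=(privileged,20,none)
batch:101:overnight jobs:::project.cpu-shares=(privileged,10,none)
```

Because these are shares rather than hard percentages, batch can soak up all the CPU when web is idle.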

Re:FreeBSD is nice (1)

Pygmy Marmoset (65910) | more than 13 years ago | (#517541)

Troll, eh? I hope that gets hit in metamod. It's not like I tried to hide the fact that I was linking to a sex site; if you can't figure out that "sextracker" is porn-related...

Ah well, such is life.

Tuning (1)

pete-classic (75983) | more than 13 years ago | (#517542)

At this time you still have to do some tuning if you are really going to sock the crap out of a Linux webserver.

With 2.2 you have to do some (minor) kernel hacking and some OS tweaking to increase the number of processes and open files.

It is also important to note that each process consumes a file handle (I believe), so that can become a limitation.

Start by doing ulimit -a; if you can only have 256 processes (or your webserver can only have 256), you aren't tuned.

From there, search Google for "nproc linux kernel apache" or similar for more info.
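A quick way to see where you stand (bash shown; exact flags vary by shell):

```shell
# Show all per-process resource limits for the current shell
ulimit -a

# The two limits that usually bite a busy web server:
ulimit -u    # max user processes (256 is a common untuned default)
ulimit -n    # max open file descriptors
```

If `ulimit -n` comes back as something like 256 or 1024, Apache will hit the wall long before the hardware does.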


Re:Unix (1)

naspa (265226) | more than 13 years ago | (#517543)

You are right on most points, but slightly offtopic. The problems you mention pertain to the (cheap) PC architecture, which you compare to (expensive) mainframes. Also, the tasks most workstations and servers face are "online", so there is no scheduling an entire night's work ahead of time.

I think Unix does a more than decent job for the given hardware and the given problems. If you knew all the jobs and their requirements ahead of time, there are a hundred algorithms in any OS book for attaining optimality; it is a solved problem, really. Compare this with "online" problems, where the OS gets a request that has to be answered ASAP, and the best you can do is multiplex everybody.

Modern unices do not spend more than 0.1% of their time on task switching. I also do not believe your figure of a long-term average of 30% machine utilization under Unix. If you mean that the rest is spent thrashing the swap, then that is part of the problem to be solved on the existing hardware. It is physically impossible to serve, say, random 20MB of requests from a 1000MB disk using 8MB of RAM without thrashing the disk (if you want the illusion of responsiveness, which means multiplexing). The mainframe is not a silver bullet; you are just applying today's comparative power to 20-year-old problems. The mainframes are still out there, in the number-crunching community, overnight database jobs, etc., but your average office, home, university, or company lab relies on Unix/Windows.

Re:Unix (2)

main() (147152) | more than 13 years ago | (#517544)

Well, FreeBSD (and others) have login.conf for specifying resource limits.

Admittedly, these only operate per-user, not per-process. A lot of people choose to run important processes under a specific user anyway (news, bind).
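For reference, a login.conf class along these lines might look like the sketch below (the class name and limit values are made up; see login.conf(5) for the full list of capability names):

```
# /etc/login.conf (FreeBSD): cap resource use for a hypothetical
# "daemons" login class. Assign it to a user with something like
# `pw usermod news -L daemons`, then rebuild the capability database
# with `cap_mkdb /etc/login.conf`.
daemons:\
	:maxproc=128:\
	:openfiles=512:\
	:datasize=256M:\
	:tc=default:
```

Running news, bind, etc. under their own users is what makes these per-user limits act as de facto per-service limits.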


Remember how things used to be? (2)

shippo (166521) | more than 13 years ago | (#517545)

Remember how things used to be not so many years ago?

Disk caches were statically set at boot time; changing the cache size required a reboot. You calculated how much memory your processes would need and used the rest for cache. Finding the optimum value was not easy, particularly if process sizes varied over the day.

We also had to cope with limits on concurrent socket connections, concurrently open files, inode tables, and so on. These could only be changed by tweaking kernel parameters, and some could not even be monitored easily. It wasn't always easy to guess the best values to set, given that increasing them increased the memory used by the kernel, that memory could not be used elsewhere, and memory was not as cheap and plentiful as it is now.

They don't know they're born today! And yes, I am a Yorkshireman (born on Yorkshire day too!).

FreeBSD is nice (3)

Pygmy Marmoset (65910) | more than 13 years ago | (#517546)

I started working at a FreeBSD shop, coming from a Linux background.

We do some pretty amazing things with FreeBSD. We have tons of servers doing 100-300 requests/second with Apache, others that have sustained 40-50 Mbit/s (on a 100 Mbit NIC, Intel hardware), and all kinds of other crazy stuff.

I've ssh'd into machines that were getting hammered, with the load average in the 700s and disk/swap/RAM/CPU all taking a beating, and I was still able to do what I needed. When I've dealt with Linux machines under conditions nowhere near as bad as that, it was a total nightmare to even get logged in, let alone do anything.

I still use Linux for my workstation, because I love the desktop goodies, games, and Debian, but for high-performance servers it's hard to beat FreeBSD.

Unix (4)

sql*kitten (1359) | more than 13 years ago | (#517547)

The answer is (and this is a bit of a rant) that almost all Unix implementations handle resources terribly.

Unix allocates CPU time clumsily; nice and pbind are about as much control as a sysadmin has over a running process, other than stopping it altogether and restarting it. Contrast this with OS/390 or VMS, where the sysadmin can control exactly how much CPU a process gets and the size of its working set, and can migrate processes between nodes in a cluster.

IBM has a tool called the Workload Manager. It configures your system based on what you want to do, not how you want to do it. For example, you say that this batch job must complete by this time in the morning, this class of transaction must complete within this time, and this group of users gets no more than 10% of CPU in the morning and 30% in the afternoon, and WLM will configure your cluster to do it, if it is physically possible.

You can run a mainframe-class OS at 90% of the machine's capacity consistently; a Unix system rarely exceeds 30% of its capacity when averaged over a period of time. It simply spends too much time either waiting for things or trying to manage its own workload.

And what's worse, the CPU gets involved in every I/O on a Unix system, because of the way buffers work. Every disk block gets transferred by a CPU through an operating system buffer. When you edit on a Unix box, every single character goes to the CPU and gets echoed back, and on the network, every character gets a packet sent back and forth. VMS deals with records, whole lines of text, even at the network protocol level.
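For comparison, the coarse-grained Unix controls mentioned above amount to roughly this (the PID shown is hypothetical):

```shell
# About all the CPU control a stock Unix admin gets: static priorities.
# Start a job at reduced priority (a higher nice value means a smaller
# share of the CPU under contention):
nice -n 10 sh -c 'echo low-priority job done'

# Lower the priority of an already-running process (PID 1234 hypothetical):
# renice +10 -p 1234

# Solaris only: pin a process to processor 0:
# pbind -b 0 1234
```

Note there is no way here to say "this job must finish by 6am" or "this group gets 10% of CPU"; that is exactly the gap WLM-style goal-based management fills.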

And don't even get me started on the lost+found directory. You don't get that on an industrial grade file system, because it's journalled to ensure consistency.

Thank you for listening.
