Ask Slashdot: Is an Open Source .NET Up To the Job?

DuSTman31 Re:Why bother? (421 comments)

>why is Apache still spawning processes for every request that comes in... don't they realize the overhead of that??

I'm not sure whether you're serious about that or not, but:

  • Since Apache 2 you can choose between multi-processing modules (MPMs). The prefork MPM replicates the Apache 1.x model, maintaining a pool of forked-off worker processes.
  • Apache 1 and Apache 2's prefork MPM don't use one process per request unless you specifically configure them to. Worker processes are retired after serving a number of requests (usually a few hundred), which helps contain the impact of any memory leaks that may have crept in.
  • Apache 1 and Apache 2's prefork MPM don't wait for a request to arrive before spawning new worker processes; workers are spawned in the background and incoming requests are handed over to them, so a request never has to wait for a worker to start.
  • Though you can argue that Windows is reasonably performant in general, one primitive where Linux is far faster is process creation. Processes are meant to be cheap on Unix systems, and are used as such.
  • Separating things out into multiple processes helps contain the effect of any bugs: a worker process can crash all it wants without taking down the service as a whole.

I tend to use the prefork MPM on servers for the isolation, unless I'm expecting a tonne of traffic, but by all means use the worker MPM, which runs a large number of threads across a small number of processes.
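For illustration, the pool behaviour described above is controlled by a handful of tuning directives. A minimal sketch (the directive names are real Apache 2.4 prefork settings; the numbers are placeholder values, not recommendations, and Apache 2.0/2.2 spelled the last two `MaxClients` and `MaxRequestsPerChild`):

```apache
# Prefork MPM: a pool of single-threaded worker processes.
<IfModule mpm_prefork_module>
    StartServers             5    # workers forked at startup, before any request arrives
    MinSpareServers          5    # always keep at least this many idle workers ready
    MaxSpareServers         10    # kill off idle workers beyond this count
    MaxRequestWorkers      150    # upper bound on simultaneous worker processes
    MaxConnectionsPerChild 200    # retire a worker after ~200 connections to bound leaks
</IfModule>
```

The spare-server settings are what let a request be handed to an already-running worker rather than waiting on a fork, and `MaxConnectionsPerChild` is the periodic-retirement mechanism mentioned above.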

about a month ago

If ET Calls, Who Speaks For Humanity?

DuSTman31 Me (371 comments)

I'll do it.

more than 4 years ago

A Good Reason To Go Full-Time SSL For Gmail

DuSTman31 Cache relevancy depletion (530 comments)

One thing I find somewhat counterproductive is that browsers do not save files sent over SSL in their caches.

It's sensible, I suppose, to assume that anything sent over an SSL channel is sensitive and therefore shouldn't be saved to disk, but this imposes a speed and bandwidth hit that deters the use of SSL for everyday browsing.

You could, of course, transmit the HTML over SSL and the supporting images over plain HTTP, but then the browser will scare people with a warning that not all content on the page is secure.

I think browsers should look at encrypting their cache files, so that content fetched over SSL can be accommodated without breaking caching.
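There is also a partial server-side lever here: HTTP/1.1 (RFC 2616 §14.9.1) lets a site mark a response as explicitly cacheable with `Cache-Control: public`, which some browsers take as permission to cache even over SSL. A sketch of the relevant response headers (the ETag value and lifetime are illustrative, not from any real server):

```http
HTTP/1.1 200 OK
Content-Type: image/png
Cache-Control: public, max-age=86400
ETag: "abc123-example"
```

Whether a given browser honours `public` for SSL responses varies, which is exactly why a client-side fix like encrypted caches would be the more robust solution.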

more than 6 years ago


DuSTman31 hasn't submitted any stories.


DuSTman31 has no journal entries.
