
Covalent's Version of Apache 2.0 To Drop Monday

timothy posted more than 12 years ago | from the this-will-get-confusing dept.

Apache 85

kilaasi points out this CNET story about the planned release on Monday of Apache 2.0, "or at least the version that has proprietary extensions. Covalent sells the core of Apache and its own extensions, which make it easier to adapt for specific areas and simpler to administer. Covalent is confident that the next-generation Apache is mature and ready for prime time. Covalent employs some of the core members of the Apache development team." XRayX adds a link to Covalent's press release, writing: "It's not clear when the Open Source Edition (or whatever) will come out and I didn't find anything at the official Apache Site." Update: 11/10 16:37 GMT by T : Note that the product name is Covalent Enterprise Ready Server; though it's based on Apache software, this is not Apache 2.0 per se. Thanks to Sascha Schumann of the ASF for the pointer.


FP! (-1, Troll)

Troll XP (535651) | more than 12 years ago | (#2548262)

:(

Final Fantasy VI (-1, Offtopic)

Anonymous Coward | more than 12 years ago | (#2548264)

Is the best game ever made. Kefka rules.

Frost Piss! (-1)

The Turd Report (527733) | more than 12 years ago | (#2548266)

Well, it is a beautiful Saturday morning here in Ashburn, Virginia. How is everything in your part of the world?

Re:Frost Piss! (-1)

c_g_hills (110430) | more than 12 years ago | (#2548284)

Nasty gray skies here in Birmingham, England. I can't wait to move back to Sydney; England is an awful place to live.

Re:Frost Piss! (-1)

The Turd Report (527733) | more than 12 years ago | (#2548294)

Sorry to hear that. I hope things clear up, or you get the hell out of there soon.


Re:Frost Piss! (-1)

King Africa (262341) | more than 12 years ago | (#2548314)

This is what is going on in my country today:

Heavy fighting between christians and muslims left 2600 dead...
A cult killed 400 people...
1000 tribesmen killed during uprising against the government...
Estimated 2000 rapes/day...
11% of the population has HIV...

Re:Frost Piss! (-1)

The Turd Report (527733) | more than 12 years ago | (#2548342)

So, it is an average day there, eh?

Me (-1)

c_g_hills (110430) | more than 12 years ago | (#2548270)

I am not a troll :(

This is a Troll Article (-1)

The Turd Report (527733) | more than 12 years ago | (#2548287)

Only Trolls have posted so far. I claim this article in the name of the Troll Empire!

Re:This is a Troll Article (-1)

Fucky the troll (528068) | more than 12 years ago | (#2548299)

Power to the trolls!





Re:This is a Troll Article (-1)

c_g_hills (110430) | more than 12 years ago | (#2548409)

Plz see my previous post.

Re:This is a Troll Article (-1)

The Turd Report (527733) | more than 12 years ago | (#2548415)

C'mon. Admit that you are a troll. There is nothing wrong with that. And you will not be a hypocrite like all the editors and slashbots here.

Static PHP + scripts running as users (5, Informative)

chrysalis (50680) | more than 12 years ago | (#2548290)

One of the most annoying things in Apache 1.x is that when PHP is compiled into the server (not run through CGI), all scripts run as "www", "nobody", or whatever anonymous user your Apache daemon runs as.
There's no way to have PHP scripts run as different users (the way suexec does when spawning external CGI programs).
Sure, PHP has a so-called "safe mode", but it's still not that secure, especially when it comes to creating files or accessing shared memory pages.
I was told that Apache 2.0 has a mechanism that could make user switching for PHP scripts possible. Has anyone experimented with it?

Re: Static PHP + scripts running as users (3, Informative)

Akardam (186995) | more than 12 years ago | (#2548352)

There's no way to have PHP script run as different users (just like what suexec does for spawning CGI external progs)

Actually, there is. You have to use PHP in CGI mode, where it ISN'T compiled into Apache as a module. I've never used it in that mode myself (I only have one simple PHP script on my entire server); however, a search on google for php+suexec [google.com] turns up some info. Apparently, CGI mode does work, but not quite as well as module mode (some people seem to indicate that it runs like a dog).

-1 Redundant (0)

Anonymous Coward | more than 12 years ago | (#2554126)

You and whoever gave your post two mod points should re-read what you're replying to. It's quite specific. What part of "when PHP is compiled in the server (not run through the CGI)" didn't you understand?

Re:Static PHP + scripts running as users (1)

zaphod123 (219697) | more than 12 years ago | (#2548432)

At my work (an ISP), I tweaked cgi-wrap to run PHP. The cgi-wrap tweak provides the safety of running as the user, along with other checks (is the PHP script world-writable, is it owned by the user, etc.), and it removes the need to put #!/path/to/php at the top of every PHP file.

If you are interested in this, email me.

Re:Static PHP + scripts running as users (1)

wolruf (30926) | more than 12 years ago | (#2548434)

I've run PHP+suEXEC since PHP 4.0.1RC2, as far as I remember. Works fine on NetBSD with Apache 1.3.x, although a little slower than when compiled as a module.
But we run PHP scripts the same way we run CGI written in C, Python, Perl, etc.

Re:Static PHP + scripts running as users (4, Informative)

cehf2 (101100) | more than 12 years ago | (#2548474)

With any application running on a web server there is a trade-off between performance and security. Because the PHP module runs inside the core of the web server, it should be fairly fast; however, if you want the ability to change which users the PHP scripts run as, your only option is to use CGI scripts. CGI by its very nature is *very* slow, due to the overhead of the fork/exec/program-load cycle for every request.

You may also be able to compile PHP as a FastCGI program; you could then run several external FastCGI processes as different users and configure Apache to hand a particular script to a particular FastCGI process. I have no idea how to do this with Apache, as I use Zeus [zeus.com] myself.

If Apache 2 does have a way to switch users for PHP scripts, it will not be secure. Under UNIX, once you have dropped your privileges you can never regain them. The workaround is for programs to run with 'real' and 'effective' users: as long as you only change your effective user, you can regain privileges, but then anything in the process can regain them too. You can also only change users when you are root. This would be a big security hole, in that a buffer overflow attack could trivially yield root.

security, performance, configurability - pick 2

Re:Static PHP + scripts running as users (1)

The Madpostal Worker (122489) | more than 12 years ago | (#2548638)

Actually, the way they do it is there is an MPM called perchild. With the perchild scheme each process runs under a different user ID (to replace suexec), so you could have PHP scripts run as different users.

You can see more about MPMs here [apache.org]

Re:Static PHP + scripts running as users (2)

Ian Bicking (980) | more than 12 years ago | (#2565410)

I think you overestimate the intrinsic speed problems of CGI. Sure, you have to start a new process, but that doesn't take that many resources. If you have to start up a complicated interpreter, as you would for PHP, then yes, it's slow. But a small C program starts fairly quickly.

When testing different adapters for an application server I was playing with, there were persistent versions written in Python, for use with mod_python/mod_snake -- the adapters were essentially small scripts that contacted the application server. Those persistent Python versions were actually slower than an equivalent C CGI program. Of course, the C version built as an Apache module was somewhat faster, but both were at the point where neither was a significant bottleneck. So CGI can be pretty fast.

You can actually do what is essentially CGI through PHP too -- if you have something that needs to be run suid, then run it through system() (which loads up a shell, which is annoying and slow) or some other way (I don't know of a way to call a program directly in PHP...?)

Or you can go the FastCGI (or FastCGI-like) direction, where you have a sub-server that handles certain requests. I don't know how easy that is to do in PHP -- it's very useful to have object serialization at that point, and I don't think PHP has that (?)

Re:Static PHP + scripts running as users (0)

Anonymous Coward | more than 12 years ago | (#2548569)

You can "run" each virtual host under a different user / group in Apache 2.0.

http://httpd.apache.org/docs-2.0/mod/perchild.html

At $1495 per CPU (4, Funny)

imrdkl (302224) | more than 12 years ago | (#2548292)

This thing better weave with golden thread(s)

Re:At $1495 per CPU (4, Informative)

GC (19160) | more than 12 years ago | (#2548300)

Yes quite, for those of you who just want to download Apache 2.0, compile it and have it running by the time you could have bought the package from Covalent, go here [apache.org]

Re:At $1495 per CPU (0)

Anonymous Coward | more than 12 years ago | (#2548376)

Yeah. Imagine how much a beowulf cluster of Apache servers would cost.

Re:At $1495 per CPU (1, Troll)

Wesley Felter (138342) | more than 12 years ago | (#2548740)

At that price, you might as well buy Zeus, which is based on an even faster event-driven architecture.

Apache has released 2.0 betas (1, Informative)

Kenny Austin (319525) | more than 12 years ago | (#2548295)

"It's not clear when the Open Source Edition (or whatever) will come out and I didn't find anything at the official Apache Site."

Here is the Apache 2.0 documentation [apache.org] and you can download [apache.org] 2.0.16 (public beta) or 2.0.18 (an alpha). But what do you want them to open source? The 2.0 core (it is) or the proprietary enhancements (yeah, right)?

Kenny


at least slashdot didn't change my urls into http://slashdot.org/httpd.apache.org this time.

Re:Apache has released 2.0 betas (5, Informative)

huftis (135056) | more than 12 years ago | (#2548341)

It's not clear when the Open Source Edition (or whatever) will come out and I didn't find anything at the official Apache Site.

Apache Week has more information [apacheweek.com] on this:

Those waiting since April for a new 2.0 beta will have to keep on waiting after another release candidate, 2.0.27, was abandoned this week when a bug was discovered while running the code on the live apache.org server. Some httpd processes were found to be stuck in infinite loops while reading POST requests; the bug was traced to the code handling request bodies. After fixes for this bug and a build problem on BSD/OS were checked in, the tree was tagged ready for a 2.0.28 release.

next generation Apache ready for prime time?? (0, Troll)

sydneyfong (410107) | more than 12 years ago | (#2548303)

Covalent is confident that the next generation Apache is mature and is ready for prime time.

And I thought it was already mature enough; otherwise, why did it become the most popular [netcraft.com] web server?

btw, what is new in Apache 2.0? (I am too lazy to read the docs ;-)

Re:next generation Apache ready for prime time?? (2)

GauteL (29207) | more than 12 years ago | (#2548315)

The current generation of Apache is of course mature. The _next_ generation has so far not proven itself. I'm always a bit sceptical when commercial entities release software based on _beta_ or even _alpha_ free software.

Re:next generation Apache ready for prime time?? (2)

baptiste (256004) | more than 12 years ago | (#2548346)

I'm always a bit sceptical when commercial entities release software based on _beta_ or even _alpha_ free software.

*cough*Netscape*cough* Though I use Mozilla as my primary browser and love it, NS 6.00 off M1x was still a bonehead move IMHO.

Re:next generation Apache ready for prime time?? (-1)

The Turd Report (527733) | more than 12 years ago | (#2548367)

Because it is free and OSS freaks don't pay for squat.

Holy Fucking Shit!!!1!!1! (-1)

The Turd Report (527733) | more than 12 years ago | (#2548306)

What?!? Is Slashdot actually plugging a NON-OSS piece of software?!? How much did Covalent pay for that? What does RMS have to say about this? Is he going to try to get on Covalent's Board of Directors to right this horrible injustice to OSS and GNU/Freesoftware?!??!!?!


Time warp? (5, Funny)

carm$y$ (532675) | more than 12 years ago | (#2548321)

From the press release:
SAN FRANCISCO -- November 12, 2001 -- In conjunction with the launch of Enterprise Ready Server, Covalent Technologies today announced a coalition of support for its new enterprise solution for the Apache Web server.

Is this a little bit confusing, or what? I mean, I had a meeting on Monday the 12th... well... which I don't recall yet. :)

jesus christ! (-1, Offtopic)

Anonymous Coward | more than 12 years ago | (#2548333)

can slashdot hire a graphic artist or web designer with any sense of style or taste?

Once I clicked on this section, I thought I had an acid flashback or something.

These colors are disgusting.

I mean, purple text? C'mon now!

But the YRO section really takes the cake for butt ugliness.

Linux is evil! (0, Troll)

Troll XP (535651) | more than 12 years ago | (#2548348)

I, a master cracker, has broken in to linus torvolds linux development network and i have SHOCKING news!

I came across kernel 2.9.99, the beta version of L3!!! L3 is not GPL'ed but LBL'ed which stands for Linus'es borg licence! L3 will contain the following features.

  • Costs $10,000
  • Very unstable
  • 1,398,484 known bugs that won't be fixed
  • Penguin DNA
  • 1000 propreity apps costing a further $23,300!
  • LinSPY
  • Anti Destructi!
  • AND ABSOLUTLEY NO WARRENTY!

Thats NOT ALL! It's will be illegal to copy it, current linux users must upgrade or have their computers destroyed!

Rob, Linuses little helper has wrote WinVi, MacVi OS/2Vi BeVi and solarVI! these 'VI's are viruses that are Undectectable and Unstoppable and will FORCE L3 on to everyones machine!

He has passed a special "LAW" to make this legal, and will be realesing evil L3 either when its June 2004 or when every penguin on the planet is massacered!

And if we take the $33,300 per copy cost and mutliply that by about 500,000,000 estimated computers in the world linus will make

$16,650,000,000,000

Making him a multi trillionare! and about 390 times richer than B.Gate$ ! Don't let this happen, boycott linux today!

Buy Windows XP for non evil computing!

Re:Linux is evil! (1)

maverick_and_goose (526330) | more than 12 years ago | (#2549406)

idiot you are an idiot but kinda funny i don't know what it has to do with apache though... so i label you a TROLL

Can threads really beat fork(2)? (3, Interesting)

imrdkl (302224) | more than 12 years ago | (#2548371)

I've always been a bit suspicious of threads, even the latest and greatest kernel threads. Is there someone who can speak to the wisdom and tradeoffs in doing this? I like my fu^Horking apache just the way it is. Programming threads is also hard. What about all of the cool API stuff and plugins, I suppose they all have to be rewritten? Mod_rewrite, mod_perl, etc, etc, yes?

Re:Can threads really beat fork(2)? (1)

Jeff Trawick (139236) | more than 12 years ago | (#2548428)

You don't have to use the threaded model with Apache 2.0 on Unix. There is a 1.3-style processing model available.

However, a module for Apache 2.0 probably would want to be thread-aware to avoid requiring that the admin use the 1.3-style processing model.

On some platforms threads won't beat fork for speed, but certainly the total virtual memory usage for a threaded Apache deployment should be less than for a non-threaded deployment on any platform. For most people this is a non-issue, but in some environments Apache 1.3 is a big problem because of the memory footprint of the zillions of processes required.

Re:Can threads really beat fork(2)? (1)

schmaltz (70977) | more than 12 years ago | (#2548922)

On some platforms threads won't beat fork for speed

You care to substantiate this claim? fork() generally dupes the current process in memory, an expensive operation. Threads require no such operation, instead relying on a simple, lightweight thread object to manage execution and, in the case of servers and servlets, reusing already-instantiated server objects.

Re:Can threads really beat fork(2)? (1)

Nicopa (87617) | more than 12 years ago | (#2550226)

Yes, but forking isn't an issue because Apache pre-forks a number of "worker processes". So it should be true that a threaded Apache would give little advantage on many operating systems.

Re:Can threads really beat fork(2)? (1)

schmaltz (70977) | more than 12 years ago | (#2551123)

Yes, but forking isn't an issue because Apache pre-forks a number of "worker processes". So it should be true that a threaded Apache would give little advantage on many operating systems.

But forked or pre-forked, each process, which will handle only one "hit" at a time, has the same memory burden as a full apache process (coz that's what it is.)

Now compare this to the threaded version, where threads are objects, minuscule next to an Apache process, and where many of the other objects used by a thread are reused, not regenerated.

My experience in running Apache servers is that memory is consumed before bandwidth or processor... with threads it'll be CPU first, because you'll be able to handle a much higher number of concurrent requests.

The earlier point about thread-based Apache being more vulnerable to a process dying than process-based *is* true, so maybe a mix of processes and threads will give some margin towards failsafety. Don't run all server threads under just one process, have multiple processes, if that's possible.

mod_perl (2, Informative)

m_ilya (311437) | more than 12 years ago | (#2548436)

What about all of the cool API stuff and plugins, I suppose they all have to be rewritten? Mod_rewrite, mod_perl, etc, etc, yes?

AFAIK Apache's API has changed, and indeed all of its modules will need to be rewritten for the new Apache.

I don't know about all modules, but here is some info about mod_perl. There already exists a rewrite [apache.org] of mod_perl for Apache 2.0 with thread support. It has many tasty features. Check [apache.org] it out yourself.

Re:Can threads really beat fork(2)? (5, Insightful)

jilles (20976) | more than 12 years ago | (#2548437)

Programming threads is just as hard as programming with processes on a conceptual level. The types of problems you encounter are the same.

However, process handling is potentially more expensive, since processes have separate address spaces and require special mechanisms for communication between those address spaces. From the point of view of system resources and scalability you are better off with threads than with processes. Typically the number of threads an OS can handle is much larger than the number of processes it can handle. With multiprocessor systems becoming more prevalent, multithreaded systems are required to use all the processors effectively and distribute the load evenly.

The primary reason why you would want to use processes anyway is stability. When the mother process holding a bunch of threads dies, all its threads die too. If your application consists of 1 process and 1000 threads, a single thread can bring down the entire application. At the process level, the OS shields each process's address space from the other processes, which gives you some protection against misbehaving processes. Running Apache as multiple processes therefore gives you some protection: if one of the httpd processes dies, the other processes can take over and continue to handle requests.

The use of high-level languages and APIs (e.g. Java and its threading facilities) addresses these stability issues and makes it safer (not perfectly safe) to use threads. Java, for instance, offers memory management facilities that basically prevent such things as buffer overflows or illegal memory access. This largely removes the need for the kind of memory protection an OS offers for processes.

Apache 2.0 is specifically designed to be more scalable than the 1.3.x series, and threading is a key architectural change in this respect. Sadly it is not written in Java, which, contrary to what some people on Slashdot believe, is very capable of competing with lower-level languages in this type of server application. Presumably the Apache developers are using a few well-developed C APIs to provide some protection against stability issues.

Re:Can threads really beat fork(2)? (2, Funny)

schmaltz (70977) | more than 12 years ago | (#2548457)

try {
If your application consists of 1 process and 1000 threads, a single thread can bring down the entire application
}
catch (IllegalFUDOperation excep) {
Only if you're not on top of your exception handling!
}

Re:Can threads really beat fork(2)? (0)

Anonymous Coward | more than 12 years ago | (#2549226)

No, think JVM bugs. We hit them all the time, so we run two weblogic's on each machine in our cluster, using weblogic's clustering support. That way one crashing VM doesn't affect our throughput.

Re:Can threads really beat fork(2)? (2, Informative)

Furry Ice (136126) | more than 12 years ago | (#2548537)

Programming threads is just as hard as programming with processes on a conceptual level. The type of problems you encounter are the same.

This makes it sound as if the two models present equivalent obstacles, and that neither is easier than the other. It's true that separate processes are used for stability reasons, but that stability isn't gained only because one process can crash without taking the others with it. The main problem with threads that doesn't exist with processes is shared memory. Anything on the heap can potentially be accessed by two threads at any given time, and access to it must be synchronized. Bugs related to these race conditions can be very hard to track down, and many people would rather forgo the problem entirely and just use processes.

Re:Can threads really beat fork(2)? (3, Insightful)

jilles (20976) | more than 12 years ago | (#2548732)

Shared data is inevitable in distributed systems. If you isolate the data for each process using memory protection, that implies that there has to be some means of transferring data from one process to another (e.g. pipes). Typically such mechanisms are cumbersome and make context switches expensive.

My whole point is that with highlevel languages, such as Java, the language encapsulates most of the complexity of dealing with synchronization. Java does not have a process concept other than the (typically single) JVM process that hosts all the threads.

Strong typing, and OO further enhance the stability and consistency. Emulating such mechanisms in a language like C is hard and requires intimate knowledge of parallel programming and discipline of the programmers.

Multithreading therefore wasn't very popular until recently; only since the 2.2 and 2.4 Linux kernels has threading become feasible in terms of performance. Using the new threading features requires that you think beyond the heap as a central storage facility for data. In Java the heap is something the JVM uses to store and manage objects. At the programming level you only have objects. Objects are referred to by other objects (which may be threads) and may refer to or create objects themselves. Access to the data in the objects is done through accessor methods, and where applicable you make those methods synchronized (i.e. you include the synchronized keyword in the method signature or employ a synchronized code block) to ensure no other objects interfere.

Each time you employ (or should employ) a synchronization mechanism, you would have had to employ a similar mechanism if you had been using processes. The only problem is that that mechanism would probably be much more expensive to use since you are accessing data across address space boundaries.

With this in mind, the use of processes is limited to situations where there is little or no communication between the processes. Implementing such software using threads should be dead simple, since you will have only a few situations where threads access each other's data, so there is little real risk of race conditions. Those situations you can handle with well-designed APIs and by avoiding dirty pointer arithmetic. A company I have worked with that writes large embedded software systems for an OS without memory protection between processes has successfully built a rock-solid system this way in C++. By their own account they have encountered very few race conditions in their system. My guess is that the Apache people have employed similar techniques and coding guidelines to avoid the kind of bugs you are talking about.

So if you are encountering race conditions in your code, using processes rather than threads won't solve your problems because you still need to synchronize data. You can do so more cheaply with threads than with processes.

Re:Can threads really beat fork(2)? (1)

Furry Ice (136126) | more than 12 years ago | (#2548794)

You're still glossing over things. When using threads, *anything* on the heap can potentially be accessed by two threads simultaneously. If you're using processes, you know exactly when and where data is being shared (it's kind of hard to miss data moving through a pipe). It's much easier to control, but it does come at the expense of efficiency. The only really efficient IPC mechanism is shared memory, which of course has the exact same problems as multithreaded code.

Threads do have their place--whenever you need concurrency and a large amount of data needs to be shared, go with them. But saying that you should use them when you have largely independent tasks which don't share data is silly. That's exactly what processes are for, and you eliminate any risk of threads stomping on each other. If you need to have thousands of them, maybe you should look into threads, but it would probably be best to check your algorithm. Any time you think you need huge numbers of processes or threads, you'd best think again. Context switches are going to kill whether you're using threads or processes.

Re:Can threads really beat fork(2)? (2)

jilles (20976) | more than 12 years ago | (#2548869)

You can only access anything on the heap if your programming language allows it (e.g. C or C++) in which case you need to constrain how the language is used. I've seen quite a few companies who work with C/C++ employ strict coding guidelines that accomplish this.

If you have a lot of independent tasks which don't share data, you use threads because that will give you a more scalable system. Of course your system will be riddled with bugs if you start doing all sorts of pointer arithmetic, which in general is a bad idea (even on non-distributed systems). If two threads are accessing the same data, they are sharing it; if they shouldn't be, it's a bug. The only reason processes are useful is that they force you to employ methods other than pointers to access shared data (so if you create a bug by doing funky pointer arithmetic it will only affect one process).

Multithreaded applications are known to scale to several thousand threads on relatively modest hardware. Context switches typically occur when different threads/processes on the same processor are accessing different data. Context switching for processes is more expensive than for threads on modern operating systems.

You are calling me silly for recommending threads as a good alternative to processes in situations that require scalability. Yet, IMHO, this is exactly why Apache 2.0 is using threads.

Re:Can threads really beat fork(2)? (1)

Furry Ice (136126) | more than 12 years ago | (#2549330)

Have you seen thousands of threads running on one of Linus' kernels? He isn't particularly fond of the idea, and isn't going to tweak the kernel to support that sort of thing. There's a reason Linux native threads JVMs do poorly on the scalability portion of VolanoMark...

Everything you're saying makes sense on a system where processes really are heavyweight monsters. On Linux, processes and threads are much more similar. The difference is copy-on-write semantics for memory pages. Up until you actually modify a page, it is shared by child and parent. This means that using processes instead of threads doesn't automatically mean that you're grossly increasing memory needs.

"Sadly Apache is not written in Java" ??? (0)

Anonymous Coward | more than 12 years ago | (#2549443)

Oh come on, Java is not a systems or platform language; it is an application language. Apache is not an application, it is a platform. Writing something like this in Java would be absurd; the performance would be horrible.

Re:"Sadly Apache is not written in Java" ??? (2)

jilles (20976) | more than 12 years ago | (#2550453)

Java is often referred to as a platform as well. There are in fact web servers written in Java, and the performance is not horrible. With the upcoming JDK, new and more efficient IO will be possible, so even better-performing web servers will be available.

And then of course there are servlets and servlet engines which are used to run complex, large websites. So it is possible.

Ahahhahahaha (0)

Anonymous Coward | more than 12 years ago | (#2548521)

That bullshit got modded up? Jesus fuck you pudwhomper... of course threads are better than new processes. IPC is _expensive_ and unwieldy; threads are fast and to the metal, minus the overhead incurred by, you know. Spawning a new process and eating up that additional memory/CPU time rather than just having a process spawn a thread.
Dickbag.

Re:Can threads really beat fork(2)? (1)

barries (15577) | more than 12 years ago | (#2548647)

If you like your prefork server, just build Apache 2.0 with the "prefork" MPM. Some platforms are not supported by it; there's an MPM specific to Win32, for instance.
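For reference, the MPM is selected at build time via configure. A rough sketch of the invocations (flag names shifted between the 2.0 betas; the hybrid threaded MPM was called "threaded" in earlier betas and "worker" later, so check the build docs for your tree):

```shell
# Keep the 1.3-style multi-process behaviour
./configure --with-mpm=prefork
make && make install

# Or opt into the hybrid multi-process/multi-threaded model instead
./configure --with-mpm=worker
```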

Threads programming is made hard when you are communicating between threads, or when a thread goes haywire and overwrites another thread's memory regions. The former is not a large issue for most C or (especially) mod_perl Apache modules, since they don't try to share state. These should port rather easily to a multithreaded environment.

The real issue is for C modules that get a little funky with the 1.3 (or older) API: there's a *lot* new under the hood in Apache 2.0 and such modules may require a complete rewrite. Many will only require minor rewrites, though complete rewrites to leverage Apache 2.0's input and output filters will be quite beneficial. Imagine writing a filter module that can alter content retrieved by the new mod_proxy, and optionally cached locally before or after the filter alters it :).

Debugging is often more difficult with threads, but there are command line options to make it easier to debug, and there's always compiling it with the prefork MPM.

Yes, many modules and C libraries are not thread safe; this will be a source of painful problems for advanced modules for years to come. But most modules should port relatively painlessly, and many people don't go farther than those modules that ship with Apache; those, of course, are already being ported and debugged.

The prefork MPM is likely to be more safe in the face of memory bugs and deadlock issues due to the isolation imposed by the OS between processes, but is likely to be slower than the threaded MPMs on many platforms.

FWIW, mod_perl 2.0 is shaping up very nicely and perl5 seems to be resolving most of the major obstacles to full, safe multithreading in a manner that will prevent unexpected variable sharing problems (all variables are thread-local unless specified otherwise). mod_perl 2.0 boots and runs multithreaded now, and as soon as the core Perl interpreter becomes more threadsafe, it should be ready for trial use.

At least one mod_perl production site has been tested on mod_perl 2.0 (though not in production :). mod_perl 2.0 has a compatibility layer that will help existing modules run with little or no modification.

Life's looking good for Apache 2.0 and mod_perl 2.0.

Re:Can threads really beat fork(2)? (2)

KlomDark (6370) | more than 12 years ago | (#2554831)

I talked with Dirk Willem van Gulig a few days ago. The way he explained the use of the two models available in Apache 2.0 was: run the apps you trust not to crash in the threaded model, and run apps you may be having problems with, or really important, high-usage apps, in the process model.

As for rewriting modules, some of them will need it, since modules now need to be usable as filters as well. With Apache 2.0, it's possible to use the output of one module as the input to another module, such as running the output from mod_php through mod_include and then through mod_rewrite. Really cool stuff!

The major modules have already been rewritten. The API is changed as well, to give it more power, such as a filename to filename hook. (Finally!)

I believe he said something about the capability of 1.3 modules to still be used, but only in the old way, not as filters. But I am not completely sure that is what he said. (He talks insanely fast! Even sitting next to him I sometimes had trouble keeping up with his accent. Not his fault, I just haven't talked to a lot of people from the Netherlands, so I'm not used to it.)

BSD License (0)

Anonymous Coward | more than 12 years ago | (#2548431)

As I see open source move along, I think it will become a thing of the past as long as people use the BSD license. Come on, all, let's code and let a company close our work, improve on it, and not give it back. It is such a stupid license; why would anyone use it?

Re:BSD License (0)

Anonymous Coward | more than 12 years ago | (#2548440)

I agree the GNU license is the only good one. Apache uses its own license, but I think it still allows you to keep it closed. It is close to BSD's license.

Socialism doesn't work! (0)

Anonymous Coward | more than 12 years ago | (#2548500)

When you warez d00ds grow up you will realize that giving away everything for free is not a viable economic model (examine the success of "free software companies" for evidence). The BSD license allows commercial vendors to sell proprietary add-ons and enhanced versions of the software, while giving away the basic version as open source. This combines the advantages of open source with the ability to get paid. It's the best of both worlds.

Re:Socialism doesn't work! (1)

ab315 (443209) | more than 12 years ago | (#2549251)

BSD-licensed projects rely on the goodwill of people contributing patches back to the free version, rather than profiting by keeping their changes proprietary. That is more naive than the GPL. In fact it is contrary to economic logic.

It's naive open-source zealots who say that giving away software is supposed to be profitable. I agree that these people are insane. Obviously it's not profitable, but that doesn't mean free software is any more non-viable than, say, public radio.

Re:BSD License (1)

ab315 (443209) | more than 12 years ago | (#2548677)

How long till we see NetApache, OpenApache, and FreeApache?

Strange wording? (2)

BoarderPhreak (234086) | more than 12 years ago | (#2548435)

"It's not clear when the Open Source Edition (or whatever) will come out..."


Is it just me, or does this "or whatever" kind of attitude strike you as strange? Granted, Apache has been seriously draggin' ass on 2.0 and I can see folks getting a little anxious to have it out already...

Re:Strange wording? (2, Insightful)

Anonymous Coward | more than 12 years ago | (#2548553)

It's not going to happen. Look at Ken Coar's editorial in the last Apache Week. The ASF is spinning its wheels at this point. One person will go in to fix a single bug and instead rewrite the entire system (for instance the URL parser). They fix one bug but create several more. They have no concept of a code freeze.
The 1.3 tree is getting very long in the tooth, and patches are pretty much rejected because "work is now in the 2.0 tree". The way that the ASF is playing it, they will cause the Open Source community to lose the web server biz.
The silly politics alone that keep SSL, EAPI and three different methods of compiling Apache are enough to make sure it is doomed. Why has IIS taken over the SSL market? Because it ships with EAPI.
It's really sad.

-1 FUD (2, Informative)

jslag (21657) | more than 12 years ago | (#2548812)

Look at Ken Coar's editorial in the last Apache Week. The ASF is spinning their wheels at this point.


The article [apacheweek.com] in question says nothing of the sort. It notes that the development processes of Apache have changed over the years, with associated wins and losses.


Why has IIS taken over the SSL market? Because it ships with EAPI.


Thanks for the laugh.

Apache 2.0 is *not* out on Monday (4, Interesting)

markcox (236503) | more than 12 years ago | (#2548472)

Although the CNet article tells you otherwise, the open source version of Apache 2.0 is not available on Monday and, as stated in Apache Week, is only just becoming stable enough for another beta release. Covalent are launching a commercial product that is based on Apache 2.0 but with proprietary extensions (the Apache license, unlike the GPL, allows this). IBM's httpd server has been based on a 2.0 beta for a number of months. Since Covalent say they've made it Enterprise Ready, they must have cured the performance and stability problems; when these fixes get contributed back to the main Apache 2.0 tree, everyone wins.

Mark Cox, Red Hat

MOD PARENT DOWN! (-1, Troll)

Anonymous Coward | more than 12 years ago | (#2548527)

He's a known troll and is nothing but a muckraker! His "website" is a goatse.cx link! Watch out!

Ya, *when* is the question (0)

Anonymous Coward | more than 12 years ago | (#2549477)

Because of another BSD-style licensed program, a company has once again taken the work of many, improved it, and not given back.

Oh, I'm sure they'll give back "at their convenience."

Time to cut the bullshit and use the GPL.

Powered by NSPR! (1)

Simm0 (236060) | more than 12 years ago | (#2548481)

I've read somewhere that Apache 2.0 is using the code underlying Mozilla, NSPR (Netscape Portable Runtime), for all the core stuff such as threading and memory allocation. It's good to see that an app like Mozilla can be really useful to other open source applications such as Apache.

Re:Powered by NSPR! (1)

slive (21582) | more than 12 years ago | (#2548565)

Wrong. Apache uses APR, not NSPR:
http://apr.apache.org/ [apache.org]

Re:Powered by NSPR! (0)

Anonymous Coward | more than 12 years ago | (#2548857)

Actually, if I remember correctly, there is one platform where the APR has been switched out in favor of the NSPR because it offers higher performance on that platform (IRIX). Both the NSPR and APR compete in the same space of providing cross-platform (xp:) libraries for C/C++ programs that programmers of higher-level languages take for granted.

Is Apache 2.0 ready ?? (2)

green pizza (159161) | more than 12 years ago | (#2548760)

I fully realize that this is talking about Covalent's Apache-based software, but I'm still wondering how ready the Apache 2.0 codebase is... I've been playing with the 2.0.16 beta for a while now on one of my test servers without any problems, but that doesn't mean diddly. I'm looking forward to version 2.0, but not without extensive testing. Version 1.3.22 works way too well for me to make a switch anytime soon.

Re:Is Apache 2.0 ready ?? (3, Insightful)

Jerenk (10262) | more than 12 years ago | (#2549254)

At this point, I would judge the current httpd-2.0 codebase as beta-quality. There have been lots of improvements made to the Apache 2.0 codebase since 2.0.16 was released - I would expect that we have a much better codebase now than was in 2.0.16. I would expect you to have an even better experience with our next release whenever it occurs (or you may use CVS to obtain the up-to-the-minute version!).

Yes, we're way overdue releasing Apache 2.0 as a GA (we started thinking about 2.x in 1997), but that is a testament to our quality - we will NOT release Apache 2.0 as a general availability release until we are all satisfied that it meets our expectations. "It's ready when it's ready."

We have a very good stable product in Apache 1.3. We must match the quality expectations we've set for ourselves in the past. And, almost everyone in the group is keenly aware of that.

Sounds to me.. (1)

GISboy (533907) | more than 12 years ago | (#2548887)

as if Covalent is trying to put a 'feather in its cap'.

(Security through obscurity does not work, so I'm trying humor through obscurity.)

I'll admit, I'm not versed in marketoid speak, but this caught my attention:
Covalent has taken a great web server -- Apache -- and added key functionality that enhances enterprise customers' experience."

What this says to me is "Apache kicks ass, now any idio^H^H^H^enterprise customer can use it with our new point-and-click GUI!"

(shaking head)

A few minutes on freshmeat.net, dudes, would probably solve most of your problems if you are looking for a gui to configure this stuff.

If that is not the case, well, my programming days are over, and the comments on the trade-offs of what Covalent is doing just leave me hoping it does not reflect badly on Apache.

does anyone know if the newest beta of apache2 (0)

Anonymous Coward | more than 12 years ago | (#2549058)

will thread on BSD? Because right now it doesn't, and, well, that's just useless to me. I hate prefork.
prefork + 380 requests a second = *yuck*

Get used to it (0)

Anonymous Coward | more than 12 years ago | (#2549482)

BSD sucks.

Apparently everyone knows this but you.

Re:does anyone know if the newest beta of apache2 (2, Informative)

Jeff Trawick (139236) | more than 12 years ago | (#2549816)

painful question :( various folks have tried running threaded on FreeBSD... we got weird symptoms that nobody has had time/skills to track down... sadly, I think a FreeBSD pthreads guru is going to need to build a threaded Apache 2.0 and debug it... maybe we're just tickling FreeBSD pthreads the wrong way and a small change would take care of it... maybe current FreeBSD pthreads just can't support threaded Apache... I dunno

even prefork (non-threaded) MPM with a thread-safe APR doesn't work right on FreeBSD... if I recall correctly, the parent process was eating lots of CPU in some sort of signal code...

crippled free versions -- Covalent and VA (2, Insightful)

tim_maroney (239442) | more than 12 years ago | (#2549177)

The release announcement by Covalent on top of this week's announcement of a proprietary version of SourceForge by VA [2001-11-06 20:04:54 VA Embraces Closed Source (articles,va) (rejected)] should have us all wondering where things are heading during this period of revision for open source business models. Are we headed for a world where ostensibly free programs are deliberately crippled relative to proprietary versions of the same code?

Covalent funds a great deal of Apache development directly, as well as contributing board members and other members to the Apache Software Foundation. It's clearly not doing this primarily to help the open source version of Apache along, but to advance its own proprietary version of Apache. Eventually Apache 2.0 may come out in an open source version, but it doesn't seem to be a priority of the main contributor to Apache to make that happen. A conspiracy-theory approach might even suggest that they are deliberately applying a flawed, destabilizing model to the open source tree (commit then review, no feature freeze) while presumably they use a tighter and more controlled process to get the proprietary version out.

People have suggested that the internal versions of GNAT distributed in a semi-proprietary way by ACT may be better than the open source versions, while ACT says the opposite -- that their private versions are less tested, require technical support, and would only hinder those who don't have support contracts. I don't know the truth of the matter there, and this is not meant to point the finger at ACT, but this phased-release strategy by Covalent raises some of the same questions.

VA's proprietary SourceForge conjures a similar spectre. There will still be a free SourceForge, but improvements are going primarily into the proprietary version. Perhaps outside engineers will start playing catch-up and adding clones of the proprietary features to an open source branch of SourceForge, but at best the open source version will still lag behind, and it may happen that it will always be so far behind as to be relatively crippled compared with the proprietary version.

Is open source heading toward a model where some of its dominant programs are available for free only in crippled versions lagging behind the proprietary releases? And if so, what does that say about unpaid volunteer contributions? Are they really for the public benefit, or for the benefit of a proprietary developer? If the latter, why volunteer?

Other problems with crippled free versions have been noted here before, such as having to pay for documentation on ostensibly free software, or needing a proprietary installer to effectively install a supposedly free system. This week's events involving VA and Covalent show that this may be becoming a trend with significant impact on the whole open source and free software movement.

Tim

Re:crippled free versions -- Covalent and VA (2)

Micah (278) | more than 12 years ago | (#2549385)

Perhaps outside engineers will start playing catch-up and adding clones of the proprietary features to an open source branch of SourceForge, but at best the open source version will still lag behind, and it may happen that it will always be so far behind as to be relatively crippled compared with the proprietary version.

I think that's far from certain. One of the premises of the BSD license is that even if someone does take the code and release a proprietary fork, the Open Source model has enough advantages that the community should be able to keep up and even surpass them.

That seems likely to happen at some point.

Re:crippled free versions -- Covalent and VA (2)

tim_maroney (239442) | more than 12 years ago | (#2549430)

the Open Source model has enough advantages that the community should be able to keep up and even surpass them.

I don't think there's any historical evidence for the popular idea that open source software improves faster than proprietary software. As this post [slashdot.org] from an IBM open source developer points out, there are serious management overheads and inefficiencies associated with the model.

One of the advantages of being closed is control. You get to choose exactly where each programmer works; you get to choose exactly which pieces of the system change, and which don't. When you open it, suddenly, you lose control. You can't just make decisions anymore; you need to work with your contributor base, which is a much slower process than managerial decree. And you need to deal with the fact that people will be changing things all over the place, and be capable of integrating those changes into your own ongoing work. That costs time (possibly a lot of time), and time costs money.

If managing engineers under normal conditions is like herding cats, open source development is like harnessing a swarm of bees.

Tim

Re:crippled free versions -- Covalent and VA (2)

Jerenk (10262) | more than 12 years ago | (#2549553)

Is open source heading toward a model where some of its dominant programs are available for free only in crippled versions lagging behind the proprietary releases?

I doubt that. As an active Apache developer who doesn't really have any ties to a company with a vested interest in Apache, I work with the Covalent people every day. And, I doubt that the open-source version of Apache HTTPD will lag behind any version that Covalent or IBM has. In fact, I bet that the version that Covalent will release on Monday will include some bugs that have already been fixed in the open-source version.

Where I think companies like Covalent come in is to support corporations that *require* support. Their price ($1495/CPU or something like that) isn't targeted towards people who would be interested in the open-source version, but for corporations that can't ever afford to have their web server go down.

Covalent also offers some freebies (such as mod_ftp). I think under Apache 2.0, it is sufficiently easy for someone to come in and write a module that handles FTP. It's just that no one has had the inclination to write one. And, I bet if someone did, it just might eventually be better than the one Covalent wrote.

VA is a little different from Covalent as, IIRC, they are the sole owners of SourceForge, but Covalent is just a part of the Apache community (an active one, though).

And if so, what does that say about unpaid volunteer contributions? Are they really for the public benefit, or for the benefit of a proprietary developer? If the latter, why volunteer?

I work on what I want to work on. People who work at Covalent have a "direction" on things to work on. As an unpaid volunteer, I get to work on whatever I feel like at the moment. I'll take that any day of the week. But, there is a definite value to getting paid to work solely on Apache.

Other problems with crippled free versions have been noted here before, such as having to pay for documentation on ostensibly free software, or needing a proprietary installer to effectively install a supposedly free system.

FWIW, I believe this is definitely not the case with Apache. The docs are freely available and the Win32 installer is one donated by IBM (I think, I forget - someone donated it).

Re:crippled free versions -- Covalent and VA (0)

Anonymous Coward | more than 12 years ago | (#2549578)

The Win32 installer for Apache 2.0 was, in fact, donated by Covalent (Will Rowe)

Covalent sucks with UCE! (1)

no-body (127863) | more than 12 years ago | (#2549409)

They (or somebody they bought from) harvested my email address from network solutions database and blessed me with UCE yesterday:

Subject: Buy Covalent's Apache Web Server and Get a FREE Entrust Certificate

I can tell because I use unique email addresses for everyone.

Re:Covalent sucks with UCE! (0)

Anonymous Coward | more than 12 years ago | (#2549575)

Not at all. I work for Covalent and can guarantee you that has never happened. You need to voluntarily accept being on that list. You are usually asked when you buy a product from us or attend one of the Apache 2.0 seminars.
If your mail is there due to some kind of weird error (unlikely, but possible), you can always unsubscribe.

Re:Covalent sucks with UCE! (1)

willdye (109847) | more than 12 years ago | (#2553714)

If you get an unsolicited e-mail in Covalent's name, write directly to the company and tell them about it. I know a couple of the guys who work there, and I'm confident that they didn't move halfway across the country just to join the spamming industry. Maybe the list got polluted with your name somehow, or maybe they farmed out the PR stuff to another company. Either way, just give them a chance to fix it.

--Will

"drops" monday? (1)

lbergstr (55751) | more than 12 years ago | (#2549784)

dude, it's not an album. stop pretending you're not a geek.

This is why I only work on GPL'd projects (0)

Anonymous Coward | more than 12 years ago | (#2549867)

In practice, I try not to contribute to projects that don't protect against this sort of wresting of control and energy away from free/open projects.

Here we have a *possible* case of proprietary interests appropriating resources towards a proprietary end (to the detriment of the original free/open project) by hiring developers away from Apache (BTW, good for them; not complaining that they are being paid) to work on proprietary extensions.

The likelihood and risk of this ever occurring is greatly reduced when a GPL-style license is used.

So what do you think? What would the downsides be to Apache having originally been licensed under the GPL? As far as I can see, there wouldn't be any. So what if we wouldn't have the closed-source IBM httpd packaging and a few other niche derivatives? BFD.

Thoughts? (Please, no trolling here, no license flamewars; let's talk about specifics: a GPL'd Apache)