
TurboLinux Releases "Potentially Dangerous" Clustering Software?

Hemos posted more than 14 years ago | from the new-directions-new-trials dept.


relaye writes "The performance clustering software and services announced today by Linux vendor TurboLinux Inc. and a cabal of partners including Unix vendor SCO Inc. takes the Linux market in an unusual and somewhat risky direction, analysts are saying." The article cites the risk of forking the kernel - not an especially probable one, but a thought-provoking scenario. The danger comes if Linus decides not to incorporate TurboLinux's changes into the kernel.


Well... (1)

JeffI (87909) | more than 14 years ago | (#1583227)

Well, it sounds like they should have checked with him first now... eh? Regardless, though, these are more good steps in making this powerful OS even more powerful with clustering. Good job.

A Problem, Really? (2)

BootHead (41384) | more than 14 years ago | (#1583228)

Does anyone really see this as a threat? Why wouldn't Linus add this? I guess the media is just trying a dose of FUD on us.

And this is different from Redhat how???? (2)

maynard (3337) | more than 14 years ago | (#1583229)

Maybe these guys can explain to me how the inclusion of Pacific TurboLinux's unblessed kernel patches to support clustering is any different from the non-standard kernel that ships with Redhat.

Now they must follow GPL licensing restrictions, but this doesn't legally prevent them from selling a tailored distribution which contains a mix of GPL patches and proprietary closed source driver modules... and it's not any more forked than the heavily patched kernel source that ships with Redhat Linux.

Fork? Big deal (5)

PD (9577) | more than 14 years ago | (#1583230)

The analysts are getting too jumpy over nothing. TurboLinux has the right to make whatever changes they want to. That's the *purpose* of open source. If Linus was concerned about a code fork, then logically he would have chosen a different licence.

We should all be pleased that Linux is so flexible technically and legally that anyone who has a problem can either use Linux to solve the problem, or change Linux to solve the problem.

Using a feature of the operating system like the open source licence is no different than using any other feature of the operating system, like support for a TV Tuner card. The users will use any features of the operating system in the way that they want to, and nobody can tell them they can't.

Turbo Linux isn't forking the code, they are using one of the most powerful features of the code.

And that's my view.

Maintaining patches (2)

MikeBabcock (65886) | more than 14 years ago | (#1583231)

I just fired off an e-mail to TurboLinux yesterday re: maintenance of their patches. I was wondering whether they would receive approval for their patches from Linus and, if not, whether they would continue to maintain them as patches against the kernel, not as complete kernel releases. That way, some, if not all, of their patches can be incorporated, and the others can be downloaded and applied as necessary by those who want them (such as the secure Linux patches at kerneli.org).

- Michael T. Babcock <homepage [linuxsupportline.com] >

This can't be good (0)

Mr Donkey (83304) | more than 14 years ago | (#1583232)

This can be no good: Linux fragmentation. First we heard rumors and predictions; now we see the first hints of it. The way Linux works, anybody can do anything with it, so TurboLinux has the right to do whatever they want with it. The danger is in fragmentation. I think the best way to handle this is for Linus and the head kernel hackers to sit down with the people at TurboLinux and try to come up with a solution. TurboLinux should at least give the hackers time to look at their code and evaluate it. The clustering code will probably get better with a team of hackers worldwide looking at it!!!

Linus forking the kernel? (2)

Anonymous Coward | more than 14 years ago | (#1583233)

So the danger comes if Linus decides to Fork the kernel (by refusing to incorporate changes already adopted by various vendors.) It isn't inevitable that Linus will always lead the kernel releases. Maybe recognizing that is part of the evolution of Linux.

Re:A Problem, Really? (1)

Chuck Milam (1998) | more than 14 years ago | (#1583234)

Disclaimer: I'm not really familiar with the Turbo Linux Clustering Technology.
That being said, I wonder how useful the changes to the Linux kernel will be if the other tools to manage/configure/use the clustering technology are not available to the masses. An analogy: a CD without a drive is just a shiny coaster.

printf("%d\n", fork()) prints -1 (3)

MAXOMENOS (9802) | more than 14 years ago | (#1583235)

Speaking frankly, I think the fears of code forking are unfounded. Linux is very good for high-performance clustering, but here at the Linux General Store, getting high-availability clustering has been a pain in the rear. TurboLinux's kernel patches to support high-availability clustering are an easy win for Linux, and a no-brainer for Linus. TurboLinux did the Linux community a great service by adding these patches (IMO).

How is this dangerous? (1)

ptomblin (1378) | more than 14 years ago | (#1583236)

Even if it doesn't make it into the main kernel, it's open source, it's supported by a vendor, so what's the problem? Every time a new "main branch" kernel comes out, the TurboLinux people can make their same changes to it that they did to previous versions. And if the code they're modifying to do it doesn't change much between kernel versions, it will be trivial for them. If somebody rips out and re-writes the stuff they're based on, then they have a problem - but anybody who cares about clustering in the open source community will be able to help them.

Could be good *or* bad (4)

Pascal Q. Porcupine (4467) | more than 14 years ago | (#1583237)

Before everyone starts to scream bloody murder about how this will fragment the Linux community, keep in mind that it wouldn't be in TurboLinux's best interests to fork it in incompatible ways. They are only keeping the possibility open in case Linus doesn't accept the changes, and even then it'd be stupid of them not to keep adding their changes to the main kernel source. Everyone can still win in this situation.

Unfortunately, one of the parties that can win is the Microsoft PR department, who has been shouting FUD about the fragmentation of Linux for quite some time. So, hopefully a kernel fork won't be necessary, since even if the fork doesn't cause the problems of fragmentation, MS will still love the opportunity to claim that it's fragmentation whether it's a bad thing or not.

Personally, I'm all for kernel forking. It's not like 8086 Linux or RTLinux are currently part of the main kernel distribution, nor should they be. They fill in special needs, rather than being something good for everyone. A clustering-optimized kernel would be similar, IMO. Clustered systems tend to be homogeneous and not have any exotic hardware to support (with the exception of gigabit network cards, which are generally supported just fine by the main kernel as it is). It's a special-need kernel, not something for general consumption. As much as how every article on /. has a comment saying "Man, I'd like a Beowulf of these babies," most of the people saying that never will have a Beowulf or a need for a clustered system. (I mean, come ON, what would you, personally, use all that computing power for?)
---
"'Is not a quine' is not a quine" is a quine.

Re:A Problem, Really? (2)

artg (24127) | more than 14 years ago | (#1583238)

It might not be added if Linus didn't like the implementation (unlikely, given the backing, but it might happen sometime).

But would that be a problem ? I don't think so - it would just mean that Turbo customers wanting those modifications wouldn't be able to use the latest stock kernel. That's their choice - it doesn't cause anybody else a problem unless large numbers of closed-source application developers start producing apps that ONLY run on the modified kernel.

Seems to me Redhat already does this with their nonstandard module-info thing .. it might be easy to get around, but it does mean that the kernel releases don't plug in and go.


RedHat/SCO (4)

Erich (151) | more than 14 years ago | (#1583239)

SCO CEO Doug Michels was deeply critical of Linux: "Companies like Red Hat ... take Linux technology with a lot less value added, and they package it up and say, `Hey, this is better than SCO.' Well, it isn't. And very few customers are buying that story."

Hmm... I've used SCO before...

I think that for most people SCO is inferior to Red Hat. Look at how much extra stuff Red Hat puts into their product, and how well it works with other stuff... Red Hat also does an amazing job of detecting hardware nowadays.

Not to say that SCO doesn't have lots of interesting things in it... there are some very nifty security model aspects that SCO has, for instance. But for people who want a web server or an smtp/pop server or a workstation, for cheap, with lots of power, I think that Red Hat provides a better solution. And I think that many customers are realizing that.

Not to mention a cooler name. :-)

Excessive Credit? (4)

Effugas (2378) | more than 14 years ago | (#1583240)

TurboLinux is making a lot of noise regarding the work they've done; meanwhile, aren't they just taking an existing (very impressive) kernel patch, the Virtual Server work, and claiming it as their own?

There's an aspect of dirty PR pool going on here.

Gotta love, incidentally, more Linux bashing by SCO. Their hatred is so tangible. Then again, at least they're honest.

Overall, I hope Linus doesn't feel pressured to incorporate a technically inferior solution because somebody is attempting an ad hoc kernel power grab. We don't want people saying to Linus, "You're going to put this into the kernel because we've made it the standard." Embrace and Extend indeed.

That being said, I've heard very good things about the patch TurboLinux has appropriated without due credit. I've also heard some insanely interesting things about MOSIX, the virtual server project started in Israel and made GPL around six or eight months ago. MOSIX is immensely interesting mainly because of its capacity for seamless and invisible process migration: all processes, not just those written via PVM/MPI, get automagically clustered.

Very, very cool.

Comments from people more knowledgeable than I am about the details glossed over here would be most appreciated.

Yours Truly,

Dan Kaminsky
DoxPara Research
http://www.doxpara.com

A few thoughts (0)

Anonymous Coward | more than 14 years ago | (#1583241)

This will probably already have been said by the time this is posted, but I really have a problem with a major Linux vendor producing closed-source software. I am not against binary-only products for Linux per se, but these should be third party, in my opinion. What they should do is form a separate division that sells add-on Linux software that is closed source, which any Linux company can then use and resell. For example, I see nothing wrong with Redhat selling Motif along with their product. Just my 2 cents.

Fork fork fork (0)

Anonymous Coward | more than 14 years ago | (#1583242)

First, this is not a 'danger' to the linux community. I will continue to support OSS whenever possible, and more importantly, I will continue to use the tools that work for me.

If they want to start pushing their own Linux kernel, as opposed to just providing patches to the current kernels, that's fine. They won't get the benefit of all the new services we add to the Linux kernel.

If it is too difficult to provide patches against that standard kernel, then perhaps it should be a separate branch, if some of the core architecture changes.

I must say, however, that if someone is working on clustering/high availability software for Linux and is willing to GPL it (which they must be, if they want to distribute), then we should ABSOLUTELY support it!

Short and sweet.. (1)

Kitsune Sushi (87987) | more than 14 years ago | (#1583243)

1) Will media types and pseudo-media types ever understand the difference between "public domain" and "copylefted software"? This is, of course, unless I'm just kidding myself into thinking that the GNU General Public License is, well, a license.

2) Doug Michels.. is a gimp? If you think that is some sort of unfounded flame, you should really consider reading that interview. He's a complete and utter tool. Few people are that lost.. hee hee hee.. He must be living on a totally different fscking planet. =P

TurboLinux extensions (3)

Fnkmaster (89084) | more than 14 years ago | (#1583244)

Well, I am not entirely sure of what sort of changes to the kernel the Turbo Linux folk have made, but unless they provide some actual functionality to non-Turbo Linux clustering users, they *shouldn't* be incorporated into the main kernel fork.

This article clearly states that Turbo Linux plans to keep some chunk of their clustering technology proprietary (presumably all parts of it that operate in userland). If they don't plan on making their HA clustering work for the rest of us in any way, why should the kernel maintainers add support for their HA clustering, unless it somehow is part of an open standard?

I have no big moral problem with Turbo Linux choosing to fork the kernel. It'll be their problem if they introduce compatibility issues. People simply won't use Turbo Linux. The right to fork is an integral part of the GPL. Let the market (i.e. user choice) decide. If the features are useful, people will want them, and they will make their way into the mainline kernel. If they aren't useful to us, they won't, and TurboLinux will just have to patch every kernel release (frankly I don't care if they do, as long as they are abiding by the terms of the GPL).

What are they changing? (1)

Bwah (3970) | more than 14 years ago | (#1583245)

Does anybody know what they are changing?
(another flavor of kernel support for cluster-wide resources/pids/etc. or something?)

The article basically said nothing about what their system does to make it better/different than any existing clustering setup.

dv

Fork, schmork... (2)

John Fulmer (5840) | more than 14 years ago | (#1583246)

Since *BSD forked, Linux HAS to, right? Since 'free' entities can't share a common vision? Bah!

At best, the code is good and Linus incorporates it as a kernel option.

At worst, it's a patch for a very specialized function, examples of which already exist:

Embedded Linux
uLinux
RT Linux
e2compr (compression for the ext2fs filesystem)

I don't see something as specialized as server clustering forcing an actual 'fork' of the Linux kernel, except as a vertical application (Like Embedded Linux).

jf

Re:Well... (1)

Anonymous Coward | more than 14 years ago | (#1583247)

Well it sounds like they should have checked with him first now...eh?

Yes, you're right. Linus has the final say in Linux anything. So they should have checked with him first. Of course, this points out even more than before how precarious the whole Linux project is. How close to forking it is, and how much more likely the fork will become with time (and with more commercial use of Linux).

Why incorporate changes??? (1)

smackdaddy (4761) | more than 14 years ago | (#1583248)

Is there really any reason the changes would become part of the main kernel if the rest of it is closed? Part of the problem is that if the changes slow down a non-clustered kernel, they won't make it in; and if they don't, should Linus incorporate them anyway when the userspace clustering software won't be open, and so won't be usable by the free software community as a whole?

Re:This can't be good (2)

Pascal Q. Porcupine (4467) | more than 14 years ago | (#1583249)

But we already HAVE that kind of fragmentation in 8086 Linux and RTLinux and likely a dozen other special-interest kernel forks, and so far I haven't seen any collapse of the Linux world because of that...
---
"'Is not a quine' is not a quine" is a quine.

The Spirit of Linux (2)

The Welcome Rain (31576) | more than 14 years ago | (#1583250)

I'm surprised this is even an issue. Linux isn't NetBSD, with tight oversight and cathedral-like concentration on purity. This is Linux -- people are supposed to be able to contribute freely.

This isn't to say that all submitted diffs should be merged immediately, but why give up one of Linux's great strengths -- the ease of contribution?

--

Re:And this is different from Redhat how???? (1)

Thomas Charron (1485) | more than 14 years ago | (#1583251)

How does RedHat ship with a non-standard kernel?

Simply because they pre-set up the source files to make everything as modules?

The RedHat kernel is NOT heavily patched, nor has it ever been..

good direction (1)

dciman (106457) | more than 14 years ago | (#1583252)

I think this is an outstanding step for the market. Any new development that adds functionality to Linux is, in my opinion, a step in the right direction. I also tend to believe that if this all works out and becomes an in-demand feature, Linus will incorporate clustering into his kernel. After all, everyone working together to make better software is what Linux is all about.

What changes need to be made? (2)

Raven667 (14867) | more than 14 years ago | (#1583253)

The question of whether Linus will accept these kernel patches is a matter of what is being changed. If they are architecturally sound and take the kernel in a direction that Linus wants it to go, they will be incorporated; if they are just some glue for proprietary stuff that TurboLinux sells, then they don't have a chance in hell.

The other question is: could they, or would they, fork the kernel if TurboLinux doesn't get their way? The other solution is to either make do without their enhancements or port their patches to each kernel version. The second option is not too far from what other vendors do in backporting security updates to the old, shipping version of their kernel (COL w/2.2.10 has patches from 2.2.12/13 in a 2.2.10 update RPM). There are also other distros that add beta or unsupported patches, like devfs (correct me if I am wrong on this point; I haven't checked it personally).

What does the GPL allow? They don't own Linux; no one does. What would they be able to accomplish (barring Linus accepting their patches) without the support of the core developers?

I guess that I have more questions than answers. GPL'd software has never been as popular as it is now, and some of these issues are being tested on a large scale for the first time. Or maybe not; the GPL has been around for many years. Maybe this kind of thing has happened before and we can just look back and learn from experience. If anyone can point out an instance I would appreciate it greatly.

Enough rambling for one post.

I'm not sure I see a problem here (3)

Otto (17870) | more than 14 years ago | (#1583254)

If they fork the kernel, then they have the responsibility of maintaining their new kernel and integrating new features and so forth. Fine. They have that right, as long as all the source is available. Good for them! Code forks make the linux world a better place, because they cause the best options to be produced. Plus, standard linux can steal their code (the good parts) and integrate it back into the normal kernel if they want. Good too!

However, if they don't want all that responsibility, they can release kernel patches to be applied to the standard kernel to make it work with their system. Good too. Those may be eventually integrated into the standard kernel distribution, if they're worthy.

Either way, who cares? The ONLY entity this could hurt is TurboLinux itself, for the fear of being incompatible with the standard kernel. And that's not likely anyway..

This article is FUD.

---

This is GPL'ed - Linus and Alan Cox are not God (2)

Anonymous Coward | more than 14 years ago | (#1583255)

Look, this is all GPL'ed. So long as they ship the source, who cares? If TurboLinux have to toe the Linus line, or are prevented from doing something by Alan Cox, we are back to the same position we have with Microsoft.

The kernel is already forking, with the Red Hat patches and now Turbo Linux. We are living in a dream if we think that Linus is going to control all those vendors from doing their thing.

And now, to keep the Moderators happy: "Linux is cool, /. is cool, I hate Gill Bates".

Re:And this is different from Redhat how???? (1)

mindstrm (20013) | more than 14 years ago | (#1583256)

In what way is the redhat kernel non-standard?
What closed-source driver modules?
Heavily patched? As far as I've ever seen, RedHat simply compiles lots of modules; they don't use any patched kernel source to do it either.

As for modules, remember, binary-only (non-gpl) modules are only permitted if the hooks for them already exist in the kernel. If they don't, they can't do it.

Definitely not a problem... (1)

GaspodeTheWonderDog (40464) | more than 14 years ago | (#1583257)

This isn't anything new being done here. TurboLinux might have modified the kernel internals a bit and that is fine. As most Linux users know, you probably have no reason to go get the latest and greatest kernel anyway. If what you have now works fine then don't upgrade.



Users that choose to use TurboLinux should be made aware of the fact that these aren't 'official' kernel patches though. But as long as TurboLinux doesn't have to make an 'unofficial' kernel patch for every major kernel release I'd think that we'd be fine.



If the technology is cool enough to drive sales for TurboLinux anyway, then more than likely it will be added into the basic kernel distribution whether TurboLinux or somebody else does it. Just because TurboLinux is showing us the way doesn't mean they have to provide the only solution.



Given enough market demand this will be included into Linux, otherwise it will take the back seat and get done when somebody is bored enough or wants an interesting project.

There is no danger in forking GPL software (2)

Bruce Perens (3872) | more than 14 years ago | (#1583258)

Forks of GPL software are different from forks of software with other licenses, because their source must always be disclosed, and the source can be incorporated into any of the other forks at any time. Thus, forks will tend to converge as good features from other forks are incorporated into them.

We're also not talking about a "fork" so much as a patch to the main kernel thread. There's little chance that this patch would be allowed to diverge from the main kernel thread, as it's easier for TurboLinux to maintain it as an add-on - otherwise, they have to maintain an entire kernel rather than just a patch.

A lot of the talk about the danger of forking the Linux kernel is FUD or ignorance of the licensing issues.

Thanks

Bruce

Re:And this is different from Redhat how???? (2)

Bakeneko (103174) | more than 14 years ago | (#1583259)

Well, it DID have the hardware RAID support prior to it being in the main kernel, and I believe it had some knfsd NFS compatibility patches, although I'm not too sure about that one. If one wanted to upgrade the kernel and was using features patched in, one did have to patch the kernel a bit. I haven't checked lately how patched it is (in 6.1).

Tim Gaastra

Re:Could be good *or* bad (2)

costas (38724) | more than 14 years ago | (#1583260)

With my little Beowulf-building experience, I have to say I agree with you; not only is a specialized "cluster node" kernel not a bad thing, it's probably a needed step: e.g. cluster nodes need not be too picky about security/authentication when talking to their 'master node', since that's all they talk to. We already have so many patches to deal with that it's unwieldy (TCP patches, memory patches, unified process space patches, etc...)

As long as (a) the changes are made public (and the GPL so far has ensured that), and (b) the 'cluster' kernel follows (closely ;-) the tech advances of the main kernel (e.g. SMP, a sore point so far in clustering) we should be ok...

Just my $.02

Forking Problem (0)

c-A-d (77980) | more than 14 years ago | (#1583261)

This could be a very dangerous situation. If LT doesn't incorporate this (provided it is a superior technology) then there will be a fork...

I, personally, don't know why the core kernel team wouldn't incorporate it, unless it isn't superior technology.

As for Sun taking flak for partially closing the technology... they should make the APIs known and allow an OSS version of the user-space software to develop. Hopefully this will come out to benefit all users.

Re:Well... (2)

Sabalon (1684) | more than 14 years ago | (#1583262)

While it would have been polite, I thought that was the whole GNU idea - take the code, do what you want, release it.

Clustering Technology (1)

Elik (12920) | more than 14 years ago | (#1583263)

Well... from what I have seen posted all around so far of late... it seems prudent to wait to hear from Linus on his decision regarding the high-availability clustering technology that TurboLinux has created. If TurboLinux does release that code, and Linus does accept it, then all the better for all of us.

So... let's wait and see until we hear from Linus or Alan Cox regarding that technology. But it would be a good thing to have that feature added into the kernel. Speaking of that... what's up with Beowulf clustering technology? I have not seen it being incorporated into the kernel; it has only been released and made available from Redhat as Extreme Linux, if I remember correctly. Correct me if I am wrong on that part.

But anyhow... go and bug Linus and Alan Cox and pressure TurboLinux to contribute the entire code for the HA clustering technology to the kernel source tree so we can take a shot at it and improve it if needed, to make it better than it is right now. That is the whole point of the Open Source movement... for everyone to contribute to it and improve on it, instead of closed source like Microsoft NT currently is, which is downright buggy. Heck.. it trips up and dies when you try to poke at it with a stick. :)

Remember libc5 vs. GNU libc? (5)

Bruce Perens (3872) | more than 14 years ago | (#1583264)

We had a major fork of the Linux C library, "libc5", vs. GNU LIBC. That fork was resolved once the Linux fork had a chance to mature. Its modifications were incorporated into the main thread.

Folks, we've been here before. The forks converged. There's no reason that future forks of GPL software will not converge.

Thanks

Bruce

Startling News (3)

Anonymous Coward | more than 14 years ago | (#1583265)

and Giganet Inc. of Concord,Mass., for ``VI'' software that allows the cluster nodes to communicate with minimal overhead on the processors.

that must be some new functionality I wasn't familiar with. Thanks, Computerworld!

Oh, and Take _That_, emacs!!

:)

Re:RedHat/SCO (0)

Anonymous Coward | more than 14 years ago | (#1583266)

I have been using UNIX professionally for 10 years.
SCO is GARBAGE.

If they think people aren't buying that, that's THEIR problem. I'm buying it.

As for SCO.. well...
If customers come to you, and you offer them what they want, then good for you. If they stick with SCO over RedHat because SCO offers what they want, that's FINE! That's what a free market is all about!

Personally, I haven't seen a reason yet (other than the running of proprietary accounting software) to use SCO over linux.

well... (0)

Anonymous Coward | more than 14 years ago | (#1583267)

In many ways Linux has already forked (I know the kernel hasn't), what with the numerous distributions and whatnot. I can understand the different visions and forking of an OS like BSD; I cannot understand the numerous distributions of Linux. To me, it makes things a mess and there really needs to be more of a standard (packages and whatnot), but as has been proven before, trying to settle on that causes massive amounts of arguing and flame wars. This reply is not intended to do that, just to express my frustration with the lack of standards and better organization in the Linux world.

*sigh* (1)

heh2k (84254) | more than 14 years ago | (#1583268)

whoever said this is "unusual" obviously doesn't know what they're talking about. linux has "forked", more or less, many times, almost always ending up included in linus's tree. examples: RTlinux, mklinux, the L4 port, redhat including kernel patches (if you want to count that), and most archs are forked to a certain degree (ie, not always synced to linus, especially when first developed).

What danger? Geez. (4)

Kaz Kylheku (1484) | more than 14 years ago | (#1583269)

They make it sound like someone is jumping out of an airplane on a motorcycle or something.

So what if TurboLinux forks the kernel? They will either die out or have to keep a parallel development stream whereby they keep taking mainstream kernels and patch their changes onto them. No big deal. There are nice tools for this, like CVS update -j or GNU patch. Eventually, their stuff will mature and may be accepted into the mainstream.

Forking happened before (anyone remember SLS?).

I think that for any significant feature to be added by an independent software team, forking *has* to take place. In fact, Linux is continuously sprouting many short-lived forks. Any time a hacker unpacks a kernel and does anything to it, wham, you have a tiny fork. Then when it becomes part of the stream, the fork goes away. To create a significant feature, you may have to branch a much longer-lived fork. And to let a community of users test that feature, you *have* to release from that branch. Now crap: you are ostracized by the idiot industry journalists who will accuse you of fragmenting the OS.

Linus *can't* integrate Turbo's changes until those changes are thoroughly hammered on by Turbo users, so a fork is required. The only kinds of changes that Linus can accept casually are ones that do not impact the core codebase. For example, if someone develops a driver for a hitherto unsupported device, great. The driver can only screw up kernels that are built with it, or into which it is loaded. Just mark the driver as very experimental and that's it.

Conditional compilation and patching (1)

xmedar (55856) | more than 14 years ago | (#1583270)

Surely if TurboLinux et al have done their design correctly it should be possible to patch any kernel source with this code as the overall design is not going to radically change.

Forking *is* bad; see GCC and other projects (4)

Christopher B. Brown (1267) | more than 14 years ago | (#1583271)

There's not much question that there are some significant demerits to code forks. Consider the plethora of mutually incompatible patches to GCC that resulted from people supporting forks for:
  • Pentium optimization
  • Trying to support C++
  • FORTRAN
  • Pascal
  • Ada
  • Special forms of optimizations (IBM Haifa stuff, for instance)

The net result of the forks was that you could have a compiler that covered one purpose, but not necessarily more than one.

I do support of some R/3 [hex.net] code where our local group has "forked" from the standard stuff SAP AG [sapag.com] provides; it is a bear of a job to just handle the parallel changes when we do minor "Legal Change Patch" upgrades. We've got a sizable project team in support of a major version number upgrade; the stuff that we have forked will represent a big chunk of the grief that results in that year long project.

I would consider a substantial fork of the Linux kernel to be a significantly Bad Thing. [tuxedo.org]

Note that if it forks, the Turbo version may have a hard time supporting code written for the non-Turbo version. Major things that are likely forthcoming include:

  • New filesystem support, including LVMs, ext3, Reiserfs, SGI's XFS
  • New devices such as network cards, SCSI host adaptors, USB devices
  • Further support for 64 bit architectures, and support for 64 bit structures on 32 bit architectures (e.g. solving such issues as the 2038 Problem and the 2GB File Size Limit Problem and the 2GB Process Size Limit and such)
Deployment of such facilities would be substantially hampered by a kernel fork.

fork() (3)

Skyshadow (508) | more than 14 years ago | (#1583272)

This isn't the only place this is happening, y'know.

I have a friend who works at SGI, and we were just talking the other day about how their development people have been frustrated lately by their inability to get certain scalability-oriented bits included in the kernel. So, essentially, SGI's Linux is headed for this same sort of fragmentation for the same sort of reason.

I told him that if he killed Linux I'd slash his tires, but I don't think he took me seriously.

We in the community have nothing to fear but fragmentation itself. The 10,000 faces of UNIX is what originally killed it as a server operating system -- that's why I refer to Linux as being the Second Coming of UNIX so often. The really key thing is that it runs on a common platform (Intel) and it's not the mess that the commercial UNIXes evolved into during the last decade.

I don't know how to stop this from happening, only that it must be stopped.

----

Reminds me of Faust (1)

doomy (7461) | more than 14 years ago | (#1583273)

Here we have the usual characters, SCO and D.H. Brown, both brothers to Mephistopheles and both seeming to do something they have never done before. Oh god, support Linux? And that too in a potential fork? This does reek of a certain vain story, does it not? How do you make the kingdom fall? You infect it, of course.

And such an infection, as a bastard child, would most surely help sour the apples. Think of much ado about nothing, (Dude!).

And of course there are those who say that better things could happen from such a fork. But has that always been true? Somewhat... there are certain benefits and certain uncertainties associated with forking.

Then comes the question of clustering: why do the TurboLinux people want their clustering solution to be part of the official Linux kernel? There are several such solutions that predate it, and those never wanted to be part of the kernel (for good reason, too). Were these people set up from the beginning to fork? And notice when they announced this: just when a kernel freeze was in place. How convenient, and how easily they can fork now. But wouldn't just a patch do? I'm sure that's what ExtremeLinux and Beowulf do.

And thus,

To fork or not to fork, that is the question..

--

Re:And this is different from Redhat how???? (1)

artg (24127) | more than 14 years ago | (#1583274)

Redhat kernels (at least, the ones I tried ..) are not identical to the standard ones, and so the standard kernel patches can't be applied. This is a nuisance, but only to Redhat users who have to download a huge rpm instead of a few 100K patch file : it doesn't hurt anybody else.

The only other problem I've had is that Redhat initscripts require build-specific System.map and module-info files. The stock release doesn't create those, so you have to bodge around it. Maybe this is documented properly somewhere now - if so, I haven't found it yet. Again, a pain only to Redhat users.

Not that I agree... (1)

JeffI (87909) | more than 14 years ago | (#1583275)

If it came across that I was agreeing, I was not. My point was that if they (turbo linux) were sharing the view points of the analysts, in that they feared there may be a fork... then they should have submitted their changes to Linus first. But yes... I agree with you, that is the whole idea of GNU. So again, I say good job, and good luck to them.

Re:Could be good *or* bad (2)

Raven667 (14867) | more than 14 years ago | (#1583276)

As much as how every article on /. has a comment saying "Man, I'd like a Beowulf of these babies," most of the people saying that never will have a Beowulf or a need for a clustered system. (I mean, come ON, what would you, personally, use all that computing power for?)

PovRayQuake of course!

That is for the people who aren't simulating nuclear explosions of their neighbor's dog.

Re:Remember libc5 vs. GNU libc? (2)

doom (14564) | more than 14 years ago | (#1583277)

Do you have any insight into why the GNU Emacs / XEmacs split is staying split?

I actually think that this subject is really interesting... it would be really good to have someone do some serious historical research into code forks.

In particular, I suspect that BSD-licensed software is more susceptible to code forks than GPL software, because of the temptation to do proprietary closed-source forks. It'd take more knowledge than I have to pin down whether this is really the way it works.



Re:And this is different from Redhat how???? (4)

Blue Lang (13117) | more than 14 years ago | (#1583278)

Maybe these guys can explain to me how the inclusion of Pacific TurboLinux's unblessed kernel patches to support clustering is any different from the non-standard kernel that ships with Redhat.

Now they must follow GPL licensing restrictions, but this doesn't legally prevent them from selling a tailored distribution which contains a mix of GPL patches and proprietary closed source driver modules... and it's not any more forked than the heavily patched kernel source that ships with Redhat Linux.


Please don't moderate total falsehoods like this up - this is flamebait. Alan Cox, the actual primary code architect of the Linux Kernel, is a Red Hat employee. While RH does often ship a 'tweener' kernel, or one that is in some state of AC's patches, there is nothing at all non-standard about it. They simply ship the newest build that they have on hand at the time of pressing. They occasionally even update the kernel image during single revisions.

And, if I'm wrong, please reply with a list of drivers or patches that RH has included since, say, 4.0 or so, that weren't available as kernel.org + current AC patch.

Secondly, IMHO, SCO's CEO needs a lot more fiber in his diet. You could randomly take away every other file in Red Hat's distro, ship it, and it would STILL have 'more value' than SCO.

Re:RedHat/SCO (3)

SoftwareJanitor (15983) | more than 14 years ago | (#1583279)

Doug Michels shouldn't be expected to say anything else, but I don't see how he can expect anyone who has seriously used or evaluated both SCO's products (OpenServer and Unixware) and Red Hat's product to believe him. Obviously he is speaking to PHBs who don't know enough to dismiss his argument outright.

Certainly in price/performance, there can be little dispute that Red Hat beats SCO for commercial use in all but the most extreme circumstances. SCO's products are very expensive if you purchase all of their debundled pieces that it takes to match what you get in a Red Hat box for under $100. Let alone user based license fees. And even if you purchase all of SCO's commercial offerings, you still end up having to add a significant amount of open source to really make it comparable to Red Hat's offering, and that is all extra work.

Michels' point about Red Hat not adding extra value is misleading. It doesn't matter whether Red Hat themselves add value (as opposed to other Linux vendors such as SuSE or Caldera); what matters is the overall value of the package. There is no doubt in my mind that the overall package from Red Hat for most people has a much higher value than what you get from SCO, and at a small fraction of the price.

Re:And this is different from Redhat how???? (2)

maynard (3337) | more than 14 years ago | (#1583280)

Redhat kernels (at least, the ones I tried ..) are not identical to the standard ones, and so the standard kernel patches can't be applied. This is a nuisance, but only to Redhat users who have to download a huge rpm instead of a few 100K patch file : it doesn't hurt anybody else.

The only other problem I've had is that Redhat initscripts require build-specific System.map and module-info files. The stock release doesn't create those, so you have to bodge around it. Maybe this is documented properly somewhere now - if so, I haven't found it yet. Again, a pain only to Redhat users.


My point exactly... just compare a .depend made from a make config on a pristine kernel to one made with a Redhat supplied kernel to view the differences. This is not a value judgement against Redhat for including non-standard kernel patches with their product, they have every right to do so. Just as Turbo Linux has every right to modify the kernel and include non-blessed patches with their product, as long as they don't break the terms of the GPL. This is a non-issue, as so many others have stated.

s/.depend/.config (2)

maynard (3337) | more than 14 years ago | (#1583281)

See top... my mistake.

Re:Could be good *or* bad (2)

regs (18775) | more than 14 years ago | (#1583282)

As much as how every article on /. has a comment saying "Man, I'd like a Beowulf of these babies," most of the people saying that never will have a Beowulf or a need for a clustered system. (I mean, come ON, what would you, personally, use all that computing power for?)


Oh, I don't know... say, a Beowulf and a CD-ROM jukebox that could take in 200 CDs and spit out CDs filled with MP3s of the CDs in under an hour.



--

Re:Could be good *or* bad (1)

artg (24127) | more than 14 years ago | (#1583283)

And 6 months later, when there are plenty of major forks in existence, and Linux is still going strong with no damage as a result, the FUDders will look pretty silly ..

Re:*sigh* (1)

otis wildflower (4889) | more than 14 years ago | (#1583284)

Don't forget the PCMCIA driver pkg.. It works quite nicely as a kernel 'add-on'.. I can't see why any non-canonical kernel mods couldn't be packaged similarly, as long as the kernel builder has the smarts enough to manage it.. (and don't laugh, but the first time I upgraded my redhat laptop's kernel from 2.0.36 to 2.2.1 I broke PCMCIA and other bits, but I learned how to fix it after Reading TF HOWTOs.. Anyone building a HA solution had better know how to build and maintain a kernel!)

Your Working Boy,

Re:TurboLinux extensions (1)

Jay Maynard (54798) | more than 14 years ago | (#1583285)

[...] unless they provide some actual functionality to non-Turbo Linux clustering users, they *shouldn't* be incorporated into the main kernel fork.

I disagree, for a simple reason: Putting the functionality into the main kernel would allow the development of open source userland tools to do the same job as the tools TurboLinux is releasing as proprietary. How long do you think it'd be before something like that sprang up?
--

So, is ComputerWorld just clueless or... (0)

Anonymous Coward | more than 14 years ago | (#1583286)

... is it another Microsoft ActiveFUD (TM) post?

Clueless is OK. Most journalists are.

And this prevents using "standard" kernels how? (2)

Christopher B. Brown (1267) | more than 14 years ago | (#1583287)

I would be concerned about the customization if it prevented me from compiling my own kernel and using that instead.

I've not done a fresh install of RHL since 5.1, so "perhaps they've gotten tremendously more proprietary since," but I rather doubt that.

The concern with TurboLinux customizations is if this makes TurboLinux kernels not interoperable with other kernels.

This will only matter if people adopt TurboLinux in droves; if they do their thing, producing a bad, scary forked kernel, and nobody uses it, this won't matter. It's not like the "tree in the forest;" if nobody is there using TurboLinux, nobody cares about a disused fork.

who frickin' cares? (2)

jnazario (7609) | more than 14 years ago | (#1583288)

i mean, seriously. who frickin' cares what Linus says? you have the code. don't like it? who frickin' cares, incorporate your own changes. i do, and i love it. Linus' needs are what drive kernel development, not overall needs and issues. the PCMCIA shit should teach you that, as should the lousy IP stack implementation. it's about time someone stood up to this BS development model and actually did something based on performance or whatnot in a big way. the current model of "Well, it's Linus' OS" is a surefire way to stagnate development.

Re:TurboLinux extensions (2)

Fnkmaster (89084) | more than 14 years ago | (#1583289)

That depends on how much functionality is in their userland utilities. If those utilities are easily implementable, then my comment applies: *if* the improvements provide functionality to the rest of us, they should be added to the kernel. Clearly, if the userland utilities are a small part of the HA clustering technology and we could implement them ourselves, then we obviously should add the TurboLinux kernel code to the primary fork. If, however, they are keeping all the meat to themselves and essentially adding a minimal amount of functionality to kernelspace, then there's no reason to.

The point is, since I haven't seen the source nor heard from a more technically sophisticated source than this article, I don't know how much stuff they are using in kernelspace. However, I have the utmost faith in the kernel maintainers (Linus, Alan, etc.) and the desires of the Linux user base as a whole to direct patch incorporation into the kernel in the most appropriate way. What I said still holds: if their patch adds value for us (or can be made to add value with a reasonable amount of effort), then by all means it should and will be put into the main kernel fork.

Re:fork() (3)

Parity (12797) | more than 14 years ago | (#1583290)

I know how to stop it from happening, but I don't have the power to -do- so.
Just get Linus &co. to add all the 'inferior' patches to the kernel and put them in as non-standard build options...

Build with SGI scalability extension (non-standard) [y/N]?
Build with TurboLinux clustering extensions (non-standard) [y/N]?

Maybe give them their own 'non-standard extensions' section with warnings that enabling these extensions may break things, these extensions are not as thoroughly tested as the 'main' portion of the kernel, etc, etc.

It's not like there aren't unstable/experimental build options already.
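
For what it's worth, here is a tiny, self-contained C sketch of what such a compile-time switch amounts to. The CONFIG_TURBOCLUSTER symbol and the functions are made up for illustration (nothing here is taken from an actual TurboLinux or SGI patch); the point is just that a non-standard extension can compile down to a no-op unless the option is explicitly turned on at build time.

#include <stdio.h>

/* Hypothetical stand-in for a "non-standard" kernel build option such as
 * the SGI or TurboLinux extensions discussed above.  Build normally and the
 * extension is compiled out; build with -DCONFIG_TURBOCLUSTER and it's in. */
#ifdef CONFIG_TURBOCLUSTER
static int cluster_route(int node_count)
{
    /* pretend "extension" code: pick a node for the next request */
    return node_count > 0 ? 1 : 0;
}
#else
static int cluster_route(int node_count)
{
    (void)node_count;   /* extension disabled: zero-cost no-op */
    return 0;
}
#endif

int main(void)
{
    printf("clustering extension %s\n",
           cluster_route(4) ? "enabled" : "compiled out");
    return 0;
}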

Re:And this prevents using "standard" kernels how? (2)

maynard (3337) | more than 14 years ago | (#1583291)

I would be concerned about the customization if it prevented me from compiling my own kernel and using that instead.

And how are you prevented from compiling and booting a standard "blessed" Linux kernel on Pacific Linux? You may lose the clustering capabilities, but that's no different from compiling a non-RAID-enabled kernel on a system which depends on the RAID capabilities that were included as non-blessed patches in previous Redhat releases.

I know about one change mentioned (2)

wmshub (25291) | more than 14 years ago | (#1583292)

The "VI" system mentioned in the article is probably one of the changes. I have never used VI under Linux, or VI with the Giganet hardware, but I wrote the original VI prototypes for Windows NT. It's a communication system that gets lower latency than the kernel TCP/IP stack by exporting some hardware registers directly to the user applications, allowing them to send and recieve network data without ever doing a kernel call. You need special hardware to do this without creating huge security holes of course! You also need an extra kernel interface to allow the user program to pin/lock some amount of virtual memory, and a special user-level communications stack. This can't be used to talk to computers across the internet because it doesn't use IP protocols. But if you have a cluster application where message latency is critical, it can give you a big performance boost.

PS - This was a much bigger benefit under Windows NT, where the system call overhead was much higher than it is under Linux. But it should still help out Linux.
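
For anyone curious what "no kernel call per send" looks like in practice, here is a minimal C sketch of that style of user-level send path. Everything in it is hypothetical (the /dev/vi0 device, the descriptor layout, the page offsets); it is not the actual Giganet/VI interface or TurboLinux code, just the general shape of the technique: map the doorbell register and a pinned send queue once, then do each send with plain memory writes.

/* Hypothetical sketch of an OS-bypass ("VI"-style) send path.
 * Setup goes through the kernel once (open + mmap); the fast path
 * afterwards is nothing but stores to mapped memory. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

struct send_desc {                 /* hypothetical descriptor layout */
    uint64_t buf_addr;             /* address of a pre-pinned payload buffer */
    uint32_t len;
    uint32_t flags;
};

int main(void)
{
    /* Hypothetical device node exported by the VI driver. */
    int fd = open("/dev/vi0", O_RDWR);
    if (fd < 0) { perror("open /dev/vi0"); return 1; }

    /* Map the NIC's doorbell page and the send queue into user space.
     * This setup step is the only place the kernel is involved. */
    volatile uint32_t *doorbell =
        mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    struct send_desc *queue =
        mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 4096);
    if (doorbell == MAP_FAILED || queue == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* Fast path: build a descriptor and ring the doorbell.
     * No read()/write()/send() system calls here at all. */
    static char payload[64] = "hello, cluster node";
    queue[0].buf_addr = (uint64_t)(uintptr_t)payload;  /* assumes pinned memory */
    queue[0].len      = sizeof payload;
    queue[0].flags    = 1;                             /* hypothetical "ready" bit */
    *doorbell = 1;                                     /* NIC picks up descriptor 0 */

    close(fd);
    return 0;
}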

Of course it is! (1)

Anonymous Coward | more than 14 years ago | (#1583293)

Remember where "Gartner kiss of death" came from?

IDG == FUD
IDG == FUD
IDG == FUD
IDG == FUD
IDG == FUD
IDG == FUD
IDG == FUD
--More--(0%)

It'll have to wait till 2.6 (1)

bug1 (96678) | more than 14 years ago | (#1583294)

2.3 has been feature-frozen for a while now, so by rights it won't be in 2.4.

There are plenty of useful features waiting to get into the kernel.

For example, new-style RAID (v0.9) and the big ISDN patches. These features have been standing in line waiting to get in, and are more useful to the general public than kernel support for a proprietary userland program (if that's what it is).

But I have faith in the powers that be; the kernel maintainers do what they think is right for the kernel one way or the other. I don't think this attempted threat of a fork will sway their decision whatsoever.

Re:RedHat/SCO (0)

Anonymous Coward | more than 14 years ago | (#1583295)

I don't know how RedHat compares to SCO (as I've never used the latter), but don't forget that RedHat only develops about 10% of the code that is in the package whereas SCO develops its own kernel and everything.

Re:Excessive Credit? (2)

H-Monk (34820) | more than 14 years ago | (#1583296)

MOSIX was designed to distribute multiple processes throughout several machines.
It really isn't useful in a network server environment, but it's very useful for computation-intensive work (especially work that doesn't need to hit the disk that much). Actually, besides some difficult security concerns, MOSIX may even make network server software less efficient.


For TurboLinux, from what very little I know about it, the opposite is true (it's designed only for internet server things).

The stuff TurboLinux is doing doesn't seem earth-shattering to me, either. Useful, maybe, but many others have done or are doing similar things that might be better.


Now, what would be great is to have for Linux what VMS had (to be more specific, it was OpenVMS, I think); it would have some exciting consequences.

Linus' needs... (1)

Yogurtu (11354) | more than 14 years ago | (#1583297)

"Linus' needs are what drive kernel development, not overall needs and issues."

Well I sure want to have 'needs' like those...
As for the PCMCIA and IP implementations, I dunno: did you actually do better and get your changes rejected?

JM

They're using VI for *that*? (2)

scjody (19861) | more than 14 years ago | (#1583298)

...and Giganet Inc. of Concord,Mass., for ``VI'' software that allows the cluster nodes to communicate with minimal overhead on the processors.

Wow. VI has always been my choice for situations when I didn't want the overhead of EMACS, but I didn't know it did clustering! :) :) And who are these Giganet people? Is that like nvi or vim?

Let's get this straight... (4)

jd (1658) | more than 14 years ago | (#1583299)

  1. Any changes TurboLinux make to the kernel must be made available, in their entirety, to all other developers and distributions, under the GPL.
  2. If people like the mods and Linus chooses not to use them, then the distributors will simply package them up with the distributions anyway, so there's no fragmentation.
  3. If people like the mods and Linus chooses to use them, there's no fragmentation.
  4. If people don't like the mods, it doesn't matter, as nobody'll make use of them.
  5. This is not significantly different from any of the "non-standard" kernel patches that are provided, be it from Alan Cox (whose patches are worth two or three "official" ones), or anyone else. (PPS is unlikely to make it into the kernel. Nor are any of the ACL patches. The International patches and IPSEC can't, until there's worldwide agreement on crypto tech. The E1000 patches from Intel aren't being offered to be part of the kernel. Nor were the Transputer patches.)
  6. The whole point of Open Source and the GPL is that you have evolution, and evolution requires evolutionary pressure. You only get that when the environment changes, or alternatives are competing with each other.

Re:Fork? Big deal (2)

Panaflex (13191) | more than 14 years ago | (#1583300)

I agree. TurboLinux can take whatever business tack they want, as long as they stick to their licensing.

But I don't like the paragraph..

There is precedent for Torvalds quickly deciding to incorporate changes to the kernel produced by commercial developers, Iams said. Engineers at Siemens and Linux distributor SuSE Inc. provided a 4G-byte memory extension that Torvalds incorporated.

This seems to be a backhanded swipe at Linus. They make it seem as if Linus should do it because he did it for someone else. Well, SGI has had a bunch of patches rejected ( http://oss.sgi.com/projects/linuxsgi/download/patches/ [sgi.com] ). So have a lot of others. Tough luck... But a precedent?

Media Pressure on Linux is dirty, ignorant, and non-productive when you say someone should be doing this. Computerworld sucks and blows at the same time.

Re:There is no danger in forking GPL software (1)

doomy (7461) | more than 14 years ago | (#1583301)

I believe (if I'm not mistaken) the TurboLinux changes consist of front-end patches (much like any other patches) to the kernel that run in kernel space, and certain user-space packages (which would remain commercial and possibly be given out in binary-only format, which is really silly).

What we should really concentrate on is those userspace packages, not any kernel additions. The kernel is safe by its nature.
--

Re:Remember libc5 vs. GNU libc? (2)

Alan Shutko (5101) | more than 14 years ago | (#1583302)

Emacs and XEmacs are staying split mostly for two reasons.

The first is that RMS won't put any sizable code into Emacs without legal papers assigning copyright to the FSF or placing the work in the public domain. (One line bug fixes are ok, though.) Given that RMS has been burned in the past, this is an understandable position. But it does mean that he can't simply lift code from other GPLed stuff (ie, XEmacs) without the author signing said papers. Since XEmacs doesn't do this, the specific author of a piece of code isn't always known, or may be difficult to contact.

The second reason is due to a personality conflict between certain XEmacs developers and RMS. Since I'm not a party to any of the conflicts, I can't comment in detail, but it does make getting those legal papers a bit more difficult (read as "hell will freeze over first").

Malicious use of moderation today (1)

Bruce Perens (3872) | more than 14 years ago | (#1583303)

Someone moderated that down as "Troll"???

Re:RedHat/SCO (1)

yorkie (30130) | more than 14 years ago | (#1583304)

SCO UNIX (at least the releases I have used) and prior to that Xenix were very poor implementations of Unix.

When I had to support a number of SCO systems it was very buggy, and difficult to install on fairly common hardware platforms.

Back in 1992 I supported a number of Xenix systems. There was a well-known bug in the fs code that caused the superblock to erroneously report too little free space, requiring a regular fsck. Could SCO be bothered to fix it? No!

Later we switched to SCO Unix - this was equally poor. For example, installing and removing an LPD-aware version of the print subsystem (via a couple of MKDEV scripts) completely messed up the entire print system, requiring a reinstallation of base OS packages. The cause of the fault was an error in the script - it made a backup copy of the executables, but copied the files back. A bit of care could have prevented the whole problem. There were some other equally hairy faults, but the mists of time have clouded my memory.

The biggest pain was having to pay more for the compiler, TCP/IP and NFS - each was then a different package.

Obtaining patches from SCO used to be very difficult, with a list of files but no description of what they did. Support in the UK was non-existent - we had a 3rd party to go through, who, like 99% of 3rd-party support organisations, knew sod all. (It was made worse by my boss selling a SCO support contract which covered both our software and the OS to a major 24-hour site.)

SCOs lack of QA testing for the OS lost them (and UNIX in general) at lot of users. The sooner SCO disappears from the planet, the better!

TurboLinux's Kernel (5)

docwhat (3582) | more than 14 years ago | (#1583305)

Hello!

I am the kernel maintainer for TurboLinux. I'd like to dispel a few myths here:

  • The kernel isn't "forking" from what Linus distributes any more than Debian, Redhat, SUSE, etc. do. We add extra patches for enhanced functionality, like RAID, IBM ServeRAID, etc.
  • The actual kernel patch that is used by TurboCluster is *in the kernel rpm*. You can grab the source rpm and look at it.
  • The TurboCluster was based upon the Virtual Server in the beginning. Since then we have hired a company to re-write it from scratch. There is nothing left of VS in the Cluster code except some concepts (but none of their code). Did I mention it is GPL'ed in the source?
  • Did I mention that all the patches are available from the kernel source RPM?
  • At some point, the Cluster module will be submitted to Linus. However, we only know it works on 2.2.x. I *will* submit it for 2.3 (or 2.5, if it doesn't make 2.3), but I am in the process of re-writing the kernel RPM and am very busy. It needs to have all the CONFIG options and such added in, and to be checked to work on 2.3.x.
  • The TurboClusterD (the only non-GPL part of TurboCluster) will be OpenSource'd in the future. Our current plan (this is *not* an official commitment) is to release it as the next version comes out. The next version will be much better, of course.

I hope this addresses some people's concerns. Don't worry, I am **very** pro-GPL and am responsible for sanity checking these choices.

Ciao!

(aka Christian Holtje <docwhat@turoblinux.com [mailto]>)
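For anyone who wants to check that claim, a minimal sketch of pulling the bundled patches out of a kernel source RPM and test-applying one against a vanilla 2.2 tree could look like the following; the package path and patch file name here are illustrative guesses, not the actual TurboLinux names.

    # Unpack the source RPM without installing it (any scratch directory works)
    mkdir kernel-srpm && cd kernel-srpm
    rpm2cpio /mnt/cdrom/SRPMS/kernel-2.2.*.src.rpm | cpio -idmv

    # List the patches shipped inside the source package
    ls *.patch *.dif* 2>/dev/null

    # Test-apply the cluster patch on top of a pristine 2.2 tree
    cd /usr/src/linux
    patch -p1 --dry-run < ~/kernel-srpm/turbocluster.patch   # check it applies cleanly
    patch -p1 < ~/kernel-srpm/turbocluster.patch              # then apply for real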

Re:Forking *is* bad; see GCC and other projects (3)

David Greene (463) | more than 14 years ago | (#1583306)

GCC is not a good example of a code fork problem. If anything, it proves the value of the ability to fork.

GCC became forked because the FSF sat on changes that were being submitted. For years. EGCS was an attempt to get working C++ code out to the general public (Cygnus had been releasing it as part of GNUPro for some time). EGCS literally saved the project I was working on and I'm sure it did the same for others.

Now that EGCS and GCC are back together as one, some of the other forks are being rolled in (Haifa, FORTRAN and Ada for sure, though I don't know what's happening with PGCC).

The act of forking caused the FSF to get off their collective duff and do something. That's a Good Thing [tuxedo.org] .

--

Re:RedHat/SCO (1)

jemfinch (94833) | more than 14 years ago | (#1583307)

So you'd buy a bicycle built from scratch rather than accept a car for free?

Re:There is no danger in forking GPL software (2)

Bruce Perens (3872) | more than 14 years ago | (#1583308)

I think it's a non-issue. Open Source versions of the same facilities are already at least in part there; whatever is missing should be filled in soon enough.

Bruce

Re:Fork? Big deal (2)

mochaone (59034) | more than 14 years ago | (#1583309)

While Linux is considered a Unix clone, keep in mind two big differences.

1) Linux has always been open. The Unix vendors, on the other hand, released commercial, proprietary, closed OSes.
2) Linux has a clearly defined "lead" developer. The Unix vendors were led by nameless businessmen.

Regardless of whether TurboLinux's changes are the greatest thing since sliced bread, if Linus doesn't think they deserve inclusion in the next kernel release, they will go off on their own and sort of do a slow death-dance. Linus, along with his horde of developers, has gained the respect of developers and business folks and is accepted as the true steward of the Linux system. There is no one else around who can claim equal credibility and usurp momentum from Linus and gang.

The Unix vendors ran into trouble when they started to incorporate proprietary code into their versions and closed development. Linux will never encounter this problem: anything based off the Linux kernel can be re-incorporated into the kernel.

Linux is in no trouble from code forking at all.

Re:Forking *is* bad; see GCC and other projects (0)

David Greene (463) | more than 14 years ago | (#1583310)

GCC is not a good example of a code fork problem. If anything, it proves the value of the ability to fork.

GCC became forked because the FSF sat on changes that were being submitted. For years. EGCS was an attempt to get working C++ code out to the general public (Cygnus had been releasing it as part of GNUPro for some time). EGCS literally saved the project I was working on and I'm sure it did the same for others.

Now that EGCS and GCC are back together as one, some of the other forks are being rolled in (Haifa, FORTRAN and Ada for sure, though I don't know what's happening with PGCC).

The act of forking caused the FSF to get off their collective duff and do something. That's a Good Thing [tuxedo.org] .

Linux would not be forked for the same reason (I hope!). It would be forked if Linus and others didn't like the TurboLinux changes. This (hopefully) prevents cruft from entering the mainstream kernel.

--

Re:Short and sweet.. (0)

Anonymous Coward | more than 14 years ago | (#1583311)

It's short and sweet.

Not sure if it's a clitoris, of course.

Re:A Problem, Really? (0)

Anonymous Coward | more than 14 years ago | (#1583312)

Well, VA has developed really cool open source clustering management software. The only drawback is that it requires Intel motherboards with the EMP server management port - that's the only real way to do it without buying additional hardware. It's really cool, though: you can halt and boot machines (staggered or all at once), distribute upgrades across the cluster, and monitor performance, hardware settings, errors, anything. Pretty neat stuff... VA has been doing some cool cluster work as well; they ship with a tuned kernel, as I understand it. I wonder what the difference is between what they do and what TurboLinux does...

Xemacs and Emacs (3)

Bruce Perens (3872) | more than 14 years ago | (#1583313)

That's an FSF-specific issue. Linus doesn't insist on the same copyright sign-over. That, by the way, effectively locks Linus (and everyone else) into the GPL version 2, which most people believe is a good thing. Now that there are so many contributors, it's just not possible to get everyone to agree to change the license. No doubt some of those copyright holders have died, etc.

Thanks

Bruce

Ain't no big thang.. (1)

fuerstma (15683) | more than 14 years ago | (#1583314)

I don't really see what the problem here is. It's not as if the people who are going to go this route and pay all this money for this level of performance expect to download the Netscape RPM from redhat.com and have it install cleanly. Obviously things like these clustering technologies are going to be used in places where custom software is written to take advantage of them. In the end, the company that forks over all the dough to adopt such a technology is going to be the one hurting if it later decides to go back to a "standard vanilla" Linux clone.

The more diversity the better. This can better serve the market of people who need clustering technologies, not the joe-blow hobbyist... The more Linux around, the better...

Re:Fork, schmork...but... (0)

Anonymous Coward | more than 14 years ago | (#1583315)

However, these other distributions serve specialized niches and are still free! They don't charge money for a proprietary implementation.

Re:Well... (0)

Anonymous Coward | more than 14 years ago | (#1583316)

Yes, you CAN. That doesn't mean it's good practice... But what has been done before, and will be done again, is that someone forks the kernel, makes their changes, and when the changes are mature enough they are accepted back into the kernel, or something similar is.

It's only a problem if the forked kernel gains a significant userbase and the changes affect other software...

Re:*sigh* (0)

Anonymous Coward | more than 14 years ago | (#1583317)

The 'aftermarket' bolt-on PCMCIA kludge in Linux was one of the reasons I installed NetBSD instead of Linux on my laptop a year ago.

PC-Card ethernet is compiled right into the NetBSD kernel, so doing an NFS install of NetBSD onto a laptop isn't a crisis like installing RedHat is. (well, RedHat anything is a crisis waiting to happen, IMHO)

Re:There is no danger in forking GPL software (2)

Chris Johnson (580) | more than 14 years ago | (#1583318)

To be specific, though: it is possible (if you're a corporation) not only to fork anything GPLed but also to have big teams of programmers working on it full tilt without disclosing their information, but when the product is released they _do_ have to release the information.
It's possible to maintain such a fork in 'no cooperation' mode indefinitely, but at a very crippling cost: to keep it under total control you'd have to be changing things radically enough that no outside influences would be relevant. Otherwise things would converge. Particularly with regard to the Linux kernel, even a _hostile_ attempt to fork it and take over control is a losing game, requiring a really large amount of effort for a very unimpressive return. Yes, if you're a corporation you can devote more resources to a private development effort than individuals can, but then you have to release source (and not obfuscated source, either), and this makes it difficult to use this mechanism for more than hit-and-run marketing games.

Re:RedHat/SCO (0)

Anonymous Coward | more than 14 years ago | (#1583319)

So you've used every version of SCO except UnixWare, which is their modern product. I know that SCO has a history of suckiness, but isn't UnixWare pure System V UNIX by way of AT&T, Sun, and Novell? Has anyone here even tried it?

Re:Short and sweet.. (1)

mochaone (59034) | more than 14 years ago | (#1583320)

You are correct, sir! === lame Ed McMahon impersonation.

Re:What changes need to be made? (4)

docwhat (3582) | more than 14 years ago | (#1583321)

Aaahhhh! No! I refuse to fork the kernel! ;-)

We are overworked as is. I will not, as TurboLinux's Kernel Maintainer (Kernel Colonel?), fork the kernel off. Having Alan Cox and the wonderful crew on linux-kernel maintain the core stable kernel makes my life *much* easier.

The Cluster Module is just a module! It can be compiled later, after the kernel is done. It cannot (yet, as far as I can see) be compiled into the kernel as a non-module.

Feel free to grab the cluster module and see for yourself (You'll need to hold shift):
cluster-kernel-4.0.5-19991009.tgz [turbolinux.com]

Ciao!
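To put "just a module" in concrete terms: on a 2.2.x kernel an add-on module is normally compiled against the configured kernel headers and loaded at run time, so the kernel image itself never has to be rebuilt. A rough sketch of that cycle follows; the source file name and include path are assumptions, not the actual TurboCluster build.

    # Build the module object against the installed kernel headers
    gcc -O2 -Wall -D__KERNEL__ -DMODULE \
        -I/usr/src/linux/include \
        -c cluster.c -o cluster.o

    # Load it, confirm it is resident, and unload it again
    insmod ./cluster.o
    lsmod | grep cluster
    rmmod cluster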

Re:Malicious use of moderation today (1)

irix (22687) | more than 14 years ago | (#1583322)

Whoever has been doing this will get theirs in meta-moderation.

You can also e-mail the cid to Rob and he will probably revoke their moderator access. This is definitely abuse.

Re:Short and sweet.. (0)

Anonymous Coward | more than 14 years ago | (#1583323)

Kitsune, I know you can't read this, but you have no penis.

No, anon-boy, he'll always have you.

Re:Forking *is* bad; see GCC and other projects (3)

JordanH (75307) | more than 14 years ago | (#1583324)

The plethora of mutually-incompatible patches to GCC that resulted from people supporting forks for:
  • Pentium optimization
  • Trying to support C++
  • FORTRAN
  • Pascal
  • Ada
  • Special forms of optimizations (IBM Haifa stuff, for instance)

The net result of the forks was that you could have a compiler that covered one purpose, but not necessarily more than one.

All of the things you mention above are good things to support. They all have their market and perhaps none of them would have been available had we waited for complete consensus among all GCC developers to bless every change.

Code forks are just healthy competition. Remember that? Competition?

You fail to mention that a lot of these things were eventually folded back into the latest GCC versions.

The EGCS split was eventually folded back into the mainline, and the result is a better GCC, I think. People were allowed to go their own way, proving their approach good and when the fork was unforked, it benefitted everyone.

I do support of some R/3 code where our local group has "forked" from the standard stuff SAP AG provides; it is a bear of a job just to handle the parallel changes when we do minor "Legal Change Patch" upgrades. We've got a sizable project team in support of a major version-number upgrade; the stuff that we have forked will account for a big chunk of the grief in that year-long project.

Oh, so you're having problems with parallel changes. Hmm... This is bad. I know. Don't make any local changes! Use the SAP out-of-the-box. Whew! That was easy, problem resolved, the badness of a code fork vanquished once and for all.

What's this I hear? You need those changes? Those changes are there for a good reason? Oh, well, I guess nothing worthwhile doesn't have a price, eh?

Sure, it's a bear to synchronize parallel updates, but that's no justification to never fork.

The ability to fork is an important aspect of the software's essential freedom [fsf.org] . If we never fork, we may miss out on important development directions.

Besides, there already are a number of Linux code forks out there. People are still developing on the 2.0, 2.1, 2.2, and now 2.3 and 2.4 kernels. Each of these represents a fork. When someone improves a 2.2 kernel in some significant way, someone will probably try to integrate those changes into the 2.3 and 2.4 kernels.

What people are really concerned about here is that Linus will no longer have control over the forks.

My guess is that Linus would welcome the contributions. Remember that anything these TurboLinux people might do would be available to be merged into a Linus blessed kernel in the future.

Hey, if these are real improvements, I'm just glad they're putting them into a GPL OS rather than doing them (again and again) to some proprietary commercial OS.

The forks that have occurred in the *BSD world haven't seemed to hurt them. *BSD is gaining support all the time, we read. The various *BSD projects have learned a lot from one another. The only forks in *BSD that one might argue don't contribute to the Open Source world are the ones by BSDI and other commercial interests. Even these have probably helped popularize *BSD operating systems.

Re:Let's get this straight... (0)

Anonymous Coward | more than 14 years ago | (#1583325)

1.Any changes TurboLinux make to the kernel must be made available, in their entirety, to all other developers and distributions, under the GPL.

Well, either they do, or the GPL fails when it sees its day in court.

It hasn't seen its day in court. Probably this, more than anything else, is why buying Red Hat stock right now is a speculative venture.

I think fun times are ahead. I think it would be more interesting if the GPL were nullified by some legal cases. That's just me, of course...

The real issue is... (2)

chuckw (15728) | more than 14 years ago | (#1583327)

The real issue is how much the commercial world can pull on Linus's reins. These capabilities should be in Linux, but only if it makes sense. If Linus evaluates them and they agree with his overall vision for the Linux kernel, then by all means they should be included. If he incorporates them because he fears a code fork, he sends the message that he can be manipulated by some large entity. I look forward to seeing how this turns out.
--