Linux Kernel to Fork?

CmdrTaco posted more than 9 years ago | from the remember-when-the-kernel-spooned dept.

Operating Systems

Ninjy writes "Techworld has a story up about the possibility of the 2.7 kernel forking to accommodate large patch sets. Will this actually happen, and will the community back such a decision?"

578 comments

fp? (-1, Troll)

Anonymous Coward | more than 9 years ago | (#10880631)

aybabtu!

From the article... (5, Insightful)

Anonymous Coward | more than 9 years ago | (#10880632)

> Each version of the kernel requires applications to be
> compiled specifically for it.

FUD FUD FUD. No. no no no. NO! Who writes this generic shit? There's no truth behind the above statement, and it implies something that is not a problem.

Re:From the article... (-1, Troll)

Anonymous Coward | more than 9 years ago | (#10880658)

Actually, it's true when you compare versions across their minor numbers. Applications compiled for 2.2 probably won't work on 2.4 and definitely won't work on 2.6. You'll be safe compiling on 2.4.16 and running on 2.4.25, though.

Re:From the article... (1, Interesting)

Anonymous Coward | more than 9 years ago | (#10880692)

Bullshit. The only thing that breaks is system utilities.

Re:From the article... (-1, Flamebait)

Anonymous Coward | more than 9 years ago | (#10880727)

*boots up windows, relaxes*

Re:From the article... (0, Offtopic)

FooBarWidget (556006) | more than 9 years ago | (#10880789)

Until you see a game failing to load in XP SP 2, no matter what you try.

Re:From the article... (-1, Offtopic)

Anonymous Coward | more than 9 years ago | (#10880802)

Firstly, which game?

Secondly, have you tried WINE?

Re:From the article... (0, Offtopic)

Darren Winsper (136155) | more than 9 years ago | (#10880854)

Freedom Force.

Re:From the article... (0)

Anonymous Coward | more than 9 years ago | (#10880731)

Most people update their distribution to get a new kernel version, and that comes with a new glibc, new libstdc++ or whatever, causing the incompatibilities we all learned to love.

Re:From the article... (0)

Anonymous Coward | more than 9 years ago | (#10880922)

What are you talking about? Linux and GNU solved library versioning long, long ago. If certain applications have problems then that's the developers' fault for not understanding what they're doing. System libraries like Glibc, libstdc++ and libgcc have had versioned APIs for several years now.
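
As a rough illustration of the symbol versioning being referred to here (a sketch only: the function and the LIBFOO_* version names are invented, and a matching linker version script is assumed when building the shared library):

    /* GNU symbol versioning, glibc-style: old binaries keep resolving the
     * symbol version they were linked against, new binaries get the default. */
    int foo_old(void) { return 1; }   /* behaviour frozen for old callers */
    int foo_new(void) { return 2; }   /* current behaviour, new default   */

    __asm__(".symver foo_old,foo@LIBFOO_1.0");
    __asm__(".symver foo_new,foo@@LIBFOO_2.0");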

Re:From the article... (4, Insightful)

boaworm (180781) | more than 9 years ago | (#10880734)

Perhaps he is referring to "Applications" such as the "Nvidia Driver Software" for Linux? That has to be rebuilt/recompiled if you switch kernels, even when switching between 2.6.9-r1 and -r2 etc. (Gentoo!).

Perhaps he is not talking about applications such as "Emacs" or "vim" ? (Or, he just finished his crackpipe :-)
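
To make the distinction concrete: the piece tied to the kernel here is an out-of-tree module, not the application itself. A bare-bones sketch of one (assuming the 2.6-era module interface; the module itself is hypothetical):

    #include <linux/init.h>
    #include <linux/module.h>
    #include <linux/kernel.h>

    MODULE_LICENSE("GPL");
    MODULE_DESCRIPTION("Illustrative hello-world module");

    static int __init hello_init(void)
    {
            printk(KERN_INFO "hello: loaded\n");
            return 0;
    }

    static void __exit hello_exit(void)
    {
            printk(KERN_INFO "hello: unloaded\n");
    }

    module_init(hello_init);
    module_exit(hello_exit);

Because it is compiled against one specific kernel's headers and internal structures, it has to be rebuilt whenever the running kernel changes -- which is exactly the 2.6.9-r1 to -r2 annoyance described above.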

Re:From the article... (2, Insightful)

Darren Winsper (136155) | more than 9 years ago | (#10880843)

The only part that needs to be recompiled is the kernel module, and it's not an application, it's a fucking kernel module!

Re:From the article... (5, Insightful)

DanTilkin (129681) | more than 9 years ago | (#10880759)

FUD [catb.org] generally implies deliberate disinformation. All I see here is a clueless reporter. To anyone who knows enough about Linux to make major decisions, "Linus Torvalds will fork off Version 2.7 to accommodate the changes" should make it clear what's going on, and then the rest of the article makes much more sense.

Ours is not to wonder why. (4, Interesting)

Doc Ruby (173196) | more than 9 years ago | (#10880812)

We don't know why this reporter is spreading Fear, Uncertainty and Doubt. Maybe they were misinformed, and lack critical skills required to be a journalist. Maybe they were informed, but are looking for something sensational to get readers (it worked). Maybe they're trying to impress their mother somehow, without even realizing they're making up for a playground trauma from 1983. Who knows? Who cares? They're a FUDder - we're interested in the damage they cause, not the damage that was done to them. That's their problem, unless we propose a massive mental health makeover for the world's journalists. That would probably decimate the ranks of the industry, allowing them to get real jobs.

Re:From the article... (5, Interesting)

elendril (15418) | more than 9 years ago | (#10880800)

You're right: each version of the kernel doesn't require applications to be compiled specifically for it.

Yet, where I work, the applications have to be specifically recompiled for each of the three versions of the Linux distribution currently in use.

While it may be mainly the in-house distribution designers' fault, it is a real mess, and a major reason why many of the engineers stay away from Linux.

Re:From the article... (1, Insightful)

Reality Master 101 (179095) | more than 9 years ago | (#10880921)

Maybe not the kernel, but one thing that I despise about Linux is the library dependency hell. I can download a binary onto Windows, and it just works. For a hell of a lot of binaries, I simply can't under Linux. I have to recompile the f-ing source for it to link to the right libraries.

Gah, I get irritated just thinking about it. I hate, hate, HATE this about Linux.

Nothing weird in that. (3, Interesting)

Anonymous Coward | more than 9 years ago | (#10880633)

They just got too many weird patches, and had to put them somewhere.
Business as usual.

Oh dear (0, Troll)

Poeloq (704196) | more than 9 years ago | (#10880634)

It's history repeating itself, isn't it? If this happens, it's not going to help new members of the community. First post?

Uh-oh (-1, Flamebait)

Anonymous Coward | more than 9 years ago | (#10880636)

Forking could potentially be a travesty for Linux. Already there is confusion in the marketplace with the various distributions (are they different OSes?).

Already there are massive problems with dependencies.

Already there are huge varieties in how a distribution may update itself.

Forking will only make matters messier.

Re:Uh-oh (4, Insightful)

MartinG (52587) | more than 9 years ago | (#10880662)

Firstly, the article is talking about Linux itself, not Linux distributions, which are another issue and may or may not have "massive problems" of their own.

Secondly, Linux (the kernel) already "forks" every time a new development version is opened, i.e. 2.1, 2.3, 2.5 etc. All this is saying is that 2.7 is about to open.

"Fork" is not a dirty word.

Re:Uh-oh (0)

Anonymous Coward | more than 9 years ago | (#10880792)

ooh, fork you!

Don't Panic! In large friendly letters... (1)

MsGeek (162936) | more than 9 years ago | (#10880848)

Folks...don't freak!

All that's going on is that the 2.7 development kernel is going to be starting. This is the kernel that is going to be the future 2.8 kernel.

This IS NOT a fork like "I'm going to take my ball and play elsewhere and fork the project." This is "OK, there's a compelling reason to fork off a development kernel, let's do it."

Breathe, folks. Breathe.

Re:Uh-oh (1)

m50d (797211) | more than 9 years ago | (#10880882)

I don't think that's what it is. I think it's saying that they'll fork off versions for really big patchsets they don't want to merge into the main kernel. So there may be 2.6-hard-realtime, 2.6-reiserfs-v4, 2.6-ip-v9, etc. trees. Which is something to be concerned about, although not necessarily a bad thing.

Re:Uh-oh (3, Interesting)

Epistax (544591) | more than 9 years ago | (#10880702)

Already there are massive problems with dependencies.

Tell me about it. When I try installing older programs I get compile errors because the libraries aren't backwards compatible, or ./configure won't be able to find the libraries because the version installed is too new.

I think at some point everyone needs to get together and say OK: everything from this point on will be compatible with everything from this point on. No more of this crap. One standard installation procedure for every distribution (instead of each distribution doing things its own way). If RPMs are so horrible, then stop releasing everything as RPMs!

Re:Uh-oh (0)

Anonymous Coward | more than 9 years ago | (#10880901)

I've not had any major dependency problems in years; if you're not comfortable building from source, use packages. I'm not trolling, but you clearly aren't a developer to be making comments like "Everything from this point on will be compatible with everything from this point on."

I put things where I want them. If I wanted things spread all over the filesystem I'm quite capable of doing that myself without RedHat and co standardizing it for me!

Huh? (2, Interesting)

laptop006 (37721) | more than 9 years ago | (#10880637)

Of course it will happen; whether it's now or later is a different matter. The problem this time is that several of the core kernel devs want to keep 2.6 under active feature development, and doing that in 2.7 instead means the changes don't get nearly as much testing.

But it will happen, and probably this year (or early next).

First post! (-1, Troll)

Anonymous Coward | more than 9 years ago | (#10880639)

That will happen when hell freezes over!

About time.... (4, Insightful)

ThisNukes4u (752508) | more than 9 years ago | (#10880640)

I say it's about time to get a real development branch going. I'm sick of 2.6 being less than optimally stable; it's time for 2.7 to take the untested patches.

Re:About time.... (3, Interesting)

Rysc (136391) | more than 9 years ago | (#10880689)

I second that. After having the nvidia driver broken four times I'm starting to get frustrated.

And, besides, we're approaching the time Linux kernels typically fork: a few versions into the series, the developers start to feel restricted by what they can't change in a stable kernel.

I just want to know how crap like this makes it to Slashdot. You'd think Taco would know better.

Re:About time.... (1)

tomstdenis (446163) | more than 9 years ago | (#10880908)

nvidia what?

I ran 2.4 and 2.6 on my P4 [which I sold...] with the nvidia drivers with little trouble. In fact my only problem was the 4kb stack issue.

I'm running 2.6 on my amd64 and the nvidia drivers work fine here [for 64 and 32 bit programs].

Maybe your distro sucks and isn't up to date?

Tom

Is it just me or... (1, Interesting)

A beautiful mind (821714) | more than 9 years ago | (#10880641)

...did someone really overuse "said" in that article? It got really annoying from the middle of the article on.

Re:Is it just me or... (-1, Flamebait)

Anonymous Coward | more than 9 years ago | (#10880730)

Why the fuck is this offtopic. The guy is fucking talking about the article fucktards. Show your fucking moderation badges up your fucking mushroom-infected fuckholes fucking mcmericans!

Yes, of course it will. (3, Insightful)

MartinG (52587) | more than 9 years ago | (#10880643)

The kernel will fork to a new 2.7 branch. This is exactly what happens every iteration of kernel development. This looks like a case of poor journalistic understanding of the usual Linux process and/or fear-inducing sensationalist headlines.

Even if this was a more hostile type of fork it wouldn't matter. Some amount of forking is healthy in open source.

Re:Yes, of course it will. (3, Informative)

rwebb (732790) | more than 9 years ago | (#10880709)

"Paul Krill, Infoworld" seems to specialize in breathless, high-anxiety stories about rather ordinary events.

InfoWorld [infoworld.com]
PC World [idg.com.au]

Re:Yes, of course it will. (2, Interesting)

Bralkein (685733) | more than 9 years ago | (#10880738)

Hey, but didn't the kernel developers say a while ago that there would not be stable and development branches of the kernel, and that it would be up to the distributions to look after final stability, or something like that?

In fact, I have just looked up the article [slashdot.org] , and it pretty much says just that. No 2.7 development branch plans! Did they just change their minds or what? Or are they genuinely doing a proper fork of the kernel? I am confused!

Re:Yes, of course it will. (2, Interesting)

nayigeta (792068) | more than 9 years ago | (#10880745)

I am of similar opinion.

The kernel has been forking since its early days - consider the even numbered versions running concurrently.

The key is - will there be enough following and momentum behind each fork to push each fork into the mainstream as a focus and concentration point.

In my mind, for each fork to be successful, it requires some single reliable individual to hand hold things - focal person roles pretty much like those of Alan, Marcelo, and Andrew.

We must not forget that there are also special Linux kernels forked for varying purposes - like firewall, small devices, high availability, etc.

All said, fragmentation - an unhealthy kind of forking - is definitely not desirable.

Re:Yes, of course it will. (0)

Anonymous Coward | more than 9 years ago | (#10880781)

Even if this was a more hostile type of fork it wouldn't matter. Some amount of forking is healthy in open source.

I'm not sure why, but if you read this last sentence without having imbibed enough coffee it sounds very, very dirty.

I'd Like to Run Linux -- Just No Time (-1, Offtopic)

mfh (56) | more than 9 years ago | (#10880646)

Accommodating large patch sets seems to be a natural progression for Linux. Personally, as a person stuck with XP because I just don't have time to learn everything about Linux needed to install it and keep it running, I would like to see an open source solution for Linux that is completely plug and play. To me, the amount of time required to know Linux is the only thing keeping me away. I imagine a future where I can download a copy of Linux and it would install on my system without any configuration, and every option would be through an option menu, like our Slashdot prefs. If this could be a reality today, I would drop XP in a heartbeat.

Re:I'd Like to Run Linux -- Just No Time (-1, Offtopic)

Cheeze (12756) | more than 9 years ago | (#10880657)

you should try out knoppix [knoppix.org]. Lets you run from a CD, and will detect almost all normal hardware.

Re:I'd Like to Run Linux -- Just No Time (1)

nvivo (739176) | more than 9 years ago | (#10880867)

No, that's not it. I understand what he is saying. Although I'm mostly a Windows user, I've built LFS a couple of times, and played with Slackware and Gentoo for a long time.

We all know distributions are very "smart" at detecting hardware and auto-configuring the system. But it's far from what Windows provides for the end user.

Take Knoppix for example... Run it on your computer today and it detects everything. Buy a new PCI card that doesn't have its drivers already compiled on the CD and you are screwed. You have 2 choices: try to understand how modules and the kernel work and how to recompile and load them by hand, or wait until the next version of Knoppix that MAYBE has the new drivers.

Linux still has this problem: download Mandrake and never step on the grass, follow all the instructions so you never hurt yourself, or take a full month of FAQs and a great tour of TLDP to understand how to make that sweet little camera you bought work with your distribution that doesn't come with gphoto precompiled.

Same problem for apps. Wait until your distribution provides a precompiled package, or learn to compile things (which is very frustrating for those who don't even know how a programming language works).

I know distributions like Gentoo or even Knoppix are doing a great job, but it's not what most "Windows power users" want. They can learn how to compile stuff and make it work.

But when all you want is to listen to music or watch a movie, and you discover that your version of "XXX" wasn't compiled with "--with-lib-yyy" and you must manually recompile the thing just because your distribution doesn't support "lib-yyy" by default... Well, you must agree that "download big codec pack -> click click click -> everything works" is a much better option.

Re:I'd Like to Run Linux -- Just No Time (1)

Rysc (136391) | more than 9 years ago | (#10880669)

You can get this today, with the possible exception of your "one big option menu." A big menu with all options doesn't even exist in XP, since the control panel does not control everything.

What you don't explain to my satisfaction is what accepting large patch sets into the Linux kernel has to do with easy Linux configuration.

Changes to the Linux kernel rarely require the user, or even the sysadmin, to learn anything.

Re:I'd Like to Run Linux -- Just No Time (1)

Slayk (691976) | more than 9 years ago | (#10880744)

Actually, you can get the "one big option menu" in SuSE's YaST. I've recently had experience installing that in a lab setting, and it was fairly mindless to install/set up (do you want to format? Yes. Do you want to install? Yep. *hour later* Okay, we're done, here's a big menu that configures everything from now on.)

I personally wouldn't run it (prefer the hands-on touch of Slackware on my own machines), but for use in a lab or for a new user it's quite nice.

Re:I'd Like to Run Linux -- Just No Time (2, Insightful)

MartinG (52587) | more than 9 years ago | (#10880699)

I imagine a future where I can download a copy of Linux and it would install on my system without any configuration

erm.. when did you last try installing linux, and which distro did you use?

I have recently installed ubuntu and fedora 3 on hardware ranging from a fairly old PII 400 with matrox gfx and scsi to an amd64 3000 with radeon 9200 gfx and serial ata, to an ibm thinkpad r40e.

All of these installed with almost no effort and Just Worked. (apart from power management on the laptop which took about 30 mins of googling to find a solution)

I even had hardware accelerated gfx on _all_ of the above machines with no extra configuration of drivers to download or install.

Really, if you want "easy to install and get running" give something like ubuntu or fedora a try. You might be pleasantly surprised.

Re:I'd Like to Run Linux -- Just No Time (0)

Anonymous Coward | more than 9 years ago | (#10880769)

erm.. when did you last try installing linux, and which distro did you use?

I've never tried, but I've been looking into it lately. It doesn't appear to be straightforward enough yet. Maybe I need to put a weekend aside to do it...

I don't like Microsoft's money grubbing attitudes towards users. I don't like the fact that an OS is going to run me $300 CAD.

Really, if you want "easy to install and get running" give something like ubuntu or fedora a try. You might be pleasantly surprised.

Thanks for the input. I'll look into both of these.

Re: Run Windows? -- Just No Time (3, Funny)

Anonymous Coward | more than 9 years ago | (#10880710)

As a person with Linux, I just don't have time to know everything about Windows to install it and keep it running. To me, the amount of time required to know Windows is only one of the things keeping me away. On top of this is all the spyware, viruses, constant severe security flaws, instability and slowdowns that just don't make it a viable system.

I imagine a future where I can buy a copy of Windows and it would work just like Linux. If this could be a reality today, I would maybe consider Windows for some non-technical people.

MOD PARENT UP (0)

Anonymous Coward | more than 9 years ago | (#10880932)

+5 do your own tech support from now on

Re:I'd Like to Run Linux -- Just No Time (5, Insightful)

asciiRider (154712) | more than 9 years ago | (#10880717)

Why is it that every Windows XP user thinks the goal of the Linux community is to convince Windows users to make the switch?

Dude - just stick with Winblows. You have no time to "know linux", as you put it, so just stick with what you know. You can post on Slashdot either way.

Please, developers, don't dumb Linux apps/distros down so much that it looks and feels like Windows.

Re:I'd Like to Run Linux -- Just No Time (0)

Anonymous Coward | more than 9 years ago | (#10880751)

Of course ... Windoz is much too difficult to use.

Re:I'd Like to Run Linux -- Just No Time (1)

empaler (130732) | more than 9 years ago | (#10880771)

Why is it that every Windows XP user thinks the goal of the Linux community is to convince Windows users to make the switch?

Try looking at all the other replies to the GP's post, and you have your answer right there.

Re:I'd Like to Run Linux -- Just No Time (1)

rduke15 (721841) | more than 9 years ago | (#10880760)

as a person stuck with XP because I just don't have time to know everything about Linux to install it and keep it running

Do you feel that with XP, you just install it and that's it?

Even though I'm quite experienced with Windows systems and with XP, it seems to always need at least about a day of clicking around to get something usable out of it. In fact, when setting up XP for clients, I do bill about a day to set up all the needed applications and do all the required configuration.

I'm not saying Desktop Linux systems are any better. I don't know, I only use Linux on servers, and have no real experience with desktop Linux systems.

For servers, I do feel them to be much easier to set up and configure than Windows servers. If I forget where some option is, I just grep the files in /etc with some relevant word. In Windows, I have to Google for much longer until I find the correct checkbox in some obscure sub-menu of one of the numerous control panels.

Re:I'd Like to Run Linux -- Just No Time (0)

Anonymous Coward | more than 9 years ago | (#10880820)

Do you feel that with XP, you just install it and that's it?

Pretty much. There's also the fact I'm familiar with XP and the Windows way of doing things. I don't like most of the features of XP, and the ones I like could be more intuitive.

In fact, when setting up XP for clients, I do bill about a day to set up all the needed applications and do all the required configuration.

I know a guy that does all my setup for me. $20-30 flat no matter what the problem. It's pretty cool actually to just drop my tower off and pick it up with everything done.

Maybe I should have them set up a copy of Linux for me!

Re:I'd Like to Run Linux -- Just No Time (2, Informative)

geg81 (816215) | more than 9 years ago | (#10880881)

I imagine a future where I can download a copy of Linux and it would install on my system without any configuration and every option would be through an option menu, like our Slashdot prefs. If this could be a reality today, I would drop XP in a heartbeat.

Install SuSE, RedHat, or Ubuntu: they are easier to install than Windows XP and come with tons of applications. They even come with excellent printed documentation in case you do need to look something up.

Even easier, buy a PC with Linux pre-installed: you just plug it in and it works.

Utter bunk (5, Informative)

Rysc (136391) | more than 9 years ago | (#10880648)

The Linux kernel forks all the time. 2.5 was a fork of 2.4 when big patches couldn't be merged otherwise. This is all terribly normal; the article was obviously written by an uninformed outsider. 2.6 will fork into 2.7, which most people won't use while big changes are made, and eventually 2.7 will become 2.8, and then for a while there will be one version. Until the next "fork," also known in Linux land as a "development version."

Re:Utter bunk (1)

mabhatter654 (561290) | more than 9 years ago | (#10880735)

From what I read, they want to "fork" the next kernel into separate specialties... i.e. one for desktops, servers, and embedded... perhaps RTOS as well. There are a lot of people out there with great ideas that don't necessarily get "approved" by Linus. It's nice to think they can go on as huge patch sets from the "official" kernels, but we're rapidly hitting the point where the "official" kernel has to make design decisions that end up specifically excluding other "niche" uses for the kernel... i.e. the heated discussions over RTOS. The designers can support one or the other well, but things good for one type of kernel make the other types suffer performance hits.

Frankly, if it's done right I wouldn't see it as a problem... even Linus would approve, because he'll admit all the specialties have grown too much for the central group to focus on. The only real problem is maintaining userspace application-level compatibility so that other projects... Apache, PHP, KDE, etc. don't have to "choose sides", or then we'll have problems.

Re:Utter bunk (1)

Rysc (136391) | more than 9 years ago | (#10880799)

Specialized markets are already handling this problem: they create a patchset which tracks the latest stable kernel and maintain it independently of the main tree. I see no big problem with this model. The Linux main tree will continue to work well enough for everyone in general, and groups with special (and conflicting) needs will patch it to their liking without upsetting anybody else.

Why fork 2.6? (3, Interesting)

demon_2k (586844) | more than 9 years ago | (#10880652)

I'm not sure if I like the idea. Developers have lives; that's why development is moving at the pace it is. And I like the pace development is at. Forking another kernel tree will split the developers apart and slow down the development of the 2.6 kernel.

What seems to me like a good idea is to modularize the code so that you can just plug things in and out. That way, if the kernel got forked it wouldn't be much work to remove and add support. I would also like to see projects dedicated to only certain parts of the kernel. For example, one group does networking and another does video, and maybe one that checks and approves the code. From then on the code would be pieced together in whatever way suits people, and because there's only one group working on a particular part of the kernel, there would be no repetition. "One size fits all", so to speak. One "driver" or piece of code to support some hardware would work on all forks. Then each fork would be kind of like a distribution of pieced-together code.

Re:Why fork 2.6? (0)

Anonymous Coward | more than 9 years ago | (#10880684)

Congratulations, you have been trolled by the article!

For once, this is a case where RTFA is exactly what you shouldn't do, because the article is steaming goat tripe mixed with its own dung.

Guess what: 2.1 was a fork of 2.0, 2.3 was a fork of 2.2, 2.5 was a fork of 2.4, and 2.7 will be a fork of 2.6, because that's the way things are done around here, since before even 2.0.

So, stand down, chill out, or whatever

Re:Why fork 2.6? (1)

demon_2k (586844) | more than 9 years ago | (#10880716)

Don't you think I know that? I just don't think that 2.6 is ready yet. Each time a new kernel tree is started, it gets all the developers. Then you have to worry about shit like backporting because there's no point writing the code again!

My point is to modify the way the kernel is structured so it's more modular, so that code can be used in either kernel tree without modifications. None of this backport shit and then fixing bugs because it's not compatible with the older structure.

Re:Why fork 2.6? (1)

NoMercy (105420) | more than 9 years ago | (#10880836)

Why fork it? Because people won't upgrade from 2.4 to 2.6 until they can see 2.6 being rock solid. Really, I think the motivation behind actively developing 2.6 for a long time was driven by geeks and their ideals, as is the usual way, instead of any wish to help industry, which refrains from using anything that might be even slightly unstable.

New linux development process (5, Insightful)

Anonymous Coward | more than 9 years ago | (#10880653)

It strains credulity to call the 2.7 linux kernel a "fork" of linux. Every new development version of linux always starts out by forking the old stable kernel. This is how linux 1.3, 2.1, 2.3, and 2.5 all started. It is quite irresponsible for a journalist to proclaim all this doom and gloom over what is in fact a normal development fork in a proven development process.

In fact, out of all the news articles out there about linux 2.7, it seems (not that this surprises me) that slashdot went out of its way to pick one laden with the most possible negative FUD and the least possible useful information about what really is news with 2.7. A much better writeup can be found at LWN [lwn.net] . In summary, the present situation is:

  • The -mm tree of Andrew Morton is now the Linux development kernel, and the 2.6 tree of Linus is now the stable kernel. This represents a role reversal from what people were expecting last year when Andrew Morton was named 2.6 maintainer.
  • Andrew Morton is managing the -mm tree very well. Unlike all the previous development kernels, the -mm tree is audited well enough that it is possible to remove patches that prove to have no benefit (and this does often happen). Bitkeeper is to some degree contributing to this flexibility, although not every kernel developer uses it.
  • The development process is going so smoothly that there may not need to be a 2.7 at all; for the first time in linux development history the developers are able to make useful improvements to linux while keeping it somewhat stable. If there is a 2.7 at all, it will be used for major experimental forks and there is no guarantee that the result will be adopted for 2.8.
There is a story here, but you could easily be forgiven for missing it if you follow the link. The story is that linux development has changed, it is better than ever, and if (not when) 2.7 shows up, it's not gonna be the 2.7 that you're used to seeing.

Re:New linux development process (0, Flamebait)

m50d (797211) | more than 9 years ago | (#10880900)

The development process is going so smoothly that there may not need to be a 2.7 at all; for the first time in linux development history the developers are able to make useful improvements to linux while keeping it somewhat stable.

I call bullshit on this. When there's a 2.6 that is actually usable on my system (crashes less than once every 30 mins) I'll believe that. But I don't think it's going to happen until 2.7 splits off.

Idiot. (5, Insightful)

lostlogic (831646) | more than 9 years ago | (#10880656)

The writer of that article is an idiot. The Linux kernel forks after every major release in order to accommodate large patches. How did we get to where we are today? Linux 2.4 forked into 2.4 and 2.5 to allow the major scheduler and other changes to be made on a non-production branch. Then 2.5 became 2.6, which was the new stable branch. Currently there are 4 maintained stable branches that I am aware of (2.0, 2.2, 2.4, and 2.6); having a new unstable branch is just the same path that Linux has been following for years. That writer needs to STFU and get a brain.

Re:Idiot. (0)

Anonymous Coward | more than 9 years ago | (#10880705)

"The linux kernel forks after every major release in order to accomodate large patches. How did we get to where we are today? Linux-2.4 forked into 2.4 and 2.5 to allow the major scheduler and other changes to be made on a non-production branch."

I would use "branched" instead of forked. Otherwise your explanation is spot on.

i don't get it (3, Insightful)

Anonymous Coward | more than 9 years ago | (#10880660)

I think that either the writers of this article or I am not getting something here.

A couple of months ago there was a general upheaval over the fact that Torvalds et al. had decided not to fork a development tree off of 2.6.8, but rather do feature development in the main kernel tree. The message of the article (brushing aside the compiling-applications-for-each-kernel FUD) seems to be that they have made up their minds and will fork off an unstable kernel branch anyway.

What am I missing?

Pretty baseless article. (5, Interesting)

mindstrm (20013) | more than 9 years ago | (#10880661)

No details, no important names.. no nothing.

There are plenty of forked kernel trees out there. Most continually merge in changes from Linus' tree, though.

A fork doesn't matter. What matters is what it represents. If there is enough popularity that the Linux community ends up using incompatible forks, then yes, we have a problem.. but forking in no way necessarily leads to this.

As always, the available kernels in wide use will reflect what people actually want to use.

Even if it was true, it's still a non-event (0)

Anonymous Coward | more than 9 years ago | (#10880714)

It's open source. If someone wants to fork the kernel and maintain it, they can. If they maintain their fork in a way that pleases a set of end users, it will succeed. If it doesn't, it will just be another alternate patch branch like Alan Cox (used to?) produces.

Diversity is good. Nothing to be up in arms about at all.

Cheers,

Re:Pretty baseless article. (0)

Anonymous Coward | more than 9 years ago | (#10880776)

Not only baseless but purely moronic.

OMG! Linux has had 26 forks in its kernel already!

An example of a journalist who got the tech job and has no idea what he/she is writing about.

The 2.7 fork is a great thing! I was waiting for them to stop mucking around in 2.6 so it can stabilize!

Hopefully this will help Patrick decide that 2.6 is ready for the Slackware 10.1 release that is due in a couple of months.

Beginning of FreeLinux, OpenLinux and NetLinux? (2, Funny)

NoSuchGuy (308510) | more than 9 years ago | (#10880671)


Is this the beginning of FreeLinux, OpenLinux and NetLinux?

What about SCOLinux or MSLinux?

Not yet (1)

multipartmixed (163409) | more than 9 years ago | (#10880807)

We still haven't had Linux 4.3/Reno

Re:Not yet (1)

wed128 (722152) | more than 9 years ago | (#10880925)

And besides, Netcraft hasn't confirmed...

greatest weakness. (1)

ikejam (821818) | more than 9 years ago | (#10880681)

As the author portrays it, or greatest strength (as I'd like to think)? Isn't the freedom (as in no constraints) to fork inherently a plus of open source? P.S. I hope 'fork' means what I think it means. :)

Run, Chicken Little, Run! (4, Funny)

Minwee (522556) | more than 9 years ago | (#10880704)

Oh no! If this sort of thing is allowed to happen then before long we will start seeing separate kernel forks for people like Alan Cox, Andrea Arcangeli and Hans Reiser. It could even lead to every major Linux distribution applying their own patches to their own forked kernels.

Then where would we be?

Re:Run, Chicken Little, Run! (2, Insightful)

Anonymous Coward | more than 9 years ago | (#10880798)

Dependency hell?

My favorite quote from the article..... (1)

nullset (39850) | more than 9 years ago | (#10880718)

"Each version of the kernel requires applications to be compiled specifically for it. "

I'm sorry but that's utter bullshite[sic]. I've never had to recompile applications because I upgraded the kernel...... have you?

--buddy

Re:My favorite quote from the article..... (1)

darkmeridian (119044) | more than 9 years ago | (#10880784)

"Each version of the kernel requires applications to be compiled specifically for it. "


I'm sorry but that's utter bullshite[sic]. I've never had to recompile applications because I upgraded the kernel...... have you?


Yes, actually. The nVidia drivers (which broke at 2.6) need to be recompiled every time you change the kernel. Wireless driver support under ndiswrapper has to be recompiled each time as well. Yes, these are drivers and not applications, but that doesn't make it less important, does it? This is not all apps, but some important parts of the system do need to be recompiled to work with each new kernel.

Re:My favorite quote from the article..... (1, Insightful)

Bombcar (16057) | more than 9 years ago | (#10880887)

Yes, but drivers are part of the kernel, and so saying you need to recompile the kernel every time you recompile the kernel doesn't say much.

Now the binary parts of those modules mean that the kernel can't autorecompile them for you, but that's not the kernel's fault.

And in fact, the 2.4->2.6 kernel change did require a new version of modutils, and also, you could get improvements to some applications if you recompiled.

Re:My favorite quote from the article..... (1)

m50d (797211) | more than 9 years ago | (#10880914)

I have. Not all my applications, but I do have to recompile svgalib and sometimes some applications which use it. And don't get me started on the cdrtools shenanigans.

A fork (0)

Anonymous Coward | more than 9 years ago | (#10880721)

Isn't a fork a little drastic just to get ALSA working again?

Letter to Editor... (4, Informative)

runswithd6s (65165) | more than 9 years ago | (#10880722)

Here's a copy of a letter I wrote to the editor of the online magazine that this article was posted on.
From: Chad Walstrom

Subject: Comment: Is Linux about to fork?
Date: Fri, 19 Nov 2004 19:43:15 -0600
To: kierenm@techworld.com

I'm writing to comment on the article "Is Linux about to fork?" written by Paul Krill, posted on the 18th of November, 2004. Paul really doesn't do his homework, does he? Nor does he understand the development process of the Linux kernel. Linux has ONLY been around for ten years, with a well-documented idea behind the "fork" he is speaking about.

Currently, the Linux kernel is at version 2.6.9, with 2.6.10 peeking around the corner. This is the STABLE kernel, the one receiving most of the attention over the last year or so. The kernel eventually always forks to a DEVELOPMENT branch, in this case the 2.7 branch. Is Linux about to fork? Yes! Does this have any correlation to the Unix idea of forking? No!

Kernel-Trap.com covered the recent possible changes to the Linux Development Model in http://kerneltrap.org/node/view/3513. In general, forks are good things in the Free Software environment; it's part of life.

For a straight FAQ Q&A style of answering the question: http://www.tldp.org/FAQ/Linux-FAQ/kernel.html#linux-versioning

Q: How Does Linux Kernel Versioning Work?

A: At any given time, there are several "stable" versions of Linux, and one "development" version. Unlike most proprietary software, older stable versions continue to be supported for as long as there is interest, which is why multiple versions exist.

Linux version numbers follow a longstanding tradition. Each version has three numbers, i.e., X.Y.Z. The "X" is only incremented when a really significant change happens, one that makes software written for one version no longer operate correctly on the other. This happens very rarely -- in Linux's history it has happened exactly once.

The "Y" tells you which development "series" you are in. A stable kernel will always have an even number in this position, while a development kernel will always have an odd number.

The "Z" specifies which exact version of the kernel you have, and it is incremented on every release.

The current stable series is 2.4.x, and the current development series is 2.5.x. However, many people continue to run 2.2.x and even 2.0.x kernels, and they also continue to receive bugfixes. The development series is the code that the Linux developers are actively working on, which is always available for public viewing, testing, and even use, although production use is not recommended! This is part of the "open source development" method.

Eventually, the 2.5.x development series will be "sprinkled with holy penguin pee" and become the 2.6.0 kernel and a new stable series will then be established, and a 2.7.x development series begun. Or, if any really major changes happen, it might become 3.0.0 instead, and a 3.1.x series begun.
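
For the technically inclined, the same X.Y.Z scheme is visible to code: the kernel headers encode the version as a single number, which is how out-of-tree sources adapt to the series they are built against. A minimal sketch (the macros are the standard kernel ones; the branches are placeholders):

    #include <linux/version.h>

    #if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,0)
    /* build against the 2.6-series interfaces */
    #else
    /* fall back to the 2.4-series interfaces */
    #endif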

Re:Letter to Editor... (2, Funny)

Anonymous Coward | more than 9 years ago | (#10880819)

Eventually, the 2.5.x development series will be "sprinkled with holy penguin pee" and become the 2.6.0 kernel

And THAT, ladies and gentlemen, is what separates Linux kernel documentation from the standard IBM documentation.

Re:Letter to Editor... (1)

genghis khant (831226) | more than 9 years ago | (#10880898)

I think your post is just about the last word on this. Lucid enough even for mainstream hacks.

It is Linus's fault. (4, Insightful)

IGnatius T Foobar (4328) | more than 9 years ago | (#10880742)

Hold on, take this into consideration before you hit that "flamebait" button. I'm responsible for a large number of Linux systems at a hosting center, and this is our single biggest complaint:

There needs to be a consistent driver API across each major version of the kernel.

A driver compiled for 2.6.1 should work, in its binary form, on 2.6.2, 2.6.3, and 2.6.99. If Linus wants to change the API, he should wait until 2.7/2.8 to do so.

The current situation is completely ridiculous. Anything which requires talking to the kernel (mainly drivers, but there are other things) needs either driver source code (watch your Windows people laugh at you when you tell them that) or half a dozen different modules compiled for the most popular Linux distributions. These days, that usually means you're going to get a RHEL version, and possibly nothing else. What happens when you're competent enough to maintain Fedora or Debian, but you don't have driver binaries? (Yeah I know, White Box or Scientific, but that's not the point.)

In fact, I recently had to ditch Linux for a project which required four different third-party add-ons, because I couldn't find a Linux distribution common to those supported by all four. We had to buy a Sun machine and use Solaris, because Sun has the common sense to keep a consistent driver API across each major version.

Yes, I've heard all the noise. Linus and others say that a stable driver API encourages IHVs to release binary-only drivers. So what? They're going to release binary-only drivers anyway. Others will simply avoid supporting Linux at all. LSB is going to make distributing userland software for Linux a lot easier, but until Linus grows up and stabilizes the driver API, anything which requires talking to the kernel is still stuck in the bad old days of the 1980s-1990s. Come on people, it's 2004 and it's not too much to expect to be able to buy a piece of hardware that says "Drivers supplied for Linux 2.6" and expect to be able to use those drivers.
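
For a concrete taste of the source-level side of this drift (the binary-compatibility complaint above is a separate, stricter problem), here is a sketch of the kind of version check out-of-tree drivers end up carrying; the "debug" parameter is invented for illustration:

    #include <linux/version.h>
    #include <linux/module.h>

    static int debug = 0;

    #if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,0)
    module_param(debug, int, 0444);      /* 2.6-style parameter declaration */
    #else
    MODULE_PARM(debug, "i");             /* 2.4-style parameter declaration */
    #endif
    MODULE_PARM_DESC(debug, "enable debug output");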

Re:It is Linus's fault. (0)

Anonymous Coward | more than 9 years ago | (#10880859)

That would be nice...

What would be even nicer is if hardware manufacturers would actually test their drivers on the hardware before it goes out the door. And by "test", I mean a quick check to see if the driver CD shipped actually contains drivers for the hardware it is shipping with. Certain major motherboard manufacturers are particularly bad at this, and I'd be happier if they just stated they didn't support Linux at all. Then we could either ignore the hardware or at least be aware that we'd have to write our own drivers.

We don't really need binary drivers, because recompiling your kernel/drivers isn't really that big of a deal, folks...but it would be nice if the manufacturers started us off with some source.

Re:It is Linus's fault. (3, Insightful)

geg81 (816215) | more than 9 years ago | (#10880866)

A driver compiled for 2.6.1 should work, in its binary form, on 2.6.2, 2.6.3, and 2.6.99. If Linus wants to change the API, he should wait until 2.7/2.8 to do so.

That's deliberate...

In fact, I recently had to ditch Linux for a project which required four different third-party add-ons, because I couldn't find a Linux distribution common to those supported by all four. We had to buy a Sun machine and use Solaris, because Sun has the common sense to keep a consistent driver API across each major version.

... and that's the reason why. If it were easy to use binary drivers, more and more drivers would become binary. For making Linux distributions easier to manage, it would be nice if binary drivers were easier to manage and distribute for Linux. But the fact that that would make distribution of binary-only drivers easier is considered a disadvantage by many.

Overall, please either buy from open-source-friendly hardware vendors, or pay the price for a proprietary operating system. You have chosen the second option, so deal with it.

Re:It is Linus's fault. (1)

grumbel (592662) | more than 9 years ago | (#10880930)

### Overall, please either buy from open-source-friendly hardware vendors, or pay the price for a proprietary operating system.

The problem is that even IF you buy from an OSS-friendly hardware vendor, it's still a PITA to get a driver running if it happens to be developed outside the Linux kernel. A little example: my graphics tablet is perfectly supported under Linux, by both the kernel and XFree86. However, the driver that came with both the kernel and XFree86 had a bug, so I had to download and recompile around 70MB of source code just to get a few kilobytes of binary driver that would run fine with my version of the kernel and XFree86. It cost me almost two days to figure out and get it up & running completely; this really shouldn't be acceptable these days and is completely unnecessary.

Weird article. Why did this make it to slashdot? (0)

Anonymous Coward | more than 9 years ago | (#10880748)

A 2.7 kernel fork is something that is expected to happen and is natural for Linux.

Oh, no!!

We don't want to happen with 2.6 what happened with 2.0 and 2.4 with the 2.1 and 2.5 kernel forks!!

Now for any Windows people that don't have much of a clue about how Linux works, here it goes:

You can tell what kernel you need to use by its numbering scheme. It goes like this:
Linux X.X.XX

The first X is the current generation of kernels. The current generation is 2.

The second X is a series number. Even numbers indicate a "stable" kernel. Odd numbers indicate a "development" kernel.
2.1, 2.3 and 2.5 were all development versions with lots of changes through their lifetimes.

2.0 2.2 2.4 2.6 are stable kernels and they are mostly static except for performance tweaks, bug fixes, and driver support.

The 2.7 kernel will be the new development kernel. By forking it, the kernel developers would indicate that they have an idea of what they want from 2.8, are willing to start another stage of radical/fast development, and consider the 2.6 series kernel to be mostly unchanging from here on out.

The fork would be a good thing.

The last XX indicates a revision within that series of kernel and does not indicate anything beyond that.

2.6.9 is a later version of 2.6 than 2.6.2, that's all.
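
A toy illustration of the even/odd convention described above, as an ordinary userspace program that inspects the running kernel (error handling omitted):

    #include <stdio.h>
    #include <sys/utsname.h>

    int main(void)
    {
            struct utsname u;
            int x, y, z;

            uname(&u);
            if (sscanf(u.release, "%d.%d.%d", &x, &y, &z) == 3)
                    printf("%d.%d.%d is a %s-series kernel\n", x, y, z,
                           (y % 2 == 0) ? "stable" : "development");
            return 0;
    }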

A large patch set? (-1)

SkunkPussy (85271) | more than 9 years ago | (#10880750)

WTF is a large patch set?

nicorette for fat people?

Strength of Linux (2, Interesting)

b0lt (729408) | more than 9 years ago | (#10880822)

I believe that the strength of Linux comes from the fact that it has a central core, which is compatible with basically everything across distros. This makes for faster development, and combined with a charismatic leader (Go Linus! :o) makes a very strong platform for an OS. These are my personal beliefs, so feel free to flame me if you disagree :)

-b0lt

Irresponsible (2, Interesting)

Craig Maloney (1104) | more than 9 years ago | (#10880823)

I'd imagine any article that talks about Linux Forking would have at the very least grabbed one or two quotes from Linus before going to print. Linus is only mentioned once in the article, and that is a passing reference as the owner of the Linux Kernel. And while Andrew Morton may have mentioned what was going on in the interview, the reporter made sure it didn't show up in the article. Irresponsible.

Is it my imagination... (1)

wa1hco (37574) | more than 9 years ago | (#10880832)

or is that article completely incoherent? It doesn't seem to match what Andrew Morton said.

Forking is a bad idea??!??! (0)

Anonymous Coward | more than 9 years ago | (#10880834)

As far as I understand, Xorg was a fork too, right? I haven't seen anybody complaining lately.

kernel panic (2, Informative)

Doc Ruby (173196) | more than 9 years ago | (#10880837)

The reporter says that some developers have made big changes, in different directions, to their copies of the kernel source that Linus won't accommodate in a single encompassing kernel. Like desktop and server versions. So he'll have to fork it. Why forking the kernel is the solution, rather than just the magic "#ifdef DESKTOP_KERNEL_" that keeps all the manageability of a single kernel source version, is not addressed. Combined with the rest of the bad logic and information reported in the article, this is just journalistic kernel panic, and probably not a real issue for the kernel. At least the fork/divergent-execution scenarios are a valid issue for maintainers. But there are so many ways to manage source control that punting with a fork seems desperate, and unlikely.

Re:kernel panic (1)

m50d (797211) | more than 9 years ago | (#10880928)

Have you ever tried to maintain code with the amount of ifdefs that Linux may need to use? A code-folding IDE helps somewhat, but it's still horrible. When more than about 1/3 of the source is ifdefed, it's time to fork.
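
For reference, the style under discussion already exists in the tree as build-time configuration rather than source forks; a rough sketch (CONFIG_PREEMPT is a real 2.6 option, CONFIG_DESKTOP_KERNEL is invented to mirror the suggestion above):

    static void tune_for_workload(void)
    {
    #ifdef CONFIG_PREEMPT                 /* real option: favour latency */
            /* desktop-ish tuning would go here */
    #else
            /* server-ish tuning would go here */
    #endif
    #ifdef CONFIG_DESKTOP_KERNEL          /* hypothetical option from the posts above */
            /* desktop-only behaviour would go here */
    #endif
    }

The point above stands, though: once a large fraction of the source is conditional, a separate tree can be easier to maintain than the ifdef forest.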

My response: (3, Funny)

Performaman (735106) | more than 9 years ago | (#10880841)

What the fork?

We'll just have to hope then :) (1)

espenfjo (690929) | more than 9 years ago | (#10880845)

As many of you might have noticed, I started a thread on LKML a couple of weeks ago (My thoughts on the "new development model"), so I am really hoping such a fork will happen. :)

Not at all like Unix (1, Informative)

Anonymous Coward | more than 9 years ago | (#10880852)

"In a worrying parallel to the issue that stopped Unix becoming a mass-market product in the 1980s - leaving the field clear for Microsoft."

As long as everything stays open source, this won't be a problem. When you get an application now, it is likely a binary for a particular distro. I don't see that changing. You will still run urpmi or apt-get, and for most people, things won't change. Really, how would forking create a different situation than we currently have with GNOME/KDE?

Unix in the 80's was not open source and that makes all the difference.

News in disguise ... (2, Interesting)

foobsr (693224) | more than 9 years ago | (#10880865)

... [slashdot.org]

erm ...

"We all assume that the kernel is the kernel that is maintained by kernel.org and that Linux won't fork the way UNIX did..right? There's a great story at internetnews.com about the SuSe CTO taking issue with Red Hat backporting features of the 2.6 Kernel into its own version of the 2.4 kernel. "I think it's a mistake, I think it's a big mistake," he said. "It's a big mistake because of one reason, this work is not going to be supported by the open source community because it's not interesting anymore because everyone else is working on 2.6." My read on this is a thinly veiled attack on Red Hat for 'forking' the kernel. The article also give a bit of background on SuSe's recent decision to GPL their setup tool YAST, which they hope other distros will adopt too."

CC.

Is Mr. Krill some sort of AI? (4, Funny)

Maljin Jolt (746064) | more than 9 years ago | (#10880870)

I have read the article three times but still it looks to me like a random collection of irrelevant sentences unrelated to each other. Maybe it would make more sense if Paul Krill himself was written in lisp, or drank less if he's a real biological entity. This article looks like a random google cache copy and paste made in php.

Re:Is Mr. Krill some sort of AI? (1)

foobsr (693224) | more than 9 years ago | (#10880916)

if Paul Krill himself was written in lisp

Maybe a test case on the basis of a (poorly) semantically loaded web using PROLOG?

CC.

There's always GNU Mach (0)

Anonymous Coward | more than 9 years ago | (#10880897)

Use GNU Mach and STFU.