2.6 and 2.7 Release Management

An anonymous reader writes: "A recent discussion on the Linux kernel mailing list debated whether the upcoming 2.6 and 2.7 kernels should be released at the same time instead of first stabilizing the 2.6 'stable tree' then branching the 2.7 'development tree.' The theory behind the proposition is to keep "new" things from going into 2.6 once it is released, focusing instead only on making it stable. On the flip side of this argument is the possibility that with a 2.7 kernel in development, there will be too little focus on stabilizing the 2.6 kernel. The resulting debate makes for an interesting read."
  • this is silly (Score:3, Insightful)

    by edrugtrader ( 442064 ) on Friday July 12, 2002 @08:05PM (#3874458) Homepage
    there will always be a kernel in development and one being stabilized... it's a wash either way.

    i would recommend the stabilization of 2.6 before the branch of 2.7 (the initial argument), and i think the flip side is incorrect... just because 2.7 is 'in the works' doesn't mean that the 2.6 hackers are going to take a nap on their work
    • by brunes69 ( 86786 )

      A large part of the awful VM mess that 2.4 was in around 2.4.8 - 2.4.11 or so was due to the fact that a totally new VM was just kind of "thrown in" to the "stable" branch, probably mainly because there wasn't a 2.5 branch yet at that point (as I recall). This is the sort of thing that branching earlier would hopefully prevent. While the stable branch may not gain some of the "bells and whistles" it could have from keeping the branches together, at least a mess like that can hopefully be avoided.

      Then again, that's just my opinion :)

      • Actually that was more of a problem with the old VM not being stable enough and Linus taking a risk on a new one.
    • i would recommend the stabilization of 2.6 before the branch of 2.7

      But that makes no sense. Why would we have a 2.6 release if it is not yet stable? What happened in 2.5, then? Why was it changed to 2.6 if it was not stable?
      2.6.0 should be a STABLE release, right?
      • 2.6.0 should be a STABLE release, right?

        Yes, you are right, 2.6.0 should be a stable release. And hopefully it will at least be a stable release for all the people that developed and tested 2.5.x. But it is practically impossible to make a piece of software without bugs, so when 2.6.0 is released there will still be bugs to fix. When the version number changes from 2.5.x to 2.6.0 more people will start using it and consequently remaining bugs will be found faster.

        Of course I hope 2.6.0 will not be released with known bugs, and will be intensively tested before release. But that is the best we can hope for.
  • Keeping the kernel stable is much more important than adding more features. That's why most of us migrated to Linux in the first place.

    If I don't have to reboot in 6 months or beyond, then I'm happy with that.
    • You mean this is why most of us have migrated to FreeBSD in the second place :)
      FBSD has both concurrent stable and experimental development. The stable configuration includes kernel and userland source, and is very suitable for production environments where non-security-related change is almost never desired.
  • A Good Thing... (Score:4, Interesting)

    by phraktyl ( 92649 ) <wyattNO@SPAMdraggoo.com> on Friday July 12, 2002 @08:07PM (#3874470) Homepage Journal
    I would release them at the same time. Just as now, with 2.4 and 2.5, there are people who are very good at stabilizing current code, and people very good at developing new code. Some folks can't stand working on new things when the old need work, and vice versa.

    I see this as having two benefits. First, it will help with the "most things work pretty well, let's go ahead and release it" attitude. The 2.4 series has only recently gotten stable enough to reliably use in a production environment, and not everyone even agrees on that.

    Second, it will allow people to focus on what they are good at. The 2.6 series will mature much faster without new features being added in every release. Sure, there are bound to be a few gotchas, but if the focus is on stabilizing the code, they will be ironed out by the 2.6.3 or 2.6.4 release. At the same time, people will be adding to 2.7, which should mean much less time between stable kernel series releases.

    I'm all for it!

    --Wyatt

    • Also, it will probably mean that there will not be such a long time between major releases...
    • The Linux kernel needs a severe restructuring of its development model to stay afloat. Bad Things Will Happen (TM) if we keep on like we are now. Kernel differences between distributions are making it even worse. Never mind becoming a viable business/corporate solution; let's fix what's wrong before it's too late!
      • bah ... no it doesn't ... sure, some things could be done better - but when it comes to kernel development as a whole, it's already broken about every rule of 'proper software development' ... and it's doing OK so far.

        The difference between kernel distributions is not really a big deal. Maybe in a closed-source system it would be ... but as long as everything stays open, it's no big deal. I can grab the source to the latest Red Hat kernel and compile it on a Debian system. That's what makes Linux so great in the first place.
        • We had huge problems a while back, and they still pop up, due to this. We all of a sudden had to start dealing with some HUGE (multiple GB apiece) files. It was a nightmare determining which kernels had ext3 support in them, which distros/patchlevels had all the correct utils for ext3 support, whatever. And people had done exactly what you described: recompiled their own kernels with god-knows-what kind of configs, switched distros... Ugh.

          UnitedLinux is a step in the right direction, but something more granular is needed.
    • No, the two versions shouldn't be released at the same time. Close, perhaps, but not the same.

      It would probably be OK if, say, 2.6 were released no more than a week before 2.7, but I'm a bit dubious. OTOH, I don't know just how stable 2.5.99999 is.

      It seems to me that the plan used last time worked out well. Not perfect, but well enough that I wouldn't want to tinker with it. Release the new version as current, and then wait a bit for more problems to show up. "We won't find any more problems unless we get some new testers in, so release it now." (LT being paraphrased.)

      That was the right time to release it. Then the next round of debugging happened. Then the 2.5 branch was forked off.

      Perhaps, as an intermediate step, submissions for the 2.7 branch could be accepted, and placed in a queue for evaluation after 2.6 was released.
  • Isn't this a problem faced by most software projects? Stabilize the recent release (bug/feature fixes) or move the team to the next version/revision?

    I would recommend dividing the team up.
  • by Hrunting ( 2191 ) on Friday July 12, 2002 @08:11PM (#3874493) Homepage
    See, to me, when someone calls it a "stable release", that means it's already been stabilized. Sure, you're going to have the occasional bug fix here and there, but actual "stabilization" should've been done in the 2.5.99 range, i.e. the previous development branch. Once the stable tree is released, there shouldn't be a need to stabilize it, and branching the new development tree right then makes sense. There should not be any "development" per se in the stable release after that, only occasional maintenance.

    If the kernel maintainers would just grasp this one simple point, maybe this issue wouldn't be one, and maybe people wouldn't laugh at the .0 release of the kernel.
    • What are you talking about, "not stable"...?

      Linux doesn't crash. It can't; it's simply not possible... Slashdot told me so. They said that Linux crashing would defy the laws of physics, or something.
    • A bazillion more people test the kernel after it hits the .0 mark, so of course mistakes are going to show up.

      If you need stable kernels, you should get them from Linux vendors like Red Hat and SuSE. That's the way it has been for a long time.

      • Red Hat and SuSE and other distro vendors do patch their kernels, and sometimes these patches are for stability and security reasons, and are good to have. Sometimes they also royally fuck things up.

        A while back a patched RedHat kernel had a pretty bad bug in it that caused hard locks in X with ATi chipsets (2.4.9-something, I think). This particular bug was specific only to the RedHat kernel.

        So, maybe the distro vendors fix some bugs, but they also introduce bugs of their own. I've used both stock kernels and distro kernels in production environments and haven't noticed particularly more or fewer bugs in either type.
        • The difference really is not in the patches that they add, but in the testing that they do. Some of the stock kernels are very bad; they might not even compile, for example.

          You're probably right that there is not always a lot of difference between stock kernels and vendor kernels. But I always tell people to use only vendor kernels, because if those break, people can blame Red Hat or SuSE and don't hassle the developers.

          The post I was replying to was bellyaching about .0 releases, and thus falls under the "hassling developers" category. I'd be willing to bet that Red Hat and SuSE didn't ship the .0 version because they knew it wasn't trusted.

          Mandrake may have shipped with it... They like to live on the edge.

          But yes. You're right. There is nothing wrong with using stock kernels in production. I believe that Debian only uses stock kernels.


    • When tons of people start using 2.6.0, they will find new bugs. These bugs would also be in 2.7.0. Both kernel branches would need to be fixed BY HAND, increasing dev and testing time. I think it makes more sense for 2.7.0 to start from a "stable" stable 2.6.x.
      • What do you mean "by hand"? Surely whatever they use for source control allows changes to simply be migrated from one branch to another.
        • I don't think Linux has source control. You'd need Visual SourceSafe for that.
    • It's a catch-22 (Score:3, Interesting)

      Kernels don't get truly stable until you get thousands of people using them, but all those thousands of people aren't going to install a kernel until it's deemed a stable release.

      Release candidate kernels help alleviate this somewhat, but you can never really duplicate what happens when the bulk of normal users start using it on an everyday basis.
    • Stable refers to the interfaces more than to product stability.

      It's good when a "stable" kernel doesn't crash, but that's not actually what the word means. Look at Mozilla: it was stable, in that I often had 20 windows open for weeks at a time in Win2k (yeah, Win2k with an uptime of weeks, but really, it happened...) and Mozilla wouldn't crash once. But they didn't call it 1.0 until they stabilized the interfaces, so you could use plugins and addons without having to upgrade them for every minor update.

      It just happens that when you're adding functionality you often break backwards compatibility (hence, unstable interfaces) and make things crash (unstable in the other sense.)

      It's like 'free': it's got multiple meanings. Linux 2.(even) releases are stable in the sense of unchanging. Releases that don't crash are stable in the meaning we normally use.
    • If the kernel maintainers would just grasp this one simple point, maybe this issue wouldn't be one, and maybe people wouldn't laugh at the .0 release of the kernel.
      The problem is that "stable" can't be added as a feature; it has to be bought with testing (ergo bugfixing), and you run into a chicken-and-egg problem trying to get people to test stuff that isn't "stable" yet. Each new level of supposed trustworthiness (2.3.x -> 2.4.0pre -> 2.4.x -> distro releases) brings orders of magnitude more users, and inevitably uncovers oodles of bugs.
      "Stable" refers to the halfway-frozen API in even-numbered releases; it doesn't mean 2.4.x is not expected to crash. Stability is empirical.
  • The backport concept (Score:2, Interesting)

    by jhines ( 82154 )
    From the BSD world there is the concept of the "backport", which is where a feature in the development kernel series is ported back to a previous stable kernel series.

    Great for bug fixes, and other things in the middle ground.

    Certainly, if there is interest, a set of patches to a stable kernel, or even another -someone kernel series, can be developed. If these turn out to be in demand, and stable enough, they can be officially included.

    • This "backport concept" is what made early 2.4 kernal revisions a total nightmare!
    • Backporting has been a stability problem and a source of great debate in the community.

      First: the kernel is pretty damn stable. The instability people talk about is usually extreme cases or performance that is less than optimal. I have yet to see a 2.4.8 or later kernel lock up or panic on my typical hardware, and I've got 3 machines running 24x7. I don't count the first 7 cuts, because I was doing development on them and going to great pains to track them; collectively, they should have been maybe two of the last 2.3.99 releases.

      Linux is large. There are hundreds of regular contributors, as well as a number of companies doing stuff. 2.3 lasted way too long, and so everybody wanted to get their stuff into 2.4, because 2.6 could be 2 years away after 2.4 came out. That was a mistake. The kernel underwent a lot of change during 2.3.99 and people were still adding tons of stuff. There were bitter fights about what should go in and when. You'd hate to be SGI: spend hundreds of thousands of dollars (I'm guessing that's the man-hour cost) on porting, not make the cut, and then wait 2 more years.

      At the same time, the releases need to be tempered. People say 2.0.40 is a bad sign because it needed 40 patches. We could be on kernel 4.2 now if we spaced the releases closer together, and that just creates more confusion in a lot of ways as well. 2.4 took way too long and too much happened. 2.6 will be much better, and the community needs to see that and get used to 6 to 9 to 18 months for a major release. Companies need to understand that and make their investments accordingly. It's difficult, though, because there are so many independent people developing stuff they are planning to get in, and it can't all go to Linus when he says he's getting ready to lock it down.

      I think, if anything, maybe 2.7 should branch off before 2.6 is cut. Linus and the team can go through the big items, determine where and when, and then make some timelines accordingly. You give people working on the bigger things a place to put their stuff. Then the final 2.5 releases wouldn't be as rushed.

      • In the case of BSD, many of the changes are for other than kernel items, which doesn't apply to Linux.

        For the kernel itself, it would have to be fairly minor changes, or possibly stubs or future-compatibility reasons.

        I certainly agree that the average administrator needs to be able to rely on stable being stable.

        Again, the most common need would be a bug that affected both stable and development versions.
  • by Anonymous Coward on Friday July 12, 2002 @08:18PM (#3874525)
    The Linux kernel... besides stability... what sort of things do they want to add/improve?

    better networking? better I/O performance?

    what about multiple CPU support?

    The most important thing for me would be resource management features... such as being able to allocate how much CPU, memory, or disk space a particular user or process can use. These are things that Solaris has had for a long time... and it seems that Linux kernel developers aren't interested in adding those features... how can Linux hope to take over the enterprise server market without them?

    Does anyone have any info on what's happening in the area of adding resource management features to the Linux kernel?

    Actually, any info on what cool features they are working on for future releases would be appreciated.
    • When it comes to how many resources a particular user can, well, use, PAM is your friend. It can limit data size, RAM usage, CPU time, number of processes, and user priority.
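
      (For reference, pam_limits applies these limits through the kernel's setrlimit(2) interface, the same mechanism behind the shell's ulimit. A minimal sketch of setting equivalent limits directly; the specific values here are made up for illustration:)

        #include <stdio.h>
        #include <sys/time.h>
        #include <sys/resource.h>   /* setrlimit(), struct rlimit */

        int main(void)
        {
            /* Cap CPU time at 60s soft / 90s hard; the process gets
               SIGXCPU once it exceeds the soft limit. */
            struct rlimit cpu = { 60, 90 };
            if (setrlimit(RLIMIT_CPU, &cpu) != 0)
                perror("setrlimit(RLIMIT_CPU)");

            /* Cap the data segment (a rough "RAM usage" limit) at 64 MB. */
            struct rlimit data = { 64 * 1024 * 1024, 64 * 1024 * 1024 };
            if (setrlimit(RLIMIT_DATA, &data) != 0)
                perror("setrlimit(RLIMIT_DATA)");

            /* Limits are inherited across fork()/exec(), which is how a
               PAM session can constrain everything a user then runs. */
            return 0;
        }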

      The Linux kernel... besides stability... what sort of things do they want to add/improve?

      There is a list of the new features in 2.5 here [kernelnewbies.org].

      In summary:

      Performance
      • Major rewrite of the disk I/O layer, meaning better hard disk performance for Joe User and high-end database servers alike
      • New and faster scheduler
      • Kernel preemption for better interactive performance
      Features
      • ALSA sound infrastructure
      • Video for Linux redesign
      • ACPI support and other power-management patches, especially a new software suspend-to-disk feature that does not involve Windows-specific BIOS magic
      • Lots of high-end features (high memory, 64-bit processor support, per-CPU infrastructure, hot-swap CPUs, etc.)
      • JFS - Journaling filesystem from IBM (where's XFS?)
      • Bluetooth
      • USB-2.0
      Security
      • Access Control Lists (ACLs), which give fine-grained security
      • Per-process namespaces (some Al Viro hackery. Someone please tell that man to slow down a bit)
      • Pluggable quota system
      And as usual a lot of new driver updates.

      Suspiciously missing are any memory management patches (although Rik has his reverse-mapping patch in the pipe). Perhaps the topic is still a little too hot... ;-)

      The most important thing for me would be resource management features... such as being able to allocate how much CPU, memory, or disk space a particular user or process can use. These are things that Solaris has had for a long time... and it seems that Linux kernel developers aren't interested in adding those features... how can Linux hope to take over the enterprise server market without them?

      I think that with the current kernel you can already do much of this. But some of the new features of the 2.5 kernel allow for much more fine-grained control, like binding a process to a specific CPU, better quota accounting, etc. Perhaps that's what you're looking for?

      The direction of the 2.5 kernel seems to me to be mainly (but not exclusively) targeting enterprise systems.
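
      To make the CPU-binding point concrete, here is a minimal sketch of the affinity call that went into 2.5. One caveat: this uses the cpu_set_t-style glibc wrapper; the exact prototype shifted around while the 2.5 interface settled, so treat it as illustrative rather than canonical.

        #define _GNU_SOURCE
        #include <stdio.h>
        #include <sched.h>      /* sched_setaffinity(), CPU_ZERO(), CPU_SET() */

        int main(void)
        {
            cpu_set_t mask;

            CPU_ZERO(&mask);
            CPU_SET(0, &mask);  /* allow this process on CPU 0 only */

            /* pid 0 means "the calling process". */
            if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
                perror("sched_setaffinity");
                return 1;
            }

            /* From here on the scheduler keeps the process on CPU 0 --
               e.g. pinning one database worker per CPU. */
            return 0;
        }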
  • by Nighttime ( 231023 ) on Friday July 12, 2002 @08:20PM (#3874532) Homepage Journal
    ...would be a good idea IMHO if this kept Linus away from working on the stable branch.

    Look at what happened with 2.4: we had the VM change; 2.4.11, which needed immediate patching and is tagged as dontuse; 2.4.13 with similar problems; and 2.4.15-greased-turkey, released by Linus for Thanksgiving with a nice syncing problem.

    When it comes to deciding what is and is not allowed into the kernels, the buck stops with Linus. This is why I think Linus should stick with the development kernels, where a major change can have all its kinks worked out in relative safety. The stable branches should be maintained by someone who only has authority to accept and apply bug fixes.
    • Wow... what skill. You condensed like 8 pages of debate into 3 short paragraphs. And this is the best point I've heard so far -- the guys on the mailing list are rather averse to saying that Linus couldn't maintain a kernel to save his life, so just get {Alan, Marcelo, et al.} on stable as soon as it comes out, and branch the dev one shortly thereafter.
  • Slashdot (Score:3, Interesting)

    by Catskul ( 323619 ) on Friday July 12, 2002 @08:32PM (#3874582) Homepage
    Most of our opinions on this really don't matter. I have run unstable kernels for a long time and never had any trouble with them. The only time it is really an issue is with production servers, which most of us don't run. I think those of us who don't run production servers should refrain from submitting our opinions and leave the line clear for those whom it really affects.
    • The only time it is really an issue is with production servers, which most of us don't run.

      I want and need my kernel to magically just run. Yes, I'm a Debian maintainer running unstable on my own machines, but kernels are neither my forte nor my interest.
    • All our opinions matter. It is just that they are not all heard where they need to be. Even if we don't run production-level servers, wouldn't we want to see kernel development work a little more smoothly and not have major problems arise in a stable release, such as the VM in kernel 2.4.11?

      Isn't the kernel made by the people, for the people? I think we should all voice our opinions, and if some get heard, great; maybe they will make a difference.

      I, for one, would like to see the development kernel forked as soon as possible, with Linus continuing his work on that, and the new stable kernel maintained to the point where it is stable and free of bugs.

      • I think they are heard, and considered, where it matters. Slashdot was the first place I saw a big clamor for the "instant branching" that is proposed here.

        Even if Cox or Linus isn't reading regularly, it helps ideas get mindshare with lots of smaller players, who can spin the debate threads on LKML toward one side or the other.
        That doesn't mean Linus et al. will agree, but his judgement in the past has usually been pretty good, which is a good thing, because democracy is not a particularly good method of software engineering.
    • You are right in a way. The arguments about kernels don't affect most of us, though we think it does (me, too, even though I know better).

      The thing is, most of us run the kernel that our distribution provides us. Most of us wouldn't really gain much by doing otherwise (though USB 2 sounds quite interesting). And we choose our distribution based partially on how stable we want our system to be. There seems to be a kind of order that runs roughly...
      Debian-stable, Red Hat, SuSE, Mandrake, Debian-unstable, other

      This is grossly oversimplified, as stability isn't the only variable here, but the people for whom stability is more important cluster toward the left, and those with other priorities cluster toward the right.

      And what we are talking about is stability of the distro, not of the kernel. The kernel is a small part. (For the more experimental people, the kernel may not even be the one supplied by the distro.)

      What most people are really doing is dreaming of the fabulous "next release" when all of the unnamed marvels will be given to them. It never happens, though incremental improvements happen all the time, and there's always lots of new eye-candy.

      I know this is happening, and even so it still happens to me. Watching it happen makes me feel silly, but it's fun! So I just don't take it too seriously. I'm just glad that I lust after new software more than after Ice Cream! (I already have enough troubles with that).
  • Just my thoughts (Score:4, Insightful)

    by young jedi ( 311623 ) on Friday July 12, 2002 @08:47PM (#3874627)
    If 2.7 begins before 2.6 is stable, aren't we in danger of seeing a win9x syndrome, in which bugs live forever and, instead of being fixed, get coded around? I fear very much the long-term effects on the kernel, and in turn on Linux, if the trees are split prior to a stabilization period. I am a developer (not at this level), but I have seen the effects of splitting a code base simply to continue developing while trying to patch existing "production code" at the same time and porting things back and forth. It is a very bad idea!! Usually what happens is that things don't get backported; they are only provided during a major upgrade. Again, the Microsoft way of bug fixing.

    Granted, you will always have some cross-patching; however, I think the idea of building off a clean base is very important. For example, you would not put new tires on your car if the engine is not running, right?

    Essentially, I think the issue here is one of knowing the base is clean versus drudging on in the dark despite the fact that you have been offered a lantern.

    To put this most bluntly, I would call this the Microsoft syndrome. As I said before, win9x is the perfect example of a system that was never stabilized; rather, it was constantly re-released to the unsuspecting public as upgrades which were really bug fixes, and the monkeys went back to the keyboards, never addressing the issues raised by numerous consumer requests against the so-called production release, because the devel team would rather work on that new feature, it being more interesting than maintaining the existing code base.

    I am being harsh here, I know, but I am trying to view this in the long term. I feel that this would weaken the kernel and, as I said, weaken Linux, which would in the end at least decrease corporate trust in the stability of Linux, or at worst give M$ what it wants: Linux's death.

    Maybe I am extreme (feel free to beat me), but I know you have to have a clean starting point before you can move forward; otherwise you will constantly be taking steps backwards, which eventually leads to stagnation and death.

    Just my thoughts
    • If 2.7 begins before 2.6 is stable
      Heh! 2.6 should be a 2.5 polished to extreme stability! Nothing else! It should be as stable as possible; otherwise it's not worth changing from 2.5 to 2.6.
      Hmm, maybe it would be a good idea to put out something like 2.5pre? People would know that it's a kernel that is going to be stable, and would test it as nearly stable.
      • You may be right, but I have never seen the first release of a major version be perfect, in any software written by anyone! That is why .01, .02, and so on come out so fast right after the initial release.
      • I believe that there's some confusion here.

        I have heard that:
        A stable version release is one that has specified interfaces that don't change during the minor version releases.

        This isn't the same as a bug free release, though it does imply that a certain kind of bug has been fixed.

        Bug-free releases don't happen. They aren't going to happen. And nobody expects them to happen. During the development process of changing the interfaces, attempts are made to avoid introducing new bugs, and to fix the ones that are introduced. These aren't totally successful. Don't expect them to be.

        Actually, until the interfaces have been frozen you can't really fix all the bugs. It really isn't possible, even for perfect programmers. So only the really daring would even think of using 2.6.0 for anything that couldn't be reconstructed instantly. When inherent problems are found with an interface in a major version, the only solution is to work around them until the next major version is ready. And you can't really know ahead of time. All you can do is try to get a wide variety of people to test it. And you can never get a wide enough variety of testers to give good reports. Guaranteed.
        • I agree. This is why I want to re-introduce the system of "pres":
          2.5.xx should be the last development version;
          2.5.xxPre-1 should be a frozen version;
          2.5.xxPre-2 should be a frozen version with some bug fixes;
          2.5.xxPre-N should be the version where almost everybody agrees that it's stable, with 99.99% stability;
          2.6.0 will be the first production stable kernel;
          2.6.1 will fix unexpected bugs, etc.;
          2.7.0 will be the next development kernel, introduced when 2.5.xxPre-1 is issued.
    • No. The same problems don't happen, for many reasons. One is that because the stable and unstable versions are worked on concurrently, you still have fixes done to the stable kernel. Those who want stability use 2.(even); those who want to make new features, or make an application that uses new features, work on a 2.(odd). (Note: 2.(odd) really is just for developers. The features aren't anything a user could want, because there aren't applications that use them yet; thus the developers using it.)

      Secondly, it's important to know that when you get down to the distribution level, there are still backports to 2.2 (possibly even 2.0, but I don't know; I use a 2.2 kernel). This is why you'll see a version like 2.2.18-23. It means it's the 2.2.18 kernel from the Linux kernel guys, with 23 revisions of backported bug fixes (the same thing applies to other packages). This is really one of the strengths of open source and shouldn't be taken lightly. It is one thing that sets Linux apart from the MS line.
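
      To make the numbering convention concrete, here is a throwaway sketch (even minor number = stable series, trailing -NN = vendor revisions; the parse format is just for illustration, not any official scheme):

        #include <stdio.h>

        int main(void)
        {
            const char *version = "2.2.18-23";  /* the example from above */
            int major, minor, patch, rev = 0;

            /* "major.minor.patch" with an optional "-rev" vendor suffix;
               sscanf() leaves rev at 0 when the suffix is absent. */
            if (sscanf(version, "%d.%d.%d-%d", &major, &minor, &patch, &rev) < 3) {
                fprintf(stderr, "unparseable version: %s\n", version);
                return 1;
            }

            /* Prints: "2.2.18-23: stable series, upstream 2.2.18,
               23 vendor revision(s)" */
            printf("%s: %s series, upstream %d.%d.%d, %d vendor revision(s)\n",
                   version,
                   minor % 2 == 0 ? "stable" : "development",
                   major, minor, patch, rev);
            return 0;
        }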
  • I really don't give a shit how they manage releases.... AS LONG AS IT WORKS.
    • Come on, this is Slashdot! We don't want discussions about managing open source software projects, geesh...

      joe.

      ps. not flamebait, but genuinely funny.

  • by Anonymous Coward
    Release version 2.71828 and call it quits.

  • by renehollan ( 138013 ) <[rhollan] [at] [clearwire.net]> on Friday July 12, 2002 @09:04PM (#3874684) Homepage Journal
    Look, no matter when you release 2.6, you will want to change it. So, if you never want to change it, you will never release it.

    So, the best time to let 2.6 "escape" is when you're fairly confident it's "ready" and won't need patching.

    Of course, you'll be wrong -- it will need patching, or backports of useful features that just didn't make it in time.

    But, the idea is that these patches or backports should be trivial "oopses" where the change does not require massive code review, or the backport is clearly something that was "99% done" already.

    So, my suggestion is: release 2.7, and hold off on releasing 2.6 until the obvious release-related "oopses" are found, say 1-2 weeks, then try your best to release a 2.6 that won't need patching. It will anyway, but don't lose sleep over it.

    • This is nice in theory, but the problem is that most of the release-related oopses aren't found until 2.6 is released and 20x the number of people start using the newly labeled "stable" kernel.

      That's why Linus tried to release 2.4 when it wasn't quite ready... it wasn't improving fast enough to ever be ready...

      Doug
      • Your point is noted, but you can't have it both ways:

        Either issue a release candidate, with the intent of catching release-related oopses;

        or branch the next version, release it, and backport release-related oopses;

        or try your best, release, and patch release-related oopses to the stable branch either at the same time as the unstable branch, or delay forking the unstable branch for a while.

        As you note, the first two approaches don't give enough feedback. The third results in less than perfect releases. Without a release test plan in place, I don't really know how this problem can be avoided, short of getting people to test "release candidate" releases. In the old days, that's what gamma tests were for -- beta tests from an end-user perspective.

  • by mojumbo ( 464529 ) on Friday July 12, 2002 @09:24PM (#3874756)
    Anyone who runs production systems expects (demands?) even-numbered releases to be stable.

    There's no serious Linux admin out there who wants to have to test a new, supposedly "stable" kernel for a week before deploying it on a bunch of mission-critical boxes. Say I want/need a feature in the new release of the "stable" kernel; should I expect anything less than a kernel that is rock solid? There are people still running 2.2 series kernels because of the whole 2.4 feature-creep fiasco.

    All the stability issues should be worked out before a kernel is considered "stable." Seems to make sense to me...
    • >Anyone who runs production systems expects (demands?) even-numbered releases to be stable.

      That's right! If it isn't stable, you'll take your OS dollars somewhere else.

      Oh, wait...

    • Actually, I demand that my vendor do sufficient and appropriate testing before releasing a kernel. I don't care what branch it comes out of, so long as they will support it. Beyond my distro vendor, I also need support from my major apps vendors. Rational, for instance, will not support a Red Hat kernel for 90 days after errata is posted, so it usually means we are at least one minor revision behind the bleeding edge. But it doesn't matter, because Red Hat backports all important (read: security and critical bug) fixes and supplies them to us via internal up2date processes.
  • by iabervon ( 1971 ) on Friday July 12, 2002 @09:43PM (#3874807) Homepage Journal
    Unstable series often start off with versions which break everything, because whatever fundamental change is first up for the series has gone in and the drivers and so on haven't been updated. It was a long time into 2.5 before it was really sensible for people to work on it (aside from the bio work), and people were actually doing their development on 2.4 even after 2.5 had started. In part, this wasn't even an issue of stability: Linus just wasn't taking patches to other subsystems.

    If 2.7 starts when 2.6 comes out, and major changes go into 2.7.1, people will stay on 2.6 until the first major set of changes in 2.7 has stabilized. Provided that the first thing under development in 2.7 isn't broken in 2.6 (in which case the people who could fix it would be working on 2.7), everyone important to fixing obscure bugs in 2.6 will still be working on 2.6, but sitting on their patches, because they can't go into 2.6 (they're not fixes).

    As the interfaces for 2.7 (where they differ from 2.6) become known, people will start using them; but until that point, 2.6 and 2.7 are basically the same, except that you can get 2.6 running to develop on.
  • by tlambert ( 566799 ) on Friday July 12, 2002 @10:04PM (#3874875)
    [ ...Putting on my "politically incorrect" hat... ]

    It's a common Open Source Software problem: there is the last release, and there is the development branch.

    Developers would all prefer that you use the development branch, report bugs against *that*, provide patches for the bugs against *that*, and do all new work in the context of *that*.

    But it's not how things work, outside of an Ivory Tower.

    In the real world, people who are using the system are using it as a platform to do real work *unrelated to developement of the system itself*.

    I know! Unbelievable! Heretics! Sacrilegious!

    FreeBSD has this disease, and has it bad. It very seldom accepts patches against its last release, even in the development branch of the last release, if those patches attempt to solve problems that make the submitted work look suspiciously like "development". The cut-off appears to be "it fixes it in -stable, but would be hard to port to -current; do it in -current, as your price of admission, and back-port it instead, even if you end up with identical code".

    The only real answer is to keep the releases fairly close together -- and *end-of-life* the previous release *as soon as possible*.

    The FreeBSD 4.x series has lived on well past FreeBSD 4.4 -- supposedly the last release on the 4.x line before 5.0. FreeBSD 4.6 is out, and 4.7 is in the planning stages.

    It's now nearly impossible for a commercially paid developer to contribute usefully to FreeBSD, since nearly all commercially paid developers are running something based on -stable. FreeBSD -current -- the 5.x development work -- is *nearly two years* off the branch point from the 4.x -stable from which it is derived.

    Linux *MUST* strive to keep the differences between "this release" and "the next release" *as small as possible*. It *MUST NOT* "back-port" new features from its -current branch to its -stable branch, simply because its -current branch is *UN*stable.

    Delaying the 2.6 release until the 2.7 release so that you can "stabilize" and "jam as many 2.7 features into 2.6 as possible" is a mistake.

    Make the cut-off on 2.6. And then leave it alone. People who are driven by features will have to either run the development version of 2.7, or simply wait.

    Bowing to the people who want to "have their cake and eat it, too" is the biggest mistake any Open Source Software project can make.

    Don't drag out 2.7 afterward, either... and that's inevitable if everything that makes 2.7 desirable is pushed back into 2.6. Learn from the mistakes of others.

    -- Terry
    • And here I was about to mention the positives to mirroring FreeBSD's development model.

      I think you're being terribly naive about this, particularly the It's now nearly impossible for a commercially paid developer to contribute usefully to FreeBSD comment. Development work on FreeBSD succeeds on both fronts.

      Were commercial development to be done on FreeBSD, there would be as much of a shift in the development process as there has been in the shift from 'regular development' to entire teams at IBM submitting patches to Linus. At first this seems unviable as well.

      Plainly, it is conceivable that were a commercial team to submit changes to -CURRENT, they would be timed for integration into -STABLE in the same manner as current changes are. And make no mistake, a *lot* of things are folded back from -CURRENT on a regular basis. They may have a lengthy test period, but hey, shouldn't every new feature?

      I think the Linux development community will only benefit from a move such as this. It keeps the stable kernels stable, and it folds in new changes in an orderly and well tested fashion. I'm looking forward to seeing it happen.

      • [ Dammit, I hate people who use cookies instead of hidden fields for forms ]

        "I think you're being terribly naive about this, particularly the It's now nearly impossible for a commercially paid developer to contribute usefully to FreeBSD comment. Development work on FreeBSD succeeds on both fronts."

        I have built or contributed to 5 embedded systems products based on FreeBSD. If you count licensing the source code to third parties, that number goes up to 12. The list includes IBM, Ricoh, Deutsche Telekom, Seagate, ClickArray, and NTT, among others.

        There has been no case where any of these projects has involved the use of FreeBSD -current. It just does not happen: the intent of the commercial developer is to work on the product itself, not on the platform on which the product is intended to run. Toward that end, every one of these projects has used a stabilized snapshot of FreeBSD, usually a release, and, on only two occasions, a -security (release plus security bug fixes) or -stable (release plus any bug fixes) branch. Under no circumstances has an employer *paid* me to work on -current on their time.

        There are notable exceptions to this practice, where there have been specific DARPA grants, or at Yahoo, which has a number of highly placed people who get to pick what they work on; but these opportunities are few and far between.

        "Plainly, it is concievable that were a commercial team to submit changes to -CURRENT, they would be timed for integration into -STABLE in the same manner as current changes are. And try not to make a mistake, a *lot* of things are folded back from -CURRENT on a regular basis. They may have a lengthy test period, but hey, shouldn't every new feature?"

        Your argument is that FreeBSD -current and FreeBSD -stable bear a strong relationship to each other, besides each having the prefix "FreeBSD", and that integration into -current means that testing will be done, and that after testing, integration into -stable will happen.

        I disagree strongly. The two source bases run different tool chains, and they are significantly different under the hood as well. It is nearly impossible to write kernel code that operates identically, without substantial modification, on both -stable and -current. The differences in process vs. thread context and in locking *alone* mean that there is not one subsystem in the kernel that has not been touched. This ignores the semantics and API changes, etc., on top of that.

        Despite back-porting, there is nearly two years' difference between -stable and -current. I have a system that I updated to -current in October of 2000. It claims to be 5.0. FreeBSD has not made a code cut off the HEAD branch in that entire time -- or FreeBSD 5.0 would have been its name.

        It would be a serious mistake for Linux to follow FreeBSD down this path. Linux should continue to make release code cuts off its HEAD branch, stabilize them, *and then deprecate* the releases.

        FreeBSD has failed to deprecate its branches following releases. This means that if a commercial developer wants to contribute code for inclusion in future versions of FreeBSD in order to avoid local maintenance (FreeBSD, unlike Linux, does not *require* such contributions; it relies on this and other emergent properties), they must first take their code from where it's running and port it to an entirely *alien* environment. Then they must wait for approval, and then back-port it themselves, since no one is going to do the work for them, to the minor version after the one they stabilized on for their product.

        "It keeps the stable kernels stable, and it folds in new changes in an orderly and well tested fashion."

        `Orderly' and `tested' are one thing. Two *years* of API and interface evolution are something else entirely.

        The first time Linux cuts a distribution that isn't a pure maintenance point release of a minor number (e.g. NOT 2.5.1 off 2.5, and NO 2.6 off of 2.5 if a 3.0 is in the works), it will have effectively forked itself.

        -- Terry
        • on only two occasions, a -security (release plus security bug fixes) or -stable (release plus any bug fixes) branch.

          -STABLE is release plus feature additions.

          FreeBSD would certainly have to change its development model to appeal to the current tides facing Linux. It has perhaps been protected to date by the fact that the project hasn't tried to appeal to outside influence, and has become unattractive in some development situations because of this.

          I think we agree that, due to the nature of development done on the Linux kernel, stable and testing would have to remain very compatible and very close to each other in the pipeline.

          Whether this is an appropriate move for FreeBSD is a topic for another slashdot article. :P

  • If, after 2.6, you immediately start working on 2.7, then maybe you didn't do a good enough job on 2.5.
  • What if we take another approach? When stable kernels were released in the past, they were not tested enough to be called stable. Hordes of new users would find hundreds of bugs, and the developers had to fix them instead of doing new development.

    Would starting the new development branch immediately after the stable release help? Hardly. It's the time when a lot of work has to be done on the stable branch.

    But what if we make sure that the stable kernel is indeed stable when it's released, not after the "stabilization"? The only way to make the kernel stable is to test it a lot before it's released.

    I don't think we should be afraid of a "Debian syndrome". The kernel is much more monolithic than a distribution, and if, e.g., IDE doesn't work well, it takes much more effort to downgrade it safely compared to downgrading, e.g., Mozilla.

    The fundamental problem with the development branch is that issues with one part of the kernel affect all developers and testers. If I, e.g., want to test ACPI and know how to fix it, but I don't know how to fix IDE, I won't test the latest 2.5 kernel.

    I believe that the best solution would be to have branches for different subsystems. IDE changes would be merged to the trunk only when they are stable enough for other developers. It's important that the development on the branches is done openly, step by step, so that an interested developer could find the exact place where a bug was introduced. But this style of development doesn't require doing everything in the trunk. In fact, to keep the kernel relatively stable the development should be done on specialized branches.

    A more stable development kernel would mean more testers. More testers would mean a stable release that is truly stable, at least compared to 2.2.0 and 2.4.0. And that would eliminate the need to force developers to stabilize a branch that is supposed to be stable from the beginning.

  • My $0.02... (Score:4, Interesting)

    by Zinho ( 17895 ) on Saturday July 13, 2002 @03:06AM (#3875876) Journal
    Couldn't the problem be solved by branching the unstable first, then releasing the stable branch when it's ready?

    For example, let's say that we're happy with the feature set in the 2.5 unstable series. Instead of waiting for all of the bugs to get shaken out to call it 2.6, just switch from 2.5 to 2.7 on the unstable development side. Linus can pass the reins off to someone he trusts, we can have a GROF (Get Rid Of the Finn) party, and his trusted lieutenant can finish stabilizing 2.5 into 2.6 without him.

    This solves the problem of wanting to keep back-porting features from 2.7 into 2.6, it allows time to make sure the 2.5 code is stable before public release as 2.6, and it provides a clear feature-freeze mechanism: once Linus is gone, it's bugfixes only. If you want the new features, run the unstable kernel or wait for 2.8 (released sometime after 2.9 is branched).

    Not that my opinion matters at all, it's just an idea.
    • I agree. I've been saying this in public forums for years, and hopefully this idea (which is standard in almost every successful development house) will get adopted for Linux.
    • For example, let's say that we're happy with the feature set in the 2.5 unstable series. Instead of waiting for all of the bugs to get shaken out to call it 2.6, just switch from 2.5 to 2.7 on the unstable development side. Linus can pass the reins off to someone he trusts, we can have a GROF (Get Rid Of the Finn) party, and his trusted lieutenant can finish stabilizing 2.5 into 2.6 without him.

      The thing is, what needs to be done on the future 2.6 branch to stabilize it would also benefit 2.7. So the point of the current development model is to keep only one branch until it's really stable, then create the next development branch from something which is sane. As a bonus, you don't do the same work twice (stabilizing the two branches for the same issues).

      Now, if Linus is not the one you want in charge of this, he could always back out of the last stabilizing efforts (IIRC, he doesn't particularly appreciate that part of development).

      Branching before having a stable release would only cause both branches to diverge too much (especially in terms of bug fixes and drivers). And if you never have a "stable" development branch, it's kinda difficult to develop effectively on it. For example, see the current IDE situation in 2.5: to really develop on 2.5 at the moment, you need a SCSI machine, because IDE seems too broken. Of course it will stabilize for 2.6, but if 2.7 were branched now and IDE were then stabilized in 2.6, 2.7 wouldn't necessarily get all the fixes (or some fixes wouldn't make it into 2.6), if only because of communication problems.

      So I think they'd better work on 2.5 for now (obviously), then get into feature freeze, stabilize, release 2.6, wait a couple of minor releases for it to be really stable (as in, you'd be comfortable using it on low-criticality production machines after enough testing), and then start 2.7. Then 2.7 will start from a "known good" state.
  • OK, so as background, I just woke up ~5 minutes ago, so the coffee isn't finished brewing, much less finding its way into my body yet... I read the headline and the first thing that came to mind was "What? OpenBSD 2.6/2.7? I manage those releases by keeping them neatly stacked in my pile-o'-unixen under 2.8 and 2.9..." Then my brain assimilated the fat little bird under the topic, some gears churned, a little smoke came out, and I realized we were talking about Linux... ;-) The moral of this story, I think, is don't read Slashdot right after you've woken up...
  • Either way is wrong (Score:2, Interesting)

    by IkeTo ( 27776 )
    Okay, what is "stable", really? What does it mean to release 2.6.0?

    To me, 2.6.0 means "okay, this is what we can possibly get if only developers are running the code. We have tested our kernel, we have high confidence that it will work for you, but, you know, there are surprises. So do try it out, if you can. We promise that if you find problems and tell us, we will put you to the highest priority, so that you don't have to fall back to 2.4.XX."

    What is 2.7.0? People say that it means "okay, now we have 2.6.Y stable, we can pretty much ignore it. Let's put it in the hands of Xyz Abc, the new maintainer of the 2.6 series, and new work will be placed in 2.7.ZZ". But I don't like this view. It ignores the possibility that new things can land directly in 2.6.XX. This happened quite frequently in 2.4.XX, actually, and it does work.

    I believe the real reason for 2.7.XX is that "after some use, we find that 2.6.XX has the following stupid problems. It could also be improved if we don't do things this way, but instead do things that way. But these things are so fundamental to 2.6.XX that if we ever change them, we can no longer make the claim we made when we rolled out 2.6.0. They really need to be done, though, but we prefer that people not use the result yet; we developers will try to make things work again after they break, and once every developer can reasonably make the claim we made when we delivered 2.6.0, we will roll out 2.8.0, when all of you can try this neat new way of doing things. Until then, please stick with what we have in 2.6.XX."

    If that reasoning stands, then what 2.7 is really for is new APIs: ones that can cause everything else to break. I'd say, once we know what new API we want to create, we should create 2.7.0, *regardless* of whether 2.6.XX is stable enough or not. It is absurd to be afraid that stabilization of 2.6.XX will slow down because of the existence of 2.7.YY: preference is always given to 2.6.XX if things go wrong there. The real problem with releasing 2.7.0 too early is that many things get implemented too quickly, while most of the API changes are still up in the air, forcing most things to be written again, perhaps many times. When that "up-in-the-air" problem goes away (or has settled to the point that we want to write code and see what happens if we really do things the new way), there is no excuse not to release 2.7.0. Further delay only makes sure that the next kernel will arrive late again.
  • Slightly offtopic:

    It's really frustrating not to have a Linux kernel bug-tracking system available. Searching through the huge lkml mailing list just doesn't cut it. And some questions pop back up every month, with no sign of ever being addressed.

    Example: the Athlon/VIA chipset freeze bug. Is it a chipset bug? A bios bug? A kernel bug? Is it fixed? Is it AMD's fault? Is it VIA's? Is it Linus's? Is it a PCI latency problem? An IDE problem?

    Who the fuck knows!
  • This is an honest problem with Linux credibility. The way I see it, Linux needs to get on a serious business/seasonal calendar: provide one ultra-stable release per year and one continuous development release per year. Also provide a compatibility suite for each level of the OS: one for the kernel, one for X11, one for KDE/GNOME, and one for apps. These would be key programs that all sub-versions of the stable even-numbered release must be able to run! The ideal would be a Gentoo-style system merged with CVS controls, bug reporting, and problem-handling forums that developers could also use to concurrently grab the latest code and stamp out bugs faster before the actual release. Linux will only grow by leveraging the real power of the Internet, the mass concurrency of code, and by extending to automated forums and distributions that take care of themselves without user intervention.
