Time for a Linux Bug-Fixing Cycle

AlanS2002 writes "As reported here on Slashdot last week, there are some people who are concerned that the Linux Kernel is slowly getting buggier with the new development cycle. Now, according to Linux.com (Also owned by VA) Linus Torvalds has thrown his two cents in, saying that while there are some concerns, it is not as bad as some might have thought from the various reporting. However he says that the 2.6 Kernel could probably do with a breather to get people to calm down a bit."
  • by WillerZ ( 814133 ) on Tuesday May 09, 2006 @07:36AM (#15291952) Homepage
    As a user, I preferred the old odd/even unstable/stable code split; I'd run .even at work and .odd at home.

    I suppose if you buy your linux off the shelf you can complain to your vendor, but for home users looking to do some DIY kernel building the new way is a bit worse. However, I suspect we're a dying breed...
    • The current system facilitates this as well -- I run 2.6.anything.something-high on my servers and 2.6.anything at home and it works quite well. The -stable team are really providing an excellent service with their work beyond the 3rd dot, and they let the mainline kernel move at a quicker pace than the alternating odd/even system did.
    • by s31523 ( 926314 ) on Tuesday May 09, 2006 @08:16AM (#15292123)
      I wouldn't so much say we're a dying breed... Rather, I would say that the number of people who do their own kernel building is growing, but the number of people who just buy a distro, install it and "hope everything just works" is growing much, much faster. That can be viewed as a good thing, since more people using Linux will cause commercial vendors to take note and support Linux more readily. Although, I will miss being that nerdy guy who doesn't run Windows...
      • You can keep your prized nerdiness by switching distros. While the rest of us use Ubuntu to get work done, you can have fun compiling Gentoo, or even better LFS.
        • "While the rest of us use Ubuntu to get work done"

          I hear this a lot but it goes against all my experience. Usually the people I meet who compile their kernels and do other geeky things tend to get way more work done than the people who want everything dropped in their laps.

          Am I hanging out with a different crowd than you? The people I meet who use computers while not understanding anything about them tend to be some of the least productive people in any business. It's always the savvy guy/girl who can use th
    • I don't; the new model works much better for me (and my customers) because:

      • My customers want a stable, certified environment where their applications work reliably. Commercial linux distros (RedHat, SuSE) provide this very nicely. The kernel used there is usually quite old, but remains the same over years.
      • At home I want the leading-edge stuff to play with. Stability is less important, but support for the latest gadgets is. The kernels I use tend to be the latest from kernel.org or gentoo (if I'm lazy...).

        I'm

  • by mlwmohawk ( 801821 ) on Tuesday May 09, 2006 @07:44AM (#15291982)
    I have been using Linux since the early 1990s and I've been a software developer for almost 30 years. One thing concerns me, and I think this recent indictment is just a symptom of a larger problem.

    The problem is that the drivers have to remain in constant flux because the kernel API is always changing. Now, when there are a limited number of drivers, this means that you can move quickly on the kernel. As you add more and more drivers, you add more and more work to keep the drivers updated. Eventually, there is more work needed to update the drivers than modify the kernel, and the drivers become your sticking point.

    This is where I believe Linux is stuck. Linus and the kernel team have to look at the various kernel APIs and standardize them with the next release.

    Sorry guys, time to grow up. Linux *is* mainstream!
    • The problem is that the drivers have to remain in constant flux because the kernel API is always changing. Now, when there are a limited number of drivers, this means that you can move quickly on the kernel. As you add more and more drivers, you add more and more work to keep the drivers updated. Eventually, there is more work needed to update the drivers than modify the kernel, and the drivers become your sticking point.

      No - The kernel API (whilst not set in stone) is quite stable & doesn't change ofte
      • "why should the linux kernel team care about people who want to leverage the linux kernel without contributing their source code back"

        Because most companies in this position make hardware, not software. Until the open source hardware movement takes off (and there's good reasons why it never will), they make their money from their hardware and doing anything that gets in the way of this is a bad thing.

        It's not about not giving anything back (the company i used to work for released closed source drivers, and
      • So, if you have a Linux kernel driver that is not in the main kernel tree, what are you, a developer, supposed to do? Releasing a binary driver for every different kernel version for every distribution is a nightmare, and trying to keep up with an ever changing kernel interface is also a rough job.

        Simple, get your kernel driver into the main kernel tree (remember we are talking about GPL released drivers here, if your code doesn't fall under this category, good luck, you are on you
        • No, this sucks, I respect the GPL and other open source licenses (BSD) as well as closed source licenses. If nVidia or ATI or any other hardware manufacturer do not want to license their software as GPL it is their decision. The operating system MUST provide a standarized API.

          If you really respected the GPL so much, you'd have read it. Binary kernel modules are forbidden by a strict interpretation of the GPL; kernel developers have merely tolerated them. Notice the warning in dmesg when you insert nvidi

          • They let it go because they don't know if the driver is a *derivative work* until they sue for the details. A judge might not think a patched Windows driver with a compatibility layer is a derivative work of the kernel...
        • No, this sucks, I respect the GPL and other open source licenses (BSD) as well as closed source licenses.

          Agreed. Open source is a choice, and not choosing to open-source a driver package does not immediately or synonymously make a company evil. Most people want Linux to start playing in the same space as Windows (well, at least OSX) in terms of user numbers. This will never happen unless hardware vendors are allowed to create binary drivers for their products.

          Look at the video card space - drivers can so

          • But I'll take the card without that 20% and open source drivers any day.

            Your +20% card will be -20% card in a year anyway.
          • Most people want Linux to start playing in the same space as Windows (well, at least OSX) in terms of user numbers.

            "Most people" meaning all the people that want Linux to just work without giving anything to it to make it better. The folks who want a free (as in beer) Windows. Don't get me wrong, I love that fact that my Mom uses Linux and she doesn't give anything back to it. And I honestly can't say I am a major OS contributor other then advocacy and local support for my community but I also have no inter
            • But she does give something back to it.
              For a start there is mindshare - it might not get you many geek points, but the fact that the average person on the street is starting to hear of and use Linux helps everyone, especially those who develop.
              Second, if she chooses hardware that is supported under Linux, then she is helping the case for more people supporting Linux hardware - the more money people make from selling compatible hardware, the more effort companies will put into drivers.
              Third, with any luck it'
        • by just_another_sean ( 919159 ) on Tuesday May 09, 2006 @09:03AM (#15292332) Journal
          The operating system MUST provide a standarized [sic] API.

          People who code free software MUST not do anything unless they feel like it. Sure some of them might get paid by Company X to develop Driver Y or Application Z but they do so on the shoulders of what's already been put in place by free software developers.

          If Linus and the rest of the kernel developers decide at some point to provide an ABI that proprietary companies can use to build their drivers, all the while clinging to their dated business methodologies and obsession with "IP", then great, that's their choice. It might take a Herculean effort to get all those copyright holders to agree and do it but if they can then that's up to them.

          Conversely, if they choose not to, they are under no obligation to provide anything. Nobody on the kernel team, IMHO, ever got together and said "we need to start coding and provide some free software so companies with no interest in participating in the process can take our free software and make some money selling hardware". They do it for themselves, their friends and family, their community. Whether or not ATI and NVIDIA want to be a member of that community entitles them to exactly nothing.
          • The current situation is ridiculous, though, regardless of whether or not you believe that binary drivers are acceptable.

            First of all, I don't believe that Linus et al refuse to provide a stable kernel API merely so they can snub companies who only release binary drivers (although obviously it is perceived by many as a nice side effect).

            Providing a stable kernel API would provide substantial benefits for open source drivers as well in terms of reduced maintenance. This would especially be true for hardware
            • Agreed. And I didn't mean for my post to come across as another typical "it's free damn it, if you don't like it build your own kernel" rant.

              I just don't understand where people come off dictating what should or shouldn't happen to free software based on the needs of ATI, NVIDIA or any other commercial entity that tries to reap the benefits of free software all the while snubbing the development model that makes it possible in the first place.

              Now you could argue that ATI and NVIDIA don't really get much out
          • Ok I'm building a new network card with uber hardware protocol assist.

            I've invested $30Mill on this thing. I release the source code. What do I get? Well some people might find bug fixes, but at $10 Mill per spin of the masks, I'm not going to get very rapid development cycles. The kid in the backroom that is good at re-compiling the kernel isn't going to be able to do the same thing for this chip as he can in the Kernel.

            Meanwhile my competitor and all my partners now know every trick I have and can take t
          • If Linus and the rest of the kernel developers decide at some point to provide an ABI that proprietary companies can use to build their drivers, all the while clinging to their dated business methodologies and obsession with "IP", then great, that's their choice.

            Here's what you guys are overlooking. There is a stable kernel ABI. It's called "RedHat Enterprise". Look at high-end storage drivers and the like and they only come in binary and only "certified" for certain distributions. If you want stability, yo
        • Point 1 - Your post contradicts its own supposed respect of the GPL.
          Point 2 - Linux is FSF free, share and share alike by license. BSD is not. You can't generalize them together on this issue. If you don't get the difference, you don't know what the hell you're talking about.
          Point 3 - The operating system doesn't _have_ to do shit. If companies want their shit to run in Linux, they should submit GPL'd drivers or suffer their rightful hell for being miserly with their code in a project based on sharing.
      • I have read this piece before, and while I think it is very good, it and I both agree that a "binary interface" is a bad idea. I am not suggesting that at all. I am suggesting that, as part of the kernel, define a stable API.

        Look at the current APIs, augment or "bless them."
        Don't access structures, use macros.
        Bless tried and true interfaces, and make damn sure no one changes them without keeping backward compatibility.
        Assign temporary status to "experimental" interfaces.

        Maybe create a synthetic API layer an
    • As a software developer whose experience goes back more than 40 years, to the Stanford Time-Sharing System on the DEC PDP-1, I can assure you that the only way to keep the kernel API from changing is to kill the project. Just as you wouldn't expect a driver written for Microsoft's MS-DOS to be effective on a modern NUMA machine, you shouldn't expect any driver interface standardized today to be effective 10 or 20 years from now. An attempt to freeze the driver API would hamstring the kernel developers, making the kernel less interesting to work on. Somebody would fork it, to lift the compatibility restriction, and the new kernel would work much better with modern computers, causing everyone to migrate to it.

      The only way to keep Linux relevant is to let it evolve. Yes, that creates a burden on driver writers. Linux has a partial solution: keep your drivers in the kernel source tree, and test each kernel to be sure your driver still works. When it breaks the cause should be obvious, and easily fixed. If you are lucky, the person who changed the API will also update your driver, but you can't count on that, which is why you must test.

      • I can assure you that the only way to keep the kernel API from changing is to kill the project.

        You don't have to stop the API changing, you just have to stop it changing all of the time. Doing that also gives you the added benefit that third-party vendors don't keep pulling their hair out because the kernel API keeps changing, so they may be more inclined to actually release drivers in the first place.
      • by MROD ( 101561 ) on Tuesday May 09, 2006 @09:01AM (#15292321) Homepage
        I disagree.. mostly.

        There needs to be a stable API for drivers PER MAJOR RELEASE so that the driver maintainers can keep stable, well tested and debugged drivers.

        The API should be allowed to change with every major kernel revision but any change should be made with a great deal of thought and, unless it's very difficult to do, the old API should be supported for backward compatibility.

        Not only this, but I would argue that it would be good hygiene to separate the core kernel from the drivers. Doing this would make developers think hard about the boundaries between the two and not have one polluting the other. It would also make the developers think long and hard about whether changing the API for something is such a good idea just because it would be useful for the "ACME USB SLi Graphics card programming port widget" interface.

        The kernel is the kernel; the drivers are merely plug-ins to virtualise the hardware. The two should be as separate and distinct as they are logically.
        • I am with you except for supporting the old API. When a new major release comes out it should not be backward compatible at the driver level. If it is we will just end up with a bunch of unmaintained drivers sitting about, which will lead to problems. The other thing is all that backward compatibility would add tons of cruft to the driver layer, which would eventually just slow down development. People can wait for the drivers they need to be ported to the next major version before they upgrade.
          • Doesn't ALSA effectively do this? I thought ALSA was a kernel module that provides an API for other kernel modules. That way, when the kernel API changes, the developers only need to modify ALSA, not every driver. Can't they just make something like that for each type of driver?
      • As a software developer whose experience goes back more than 40 years, to the Stanford Time-Sharing System on the DEC PDP-1, I can assure you that the only way to keep the kernel API from changing is to kill the project.

        I refer you to Solaris. Still very much alive, and backward compatible as far back as version 5.

    • The problem is that the drivers have to remain in constant flux because the kernel API is always changing

      Well, this is just a consequence of what people are discussing. In the current 2.6 model you're allowed to merge new features (i.e. things that break the kernel API). What you're proposing is to go back to the stable/unstable development model.

      But I don't think the "changing kernel API" is the source of problems - if a driver doesn't work, it's because it doesn't have a good maintainer. Releases take aroun
    • Thank you.

      I would like to modify this slightly. I don't think a single DDI (device driver interface) will work, but several DDIs can be defined:

      A low level SCSI DDI
      A low level audio DDI
      A low level network DDI

      and maybe others. Factor the drivers, and extract common parts into the appropriate DDI.

      Now, a vendor would write to that DDI, and the Linux team would have to promise that the defined DDI would have a lifespan of (?, but as long as possible). Any drivers needing a custom kernel interface would be plant
      • Defining, documenting, maintaining and verifying consistency of those DDIs would be a lot of work, taking time away from other tasks and constraining the pace and direction of development.

        Where's the benefit of all that effort? Enabling closed-source drivers that destabilize the kernel in difficult-to-debug ways? Creating a situation where more and more users are running kernels the kernel developers refuse to debug?

        I don't see why it would make any sense at all to put all of that effort into creating

        • I didn't say a thing about open vs. closed source drivers. Indeed, I think that that open source is the way to go.

          The purpose of building the DDI is to allow the drivers to be abstracted away from the kernel. This already occurs, but is not an official policy. If the drivers are so abstracted, kernel development can push forward at an accelerated rate.

          The reason is simple -- instead of having to check hundreds of drivers against a change, the interface can be checked. As long as the drivers all conform to t


    • I fully agree with this, and I see this as one of the biggest barriers to enterprise adoption.

      Vendors such as Red Hat and Novell attempt to give the client this - in fact they both guaranteed no API / ABI changes within a version (e.g. RHEL 3). This was difficult but feasible under the old model, but from what I can see it is impossible with the new development model.

      They either have to stick with one specific kernel version for the entire lifetime of the product, backporting the things they need from new k
    • I used to think so at first, but that's not going to happen, because it is unworkable. It is unworkable because Linux supports more and more devices over time. It takes a while for the common API to percolate up.

      If we could "freeze" what devices the kernel supported, then I would agree that 'stabilizing' the kernel API would be a great feature, for each of the 2.x series independently. But that is just not going to happen when the _hardware_ itself changes, i.e. new buses (PCI Express), etc.

      In fact, there
  • "Kernel developers will need to reapportion their time and spend more time fixing bugs. We may possibly have a bug-fix-only kernel cycle, which is purely for fixing up long-standing bugs."

    Sounds like this approach will benefit everyone in the long run, instead of constantly playing catch-up later?
  • by greppling ( 601175 ) on Tuesday May 09, 2006 @08:34AM (#15292199)
    Davej's take can be found in a post [livejournal.com] in his LiveJournal. He reports that with every new kernel release, the number of kernel-related bug reports in the Fedora bugzilla goes up substantially.

    (Davej is a long time kernel hacker and currently the Fedora kernel maintainer.)

  • Good ol' Pat... (Score:3, Interesting)

    by zenmojodaddy ( 754377 ) on Tuesday May 09, 2006 @08:53AM (#15292286)
    Does this mean that everyone who has complained or criticised Slackware for sticking with the 2.4.* kernel for the sake of stability will apologise?

    Not holding my breath or anything, but it might be nice.
    • 2.6.16 has had a lot of bug fixes [from 2.6.16.5 to the current version is pretty much fixes].

      I've been running 2.6 since 2.6.10 [or so] without any significant problems or stability issues. x86_64 support was better initially with 2.6 than 2.4 as well.

      Perhaps they should spend a few months fixing bugs but I wouldn't favour 2.4 over 2.6 any day.

      Tom
  • A few comments have flown already, but let's be sane here and examine microkernelism.

    File system crashes. Microkernel is going to panic because there's no way in hell it can guarantee consistency with running processes now; the FS driver might log-replay or FSCK, but all open file handles become invalid (this can be reduced though...). A monolith is also going to panic; the driver may be in a kernel thread and get flushed and re-initialized, but same problem with file handles.

    IDE driver crashes.

  • I prefer the current development model over the old one; upgrading from 2.4 to 2.6 was much more difficult than it should have been, mostly due to the learning curve though, I think.

    However the current development model does "seem" to have introduced more instability into the kernel than I remember. (My box at home seems to crash every 17 days like clockwork.) Even with the 2.6.x.y releases, they are only maintained for a couple of .x releases, so even those don't get a chance to stabilize much. I haven't NOTICE
