
Time for a Linux Bug-Fixing Cycle

ScuttleMonkey posted more than 8 years ago | from the breath-of-fresh-air dept.


AlanS2002 writes "As reported here on Slashdot last week, there are some people who are concerned that the Linux kernel is slowly getting buggier under the new development cycle. Now, according to Linux.com (also owned by VA), Linus Torvalds has thrown in his two cents, saying that while there are some concerns, it is not as bad as some might have thought from the various reports. However, he says the 2.6 kernel could probably do with a breather to let people calm down a bit."


236 comments

fp (-1, Offtopic)

Anonymous Coward | more than 8 years ago | (#15291945)

fp

I preferred the old odd/even split (5, Insightful)

WillerZ (814133) | more than 8 years ago | (#15291952)

As a user, I preferred the old odd/even unstable/stable code split; I'd run .even at work and .odd at home.

I suppose if you buy your linux off the shelf you can complain to your vendor, but for home users looking to do some DIY kernel building the new way is a bit worse. However, I suspect we're a dying breed...

Re:I preferred the old odd/even split (2, Insightful)

lostlogic (831646) | more than 8 years ago | (#15291963)

The current system facilitates this as well -- I run 2.6.anything.somethinghigh on my servers and 2.6.anything at home, and it works quite well. The -stable team is really providing an excellent service with their work beyond the third dot, and they let the mainline kernel move at a quicker pace than the alternating odd/even system did.

Re:I preferred the old odd/even split (3, Insightful)

Skuto (171945) | more than 8 years ago | (#15292103)

2.6.16 fixed a critical vulnerability that was in 2.6.15. It also breaks several network drivers.

There was a time when you could grab the next stable kernel, for example when there was an exploit and you really had to, and you'd know you'd only get *more* stability. Now it's exactly the opposite. If you have to upgrade, you're just screwed.

This started around the time they added reiserfs to the stable series even though it was far from stable itself. It's not new in the 2.6 series, really. It's the wrong philosophy.

Compare this to FreeBSD release engineering with RELENG, STABLE and CURRENT. FreeLinux anyone? :-P

Re:I preferred the old odd/even split (1)

Brunellus (875635) | more than 8 years ago | (#15292200)

I know we're talking about the kernel, but isn't this sort of the same thing as Debian's Stable/Testing/Unstable versioning scheme?

Re:I preferred the old odd/even split (1)

Skuto (171945) | more than 8 years ago | (#15292231)

Yes, it's pretty much the same thing. You can pick a specific RELENG version, though, like RELENG_6_0, which puts you on FreeBSD 6.0, and you will still receive all critical vulnerability patches, but no changes to anything else (which prevents regressions as in the Linux example).

Re:I preferred the old odd/even split (4, Insightful)

Rattencremesuppe (784075) | more than 8 years ago | (#15292260)

2.6.16 fixed a critical vulnerability that was in 2.6.15. It also breaks several network drivers.

Stable driver APIs anyone?

Oh wait ... stable driver APIs promote binary drivers ... EVIL EVIL EVIL

Re:I preferred the old odd/even split (0)

Anonymous Coward | more than 8 years ago | (#15292486)

"Oh wait ... stable driver APIs promote binary drivers ... EVIL EVIL EVIL"

A stable ABI *does* promote binary drivers. A stable API promotes a stable platform to develop against. It might favour binary drivers too, but only on the developer's side (an end user would still have to recompile the driver against the new kernel version) and only as a collateral effect, avoidable if really needed/wanted (per the GPL).
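
To make the API-vs-ABI distinction concrete, here is a minimal standalone C sketch. The structure and field names are invented and this is plain userspace code, not actual kernel source; it only illustrates how a source-level declaration can stay fixed while the binary layout shifts underneath it.

/* Illustration only: a hypothetical device structure.  The declaration
 * (the API) reads the same either way, but adding a field changes the
 * offsets (the ABI), so a module compiled against the old layout would
 * poke at the wrong memory and must be rebuilt. */
#include <stdio.h>
#include <stddef.h>

struct fake_netdev {
    int mtu;
#ifdef NEWER_RELEASE          /* pretend a later kernel added this field */
    int hw_features;
#endif
    void *driver_priv;        /* the field a driver actually uses */
};

int main(void)
{
    /* Source using the API compiles unchanged, but the offset below
     * differs between builds -- that is the ABI moving. */
    printf("driver_priv offset: %zu\n",
           offsetof(struct fake_netdev, driver_priv));
    return 0;
}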

Re:I preferred the old odd/even split (2, Informative)

LWATCDR (28044) | more than 8 years ago | (#15292531)

Actually there is supposed to be a stable API for drivers. What you are thinking of is a stable binary interface.
And yes I would like to see both.

Re:I preferred the old odd/even split (1)

thenerdgod (122843) | more than 8 years ago | (#15292417)

Not just that, but 2.6.16 also broke LVM2 [lkml.org] for every distribution that uses the 'stable' lvm2 toolchain. Basically, you can fix your vulnerability, or you can hose your machine every time you try to back up. Good job.

Re:I preferred the old odd/even split (0)

Anonymous Coward | more than 8 years ago | (#15292135)

The current system facilitates this as well -- I run 2.6.anything.somethinghigh on my servers and 2.6.anything at home and it works quite well.

Hmm, aren't the 2.6.x.y releases the development versions and 2.6.x the releases? Whoever thought up this scheme is a fucking idiot. Go back to the even/odd system so at least we don't have to fuck around with unstable buggy kernels. I'm halfway tempted to switch to FreeBSD at this point with all these problems.

Re:I preferred the old odd/even split (5, Informative)

Mr Z (6791) | more than 8 years ago | (#15292156)

No, the 2.6.x.y versions are patch releases of 2.6.x. The development releases are 2.6.x-preY, and the release candidates are 2.6.x-rcY.

Makes sense to me at least.

--Joe

Re:I preferred the old odd/even split (1, Interesting)

Anonymous Coward | more than 8 years ago | (#15291965)

As a user, I preferred the old odd/even unstable/stable code split; I'd run .even at work and .odd at home.

As someone who doesn't really keep up to date with Linux politics, I was wondering: could someone explain to me why this (IMHO) good development model was abandoned in favour of continuous feature-adding in the 2.6 kernel? Was it something Linus wanted to do, or was he pressured into it?

Re:I preferred the old odd/even split (5, Informative)

gowen (141411) | more than 8 years ago | (#15291993)

I was wondering could someone explain to my why this (IMHO) good development model was abandoned in favour of continuous feature-adding in the 2.6 kernel?
It was very, very slow (ironically, even Andrew Morton complained about this). This meant that desirable new features would be backported to the stable branch anyway, either in mainstream or vendor kernels (with all new bugs), which kind of defeated the object.

So it increased the workload, didn't seem to offer massive stability benefits (although, maybe it did, in retrospect), it reduced the amount of testing the new features got, and limited the workloads on which they were tested.

Personally, I find the present -stable branch of non-bleeding edge kernels to be as solid as 2.4 and 2.2 ever were. I do think we've a tendency to look back at that dev-cycle with rose-tinted glasses. It's not as if 2.4 or 2.2 were reasonably bug-free until the twentieth cycle or so.

Re:I preferred the old odd/even split (2, Interesting)

WillerZ (814133) | more than 8 years ago | (#15292040)

The main difference was that if 2.4.x was good for you there was a very good chance that 2.4.(++x) would be good for you as well. Now, however, nothing is off-limits; so that is less true.

(Yes, I recall some times in the 2.4.x era when this wasn't true either.)

Re:I preferred the old odd/even split (2, Interesting)

moro_666 (414422) | more than 8 years ago | (#15292224)

it depended on the machine you had.
my ide/ata interface was broken 3 times in the 2.4.x series ... but at least alan was a good fellow and fixed it quickly with the -ac patches ;)

i started to use linux quite late, on the 2.2 series ... and the 2.4 seemed rather unstable at times.
2.6 ... the dev. model has changed so much that there isn't really a possibility for a comparison here

i miss the -ac series, i miss the stability, and i welcome my new freebsd overlord for now. after all, it's a choice of a tool that lets you do the work. everyone should pick what they like. if you want to be rock stable, look at 2.2; if you want to bleed the edge and the stability out of it, sit on the latest 2.6; if you are tired of all that mess, you can try freebsd as well.

ps. tanenbaum, where is your post about how microkernels would prevent all of this?

Re:I preferred the old odd/even split (1)

richlv (778496) | more than 8 years ago | (#15292114)

It was very, very slow (ironically, even Andrew Morton complained about this). This meant that desirable new features would be backported to the stable branch anyway, either in mainstream or vendor kernels (with all new bugs), which kind of defeated the object.

well, i think that was referring to 2.4 being the latest stable for a looong time.
overall i enjoy 2.4 for some simple production servers where a custom kernel is needed - its life cycle is quite long.

it seems to me that branching 2.7, but trying to reach the next stable faster, could be used as a middle ground - this way we get a stable version and a development version that isn't a development version for several years.

hopefully the new rapid development scheme will pay off with nice features. maybe a "breather" release should be made every five or ten releases? :)

Re:I preferred the old odd/even split (1)

Mr Z (6791) | more than 8 years ago | (#15292258)

it seems to me that branching 2.7, but trying to reach next stable faster could be used as a middle ground

I think that's been tried, or at the very least discussed and ruled out.

--Joe

Re:I preferred the old odd/even split (1)

richlv (778496) | more than 8 years ago | (#15292289)

hmm. haven't read anything about it. any links?

Re:I preferred the old odd/even split (1)

Mr Z (6791) | more than 8 years ago | (#15292456)

I did go looking for links but it's been hard going. There is one reference at Linux Weekly News saying that Linus wanted the 2.3.x dev cycle to be "half the length or less" of the 2.1.x dev cycle. (In reality, 2.3 took 18 months vs. 2.1's 27 months--shorter, but only by a third.)

Then, when 2.5 kicked off, they said they wanted to shorten 2.5.x relative to 2.3.x [com.com], and, well, 2.5.x took 25 months.

BTW, you can look at kernel.org [kernel.org] if you don't believe me on the timeline.

--Joe

Re:I preferred the old odd/even split (1)

Mr Z (6791) | more than 8 years ago | (#15292475)

Oops, botched the LWN link. [lwn.net]

Re:I preferred the old odd/even split (5, Insightful)

diegocgteleline.es (653730) | more than 8 years ago | (#15292165)

The "stable/unstable" development model does not work so well with huge projects like the linux kernel is.

With the old model, the linux kernel would start a unstable release and people would start adding stuff which not the care you'd put into merging something in a stable tree, is not tested a lot, etc...

Now keep this for one, two years. When you decide to release the unstable tree as the next stable version you realize that your unstable tree is full of crap, and you need to waste months or years (Vista) trying to stabilize it. Even when you release the .0 version it's still unstable, so people has to wait even more months to start using it.

The "new" development fixed that. In the current linux development model people is allowed to put new features in the kernel even if they're invasive. But programmers are not allowed to put crap in the kernel, they need to be VERY WELL tested (in the -mm tree) and reviewed, show numbers that back your words if neccesary, document things, etc. Of course no code is free of bugs, so the released version will not be 100% stable as current 2.4 is, but it's QUITE stable.

Because the features are merged progressively, it's MUCH easier to find and fix bugs. Even if there're new features in every release, there're not a LOT of new features - it's much easier to find out what feature broke something between two releases. Compare it with a stable/unstable development model: People keeps adding things for years, when the user switches from 2.4.x to 2.6.0 his kernel doesn't boot. How do you find out who broke that with so many changes?

IMO, from a Q/A POV, the new development model has more sense than a pure stable/unstable development model. It's about "progressive" vs "disruptive", and for projects with several millions of lines and so many contributors it may have sense. Of course, because new things got added there're always some bugs, which is what people is bitching about today. Maybe this could be fixed by leaving the current tree as "stable" and start a new tree - but instead of a "unstable" 2.7 tree, a 2.8 "stable" tree. A pure unstable release doesn't works that well with huge projects like the linux kernel. Remember the hell that FreeBSD 5.x was and how much has affected to the FreeBSD project, remember windows Vista. Maybe it works for some people, but I don't thing it's the best development model for such projects. Solaris is also using this model to some extent - they release things into opensolaris, but what you see in opensolaris is not the "official stable release", it only becomes "stable" after a while.

Re:I preferred the old odd/even split (1)

Skuto (171945) | more than 8 years ago | (#15292290)

>The "stable/unstable" development model does not work so well with huge
>projects like the linux kernel is.

I think FreeBSD is nice proof that you are wrong. See below.

>Remember the hell that FreeBSD 5.x was and how much has affected to the
>FreeBSD project, remember windows Vista.

With FreeBSD 5.x, if you had a working system (be it 4.x), you could choose not to upgrade until the mess was sorted out. Such things are mostly impossible with Linux, because critical fixes are intermingled with new bugs.

The system is harder on the developers, with the benefit of giving the users much more stability and security. I think Microsoft understands this very well.

Re:I preferred the old odd/even split (3, Insightful)

diegocgteleline.es (653730) | more than 8 years ago | (#15292312)

With FreeBSD 5.x, if you had a working system

I'm not saying you couldn't choose a stable FreeBSD version - you can run a 2.4 kernel if you don't like 2.6, as well.

I was talking about development models. 5.x was a disaster, and this is something that even the core FreeBSD developers have accepted (they have changed their development model a bit to avoid repeating the 5.x disaster, you know): too much time, too unstable, too much time to stabilize. 6.1 (which was released today, BTW) is great, sure. That doesn't mean the development model is the best.

Re:I preferred the old odd/even split (0)

Anonymous Coward | more than 8 years ago | (#15291966)

As a user I preferred the old system of publishing dupes and letting the users complain about it.

I think it's a PITA when the dupeness is mentioned in the article itself.

Dying breed? (0)

Anonymous Coward | more than 8 years ago | (#15292115)

That's the entire reason a free kernel exists; some folks really should just stick with Windows. That said, every time I come to upgrade (and I do track releases) there's a bunch of new configuration options that are never going to be relevant to anything I'm going to be involved in. 2.6 has been reasonably stable for me, but there's just too much crud bundled with the mainstream release that ought to be split off into patch sets.

Re:I preferred the old odd/even split (4, Interesting)

s31523 (926314) | more than 8 years ago | (#15292123)

I wouldn't so much say we're a dying breed... Rather, I would say that the number of people who do their own kernel building is growing, but the number of people who just buy a distro, install it, and "hope everything just works" is growing much, much faster. That can be viewed as a good thing, since more people using Linux will cause commercial vendors to take note and support Linux more readily. Although I will miss being that nerdy guy who doesn't run Windows...

Re:I preferred the old odd/even split (1)

towsonu2003 (928663) | more than 8 years ago | (#15292501)

...I suppose if you buy your linux off the shelf you can complain to your vendor...
Otherwise complain to Linus? He'll bite your head off and give it as bait to the kernel. :-)

Linux is BUGGY so it IS about TIME ! (-1, Troll)

Anonymous Coward | more than 8 years ago | (#15291972)



Linux is BUGGY so it IS about TIME ! In 10 years of NT I've had two panics, and one was when my wife was seeing the neighbor too often. In 1 year of Linux I've had a dozen or more, and the wife hasn't been at the neighbors in a long while.

Re:Linux is BUGGY so it IS about TIME ! (0, Offtopic)

mlwmohawk (801821) | more than 8 years ago | (#15292029)

Oh please, this is total BS. The only way you could have a dozen or more panics in a year is if you have a VERY BAD device and driver and have not looked at addressing the problem.

I've been running Linux since 1994, and exclusively since 1996. I have seen two panics, having to do with an old 3Com card and an old Promise IDE driver.

When you get any sort of problem, in Linux, Mac, or whatever, you look for the updated driver and get it, or if there isn't one, troubleshoot the system. Unlike on Windows, crashing on UNIX is not normal, and it is a sign of a problem that won't go away on a reboot.

For the record, I have a couple servers using 2.6.x that have been up for almost a year with no issues. The last reboot was because of a power failure at the colo.

Re:Linux is BUGGY so it IS about TIME ! (1)

tobybuk (633332) | more than 8 years ago | (#15292060)

>> Unlike Windows, crashing on UNIX is not normal

You're as bad as Microsoft! Windows (XP/2003) does not crash unless there is a buggy device driver. If you think it does, state your verifiable, FUD-free source, please. Otherwise take your Windows FUD elsewhere. What's good for the goose...

Re:Linux is BUGGY so it IS about TIME ! (3, Insightful)

TerminaMorte (729622) | more than 8 years ago | (#15292109)

While I'm not sure if he meant it this way, it sounds to me that he's saying that it's not considered terribly odd for Windows to crash; not that Windows constantly crashes.

If a desktop user sees a blue screen of death (device driver, bad hardware, what have you) it's nothing incredibly shocking; we've grown used to it over the years.
 
Linux has certainly crashed on me (mostly when trying out drivers that aren't exactly stable), and when it happens it is a much rarer (and stranger ;)) occurrence.

Certainly you agree that Windows (he didn't specify XP/2003, remember, just Windows in general) is known for problems like that more than Linux is?

Re:Linux is BUGGY so it IS about TIME ! (0)

Skuto (171945) | more than 8 years ago | (#15292125)

>Certainly you agree that Windows (he didn't specify XP/2003, remember, just
>Windows in general) is known for problems like that more than Linux is?

Maybe because Windows is just more well-known in general?

Re:Linux is BUGGY so it IS about TIME ! (1)

Silmeria (972282) | more than 8 years ago | (#15292137)

1. Start regedit.
2. Go to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\i8042prt\Parameters.
3. Create a new DWORD value and name it CrashOnCtrlScroll.
4. Right-click on this newly created value and click "Modify".
5. Enter 1 in the Value data field and click OK.
6. Close regedit and reboot the computer.
7. Hold the right CTRL key and press "Scroll Lock" twice to see the BSOD on demand.

Re:Linux is BUGGY so it IS about TIME ! (-1, Redundant)

Anonymous Coward | more than 8 years ago | (#15292141)

...is good for the gander.

You're a Microsoft shill! Windows (XP/2003) does crash all the time because all the device drivers are buggy. If you think it doesn't, state your verifiable, FUD-free source, please. Otherwise take your Windows FUD elsewhere.

Re:Linux is BUGGY so it IS about TIME ! (1)

mlwmohawk (801821) | more than 8 years ago | (#15292340)

OK, here's the difference between Windows (DOS and NT) coding and UNIX coding, and you may feel free to check out the various Windows DDK examples and the Linux drivers to verify.

In UNIX, a driver does not nuke the system unless there is no alternative, and a great deal of effort is made to make sure this doesn't happen. In Windows land, the DDK examples for both DOS-based Windows and NT are encouraged, upon encountering a problem, to issue a stop (similar to a kernel panic) rather than simply fail to operate.

This means that a printer driver, in kernel space, will kill the system rather than fail to print. So with Windows you will lose work, while with Linux you will be able to save your work.

This isn't to say Linux is perfect; it does have bugs, as all software does, but it has fewer and is more tolerant of them.
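
A rough sketch of the contrast being described, in plain userspace C (the printer device and function names are made up; real drivers obviously look nothing like this), is the difference between reporting an error code and treating the failure as fatal for the whole machine:

/* Sketch only: "fail the operation" vs. "stop the machine".
 * fake_printer_probe() is an invented stand-in for a driver entry point. */
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>

static int device_present = 0;     /* pretend the hardware probe failed */

/* Unix-style: return an error and let the rest of the system carry on. */
static int fake_printer_probe(void)
{
    if (!device_present)
        return -ENODEV;            /* "no such device"; the caller decides */
    return 0;
}

/* The behaviour attributed above to old DDK samples: treat any driver
 * problem as fatal. */
void fake_printer_probe_or_die(void)
{
    if (!device_present) {
        fprintf(stderr, "fatal: printer not found, stopping system\n");
        abort();                   /* roughly analogous to a bugcheck/panic */
    }
}

int main(void)
{
    if (fake_printer_probe() < 0)
        printf("printing unavailable, but your work is safe\n");
    /* Calling fake_printer_probe_or_die() instead would take everything down. */
    return 0;
}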

Re:Linux is BUGGY so it IS about TIME ! (1)

Skuto (171945) | more than 8 years ago | (#15292398)

>In UNIX a driver does not nuke the system unless there are no alternatives,
>and a great deal of effort is made to make sure this doesn't happen. In
>Windows land all the DDK examples in both DOS Windows and NT, upon
>encountering a problem, they are encouraged to issue a stop. (similar to a
>kernel panic) rather than fail to operate.

Uhm, it's not as if the Linux kernel won't PANIC (or was it BUG? there are multiple, IIRC) if there's a problem. Your statement has no basis in reality, pure FUD.

Re:Linux is BUGGY so it IS about TIME ! (1)

mlwmohawk (801821) | more than 8 years ago | (#15292472)

If you read carefully and verify this yourself, it is not untrue. It may cause FUD about Windows, but unlike Microsoft's "Get The Facts" nonsense, this FUD is justified.

FUD -- Fear, Uncertainty, and Doubt -- like paranoia, is sometimes the correct response to the facts. Like Kissinger said, "Just because you're paranoid doesn't mean they're not out to get you." In this case, just because it causes FUD doesn't mean it's a lie.

Re:Linux is BUGGY so it IS about TIME ! (1)

Architect_sasyr (938685) | more than 8 years ago | (#15292446)

I hate to say it mate, but BULLSHIT. I work with Windows 2003 Terminal Servers. Let me tell you, those things crash for no apparent reason. The device drivers are good; I can confirm that by looking at the manufacturers of the hardware alone.

Verifiable information: Sure, the passwords to access my VPN are... fuck off. No one on /. is that stupid. The proof I give you is my word. Deal with that.

M$ ISA Server = crashes from 200+ users; we reboot it maybe 8 times per week. Linux proxy (Squid) = 300+ users (we grew), 88 days of uptime and counting.

Meanwhile, we need to schedule reboots on a domain controller that is DEFAULT. If it didn't come with Server 2k3 Ent. it isn't on there. This thing will lock up, and randomly crash/reboot if we don't kick it once a week.

I'd love to give you logs, but unfortunately I practice some form of security through obscurity...

Re:Linux is BUGGY so it IS about TIME ! (0)

Anonymous Coward | more than 8 years ago | (#15292616)

Why are you so sure your drivers are good? Usually that is the cause of most crashes. I run 8 ISA servers with just over 10,000 concurrent users, and our uptime is counted in months, not days. So you either have dodgy hardware or drivers, or simply incompetent admins.

Re:Linux is BUGGY so it IS about TIME ! (0)

Anonymous Coward | more than 8 years ago | (#15292037)

and the wife hasn't been at the neighbors in a long while.

Is it because your 'panics' have scared them all away? Or is it that mobile tech support is doing her 'house calls' now?

question (1)

mapkinase (958129) | more than 8 years ago | (#15291977)

I thought since it is open source the bug level should be more or less constant. More bugs, more people willing to fix the bug, more fix submissions - problem solved?

I understand that the kernel code grows - that is natural for any software - so does it mean that the open source community now lacks the developers to handle the existing corpus of open source code?

Re:question (2, Insightful)

tomstdenis (446163) | more than 8 years ago | (#15292004)

Part of the problem is experience. Projects like GCC and the kernel suffer from two problems.

1. The underlying technology is non-trivial

2. The implementation often is dirty, quick and without consistent method.

In the case of #1, the technology simply is not trivial. How many people understand paging?

In the case of #2, the code in many cases lacks comments, uses cryptic variables, and the documentation [even doxygen-style comments] is just not there.

Those two issues fight against anyone willing to throw in a weekend to help out.

Tom

Re:question (4, Insightful)

zootm (850416) | more than 8 years ago | (#15292028)

As the previous article pointed out, there's no lack of developers, just a lack of developer interest in fixing the bugs. Many of the larger contributors are paid by companies to ensure that specific features are put into (or at least developed for) the kernel. And let's face it: bug-fixing is not fun. Regardless of how hard-working the people are on average, bug-fixing is generally the sort of thing that people shy away from unless the bugs directly affect them, especially when working voluntarily.

All large systems have a danger of bugs creeping in over time, and it can be easy to let their numbers get out of control as time goes on. The fact that the people in charge are pointing it out now is basically an example of good management -- attempting to address a concern before it becomes more serious.

Re:question (1)

WillerZ (814133) | more than 8 years ago | (#15292051)

bug-fixing is not fun


Some people find bug-fixing fun. Well, not the fixing so much as the hunt. You need to stalk your bug through the dense undergrowth of your source tree until you find its lair. Then you excise it with surgical precision.

At its best, bug-stalking can be more fun than development.

Re:question (1)

tadmas (770287) | more than 8 years ago | (#15292122)

bug-fixing is not fun
Some people find bug-fixing fun. Well, not the fixing so much as the hunt.

Yeah, some people do -- I do most of the time. But in my experience it's the exception, not the rule. In general, I've found that people like fixing easy bugs, but the really bad heisenbugs and deep-rooted design bugs are a PITA. And these are a lot more common as the size of the codebase increases.

Re:question (1)

zootm (850416) | more than 8 years ago | (#15292179)

Yeah, but as the sibling post points out, this only applies to some bugs. With many bugs it's a long, infuriating process to find and fix them, and a lot of people just don't have the patience to do this.

Re:question (1)

tadmas (770287) | more than 8 years ago | (#15292084)

I thought since it is open source the bug level should be more or less constant. More bugs, more people willing to fix the bug, more fix submissions - problem solved?

This is a common misconception. In my experience, developers rarely want to fix bugs; it's often tedious to track down what is causing the bug, and the fixes can have a ripple effect where you end up creating yet more problems that you have to fix. It's much more fun to write some kewl new feature. Would you do something boring and tedious without getting paid? :)

Proponents of open source often claim that more eyes == fewer bugs, and this can be true for really obvious problems, but with respect to deep-rooted bugs I would expect it to be about the same. Proprietary vendors don't see it as worth the money to fix every single bug, especially the rare ones -- you don't get much return on investment. Open source generally won't fix everything either, since it's tedious.

I work for a proprietary software company, and I don't look over everybody else's code except when I'm working on a particular module. Why would this be any different in open source? How many open source developers out there actively audit other people's code?

In general, bugginess is more a function of the quality of the developers (and the pace of development) rather than whether the project is open source.

Re:question (2, Informative)

CastrTroy (595695) | more than 8 years ago | (#15292314)

It's not necessarily the point that everyone is looking at the code; it's the fact that everyone is able to look at the code. How many times have you encountered a bug in MS software, or any closed software app, and wished you could fix it yourself? Think of how many developers use Windows on a daily basis. I'm sure that if they had access, most of the bugs would be fixed by now, or at least it wouldn't be as bad as it is. There's only a small percentage of developers who use open source software. Out of my graduating class of around 50(?), I think that maybe 5-10 of us knew about Linux, and maybe 5 of us used it on a regular basis. I know one guy who does open source programming. But it's not low-level kernel stuff, just user apps. I think that as Linux starts being used by more large organizations, there will be many more people who are given the time to fix bugs. Just because it isn't your code doesn't mean that it isn't your job to fix it. If a bug is plaguing your job with problems, and you have the power to fix the bug, most likely you will.

How "more eyes == fewer bugs" works (1)

mangu (126918) | more than 8 years ago | (#15292524)

bugginess is more a function of the quality of the developers (and the pace of development) rather than whether the project is open source.


It's obvious that better developers should generate fewer bugs, but I think you don't get the point.


Open source has fewer bugs regardless of overall developer quality, because it's the quality of the best developer who has access to the code that matters. It doesn't matter if 99.999% of the people who have access to the code are ignorant, or unwilling to debug it. It's enough that *one* competent developer gets motivated to fix that bug. The more people who have access to the code, the greater the probability that among them there will be one competent and motivated programmer.

Re:How "more eyes == fewer bugs" works (1)

Skuto (171945) | more than 8 years ago | (#15292600)

>Open source has fewer bugs regardless of overall developer quality, because
>it's the quality of the best developer who has access to the code that
>matters. It doesn't matter if 99.999% of the people who have access to the
>code are ignorant, or unwilling to debug it. It's enough that *one* competent
>developer gets motivated to fix that bug. The more people who have access to
>the code, the greater the probability that among them there will be one
>competent and motivated programmer.

Which in the real world never ever happens, because there is far more crap open source code coming out than there are good programmers to look at it.

Proof: Just look at 99% of the projects on sf.net.

Re:question (4, Interesting)

Vo0k (760020) | more than 8 years ago | (#15292095)

yep, the previous poster is right.

I thought I knew C until I tried to fix a bug in the kernel.

It was a simple syntax bug. Somebody put xxx[...]->yyy instead of xxx->yyy[...] in one line, and the compiler was protesting about a type mismatch. One single line. But it took me 4 hours or so, and all I figured out was what the correct syntax for that piece of code should be, by analysing the types of the variables used. I have no idea if the fix really corrected the problem; it just made the line lexically correct and let the compiler go on. In the meantime I had to crawl through about 4 levels of header files for each of the variables/records used in the line to reach the primitive types of the variables and macros from which the structures, pointers, etc. were derived, and I was generally totally dizzy. And I was doing it code-monkey style: I didn't really understand the workings of the kernel or what the line I edited meant. I was purely checking that a pointer to float wasn't being directly assigned a float value, just a pointer to it, etc.

Kernel is too difficult for us average coders. Only the elite can fix these bugs for us.
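
For anyone trying to picture the kind of mix-up described above, here is a tiny invented example (the names xxx and yyy are just placeholders, nothing to do with the actual kernel code in question): with an array of pointers to structs, xxx[i]->yyy is the valid spelling, while xxx->yyy[i] only type-checks under a different declaration.

/* Invented illustration of the xxx[...]->yyy vs. xxx->yyy[...] confusion. */
#include <stdio.h>

struct item {
    int yyy;                            /* one value per item */
};

int main(void)
{
    struct item a = { 1 }, b = { 2 };
    struct item *xxx[2] = { &a, &b };   /* array of pointers to struct */

    /* Correct for this declaration: index the array, then dereference. */
    printf("%d\n", xxx[1]->yyy);

    /* xxx->yyy[1] would not compile here: xxx decays to a pointer to a
     * pointer, so '->' has nothing sensible to dereference into, and yyy
     * is not an array either -- the kind of type mismatch the compiler
     * was complaining about. */
    return 0;
}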

Re:question (4, Interesting)

bhima (46039) | more than 8 years ago | (#15292189)

This is nearly the same as my own experience... which makes me enjoy using, in my case, OpenBSD. I use C professionally, but what I work on is an order of magnitude (or two) less complex than the Linux kernel. It's just amazing to me that it all comes together despite how many people are working on it.

Back to the point: what can spending some time on a bug-fixing cycle hurt? I don't see a downside...

Re:question (1, Insightful)

Anonymous Coward | more than 8 years ago | (#15292315)

floats in the kernel.....you must be mad!!!

Re:question (1)

Vo0k (760020) | more than 8 years ago | (#15292591)

'kay, there were no floats. I was just describing what I did: the compiler says type mismatch, so I check whether all the types match.

Re:question (1)

Emil Brink (69213) | more than 8 years ago | (#15292454)

Ick, sounds painful. But ... floats in the kernel? I thought that was forbidden, but perhaps that rule has been relaxed?

Standardize the Kernel API!! (5, Interesting)

mlwmohawk (801821) | more than 8 years ago | (#15291982)

I have been using Linux since the early 1990s and I've been a software developer for almost 30 years. There is one thing that concerns me, and I think this recent indictment is just a symptom of that larger problem.

The problem is that the drivers have to remain in constant flux because the kernel API is always changing. Now, when there are a limited number of drivers, this means that you can move quickly on the kernel. As you add more and more drivers, you add more and more work to keep the drivers updated. Eventually, there is more work needed to update the drivers than modify the kernel, and the drivers become your sticking point.

This is where I believe Linux is stuck. Linus and the kernel team have to look at the various kernel APIs and standardize them with the next release.

Sorry guys, time to grow up. Linux *is* mainstream!

Re:Standardize the Kernel API!! (0, Informative)

Anonymous Coward | more than 8 years ago | (#15292033)

HI2U JEFF MURKEY WE KNOW IT'S YOU!!!

Seriously, for fuck's sake, the stable API argument has been hashed to death on linux-kernel at least twice in the past three years. The primary drivers for these arguments are almost always people who want to make GPL-incompatible (usually closed-source) kernel modules. Pretending it's about stability of the mainline kernel is even more dishonest than the usual arguments in favor of out-of-tree modules.

Re:Standardize the Kernel API!! (1)

Whiney Mac Fanboy (963289) | more than 8 years ago | (#15292053)

The problem is that the drivers have to remain in constant flux because the kernel API is always changing. Now, when there are a limited number of drivers, this means that you can move quickly on the kernel. As you add more and more drivers, you add more and more work to keep the drivers updated. Eventually, there is more work needed to update the drivers than modify the kernel, and the drivers become your sticking point.

No - the kernel API (whilst not set in stone) is quite stable and doesn't change often. The kernel ABI, on the other hand... well, that changes a lot.

But really, that only affects closed-source kernel modules - and why should the Linux kernel team care about people who want to leverage the Linux kernel without contributing their source code back?

Sorry guys, time to grow up. Linux *is* mainstream!

Time to grow up - pay for a kernel if you want it the way *you* want it.

Re:Standardize the Kernel API!! (4, Insightful)

cortana (588495) | more than 8 years ago | (#15292055)

Re:Standardize the Kernel API!! (2, Insightful)

xtracto (837672) | more than 8 years ago | (#15292204)

So, if you have a Linux kernel driver that is not in the main kernel tree, what are you, a developer, supposed to do? Releasing a binary driver for every different kernel version for every distribution is a nightmare, and trying to keep up with an ever changing kernel interface is also a rough job.

Simple, get your kernel driver into the main kernel tree (remember we are talking about GPL released drivers here, if your code doesn't fall under this category, good luck, you are on your own here, you leech)


No, this sucks. I respect the GPL and other open source licenses (BSD) as well as closed source licenses. If nVidia or ATI or any other hardware manufacturer does not want to license their software as GPL, it is their decision. The operating system MUST provide a standardized API.

Whoever agrees with this does not have the right to whine that company X or Y does not provide drivers and support for Linux. It is a design flaw, IMNSHO.

Re:Standardize the Kernel API!! (1)

Nicolas MONNET (4727) | more than 8 years ago | (#15292284)

No, this sucks. I respect the GPL and other open source licenses (BSD) as well as closed source licenses. If nVidia or ATI or any other hardware manufacturer does not want to license their software as GPL, it is their decision. The operating system MUST provide a standardized API.
If you really respected the GPL so much, you'd have read it. Binary kernel modules are forbidden by a strict interpretation of the GPL; kernel developers have merely tolerated them. Notice the warning in dmesg when you insert nvidia.ko. (dmesg|grep taint)

Re:Standardize the Kernel API!! (2, Insightful)

IAmTheDave (746256) | more than 8 years ago | (#15292302)

No, this sucks, I respect the GPL and other open source licenses (BSD) as well as closed source licenses.

Agreed. Open source is a choice, and choosing not to open-source a driver package does not automatically make a company evil. Most people want Linux to start playing in the same space as Windows (well, at least OS X) in terms of user numbers. This will never happen unless hardware vendors are allowed to create binary drivers for their products.

Look at the video card space - drivers can sometimes mean a 20% boost in performance. Allowing the competition to get a look at those drivers means that you don't have an awful lot of IP left to keep the business profitable.

If anyone ever wants Linux to be more than a hobbyist desktop OS, it will have to allow for the use of binary drivers. It's too late to put it into a hardware lock-in cycle like OS X (which does allow binary drivers) - Linux on the desktop will have to run on commodity hardware, and so for anyone to ever consider it seriously, it will have to play with whatever hardware I want to purchase - and in order to do that, it will have to play nicely with binary drivers.

My two cents (and parent poster's)... but pretty rooted in logic.

Re:Standardize the Kernel API!! (1)

CaptnMArk (9003) | more than 8 years ago | (#15292385)

But I'll take the card without that 20% and open source drivers any day.

Your +20% card will be a -20% card in a year anyway.

Re:Standardize the Kernel API!! (4, Insightful)

just_another_sean (919159) | more than 8 years ago | (#15292332)

The operating system MUST provide a standardized API.

People who code free software MUST not do anything unless they feel like it. Sure some of them might get paid by Company X to develop Driver Y or Application Z but they do so on the shoulders of what's already been put in place by free software developers.

If Linus and the rest of the kernel developers decide at some point to provide an ABI that proprietary companies can use to build their drivers, all the while clinging to their dated business methodologies and obsession with "IP", then great, that's their choice. It might take a Herculean effort to get all those copyright holders to agree and do it but if they can then that's up to them.

Conversely, if they choose not to, they are under no obligation to provide anything. Nobody on the kernel team, IMHO, ever got together and said "we need to start coding and provide some free software so companies with no interest in participating in the process can take our free software and make some money selling hardware". They do it for themselves, their friends and family, their community. Whether or not ATI and NVIDIA want to be members of that community, they are entitled to exactly nothing.

Re:Standardize the Kernel API!! (3, Interesting)

LordOfTheNoobs (949080) | more than 8 years ago | (#15292506)

Point 1 - Your post contradicts its own supposed respect of the GPL.
Point 2 - Linux is FSF-free, share and share alike by license. BSD is not. You can't generalize them together on this issue. If you don't get the difference, you don't know what the hell you're talking about.
Point 3 - The operating system doesn't _have_ to do shit. If the companies want their shit to run in Linux, they should submit GPL'd drivers or suffer their rightful hell for being miserly with their code in a project based on sharing. To hell with them.
Point 4 - There is a fairly standard API. And when they change it, they fix the GPL drivers. There is not an ABI (`application binary interface', since you obviously don't know), which is neither required nor desired, as Linux runs on many different types of hardware. Should we instead suffer to create an ABI for each hardware platform that each driver must uphold? There is more than x86 out there. Hell, even in x86, should we make all drivers conform to a 16-bit driver interface, or create different ABIs for 32 and 64 and the future 256 and 1024 bit systems?
Point 5 - Cry more noob.
Point 6 - If a hardware manufacturer wants to sell their hardware to us, they will either suffer intolerably or they will give in and release some GPL'd code. If coders want it bad enough, someone will reverse engineer it and create free code on their own. It's not like we're going to start using our DVD-Rs to burn off graphics cards. Well, not until my chinese GFX-RW comes in anyway.

Re:Standardize the Kernel API!! (0)

Anonymous Coward | more than 8 years ago | (#15292229)

click [alpage.ath.cx]

Re:Standardize the Kernel API!! (2, Insightful)

mlwmohawk (801821) | more than 8 years ago | (#15292263)

I have read this piece before, and while I think it is very good, both it and I agree that a "binary interface" is a bad idea. I am not suggesting that at all. I am suggesting that we define a stable API as part of the kernel.

Look at the current APIs, augment or "bless them."
Don't access structures directly; use macros (see the sketch below).
Bless tried and true interfaces, and make damn sure no one changes them without keeping backward compatibility.
Assign temporary status to "experimental" interfaces.
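
A minimal sketch of the "use macros/accessors instead of raw structure access" point above (the names are invented; this is standalone userspace C, not a real kernel interface): drivers that go through the accessor keep compiling even when the structure's internals are rearranged.

/* Sketch with invented names: an accessor insulates drivers from
 * layout changes in the underlying structure. */
#include <stdio.h>

struct fake_dev {
    /* Internals are free to change between releases... */
    char name[16];
    int irq;
};

/* ...as long as this accessor (the "blessed" interface) stays stable. */
#define FAKE_DEV_IRQ(d) ((d)->irq)

static void driver_setup(struct fake_dev *d)
{
    /* The driver never touches d->irq directly. */
    printf("requesting irq %d\n", FAKE_DEV_IRQ(d));
}

int main(void)
{
    struct fake_dev d = { "eth0", 11 };
    driver_setup(&d);
    return 0;
}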

Maybe create a synthetic API layer analogous to Windows NDIS sort of thing, where common peripherals can just code to that and be done. That way, the vast majority of simple devices will just come along for the ride.

There are lots of steps that can be taken. At issue is a fact of life people ignore: the strategies and skills used to attain success are not the same as those needed to maintain it and continue succeeding.

To use a marathon metaphor, Linux is no longer sprinting to catch up; we are in the game. As such, we need to recognize and understand that we can't sprint forever. We need to settle down and pace ourselves; this is a long race, and the winner will be the one that plans ahead.

When the Linux kernel was small, changes could be made to the whole source tree easily. As it gets larger and larger, one obscure change in one section of the kernel may not generate an error or even a warning, but may break a driver you didn't even know about. That is exactly what we are seeing.

Linux is no longer a small and simple kernel.

Re:Standardize the Kernel API!! (5, Insightful)

John_Sauter (595980) | more than 8 years ago | (#15292078)

As a software developer whose experience goes back more than 40 years, to the Stanford Time-Sharing System on the DEC PDP-1, I can assure you that the only way to keep the kernel API from changing is to kill the project. Just as you wouldn't expect a driver written for Microsoft's MS-DOS to be effective on a modern NUMA machine, you shouldn't expect any driver interface standardized today to be effective 10 or 20 years from now. An attempt to freeze the driver API would hamstring the kernel developers, making the kernel less interesting to work on. Somebody would fork it, to lift the compatibility restriction, and the new kernel would work much better with modern computers, causing everyone to migrate to it.

The only way to keep Linux relevant is to let it evolve. Yes, that creates a burden on driver writers. Linux has a partial solution: keep your drivers in the kernel source tree, and test each kernel to be sure your driver still works. When it breaks, the cause should be obvious and easily fixed. If you are lucky, the person who changed the API will also update your driver, but you can't count on that, which is why you must test.

Re:Standardize the Kernel API!! (2, Insightful)

oliverthered (187439) | more than 8 years ago | (#15292222)

I can assure you that the only way to keep the kernel API from changing is to kill the project.

You don't have to stop the API changing; you just have to stop it changing all of the time. Doing that also gives you the added benefit that third-party vendors don't keep pulling their hair out because the kernel API keeps changing, so they may be more inclined to actually release drivers in the first place.

Re:Standardize the Kernel API!! (1)

tonigonenstein (912347) | more than 8 years ago | (#15292292)

Just as you wouldn't expect a driver written for Microsoft's MS-DOS to be effective on a modern NUMA machine, you shouldn't expect any driver interface standardized today to be effective 10 or 20 years from now
You are right. It is not realistic to cast the API in stone. But what we could imagine is fixing an API for 2.2, 2.4, 2.6, 2.8, ... and then providing, in every new major release, optional compatibility wrappers for the old APIs (if we can write ndiswrapper, this is certainly possible).
That way either you are committed to working on a driver and update it to the new API, or (if it is older and for less-used hardware) you keep it as it was in the previous version and rest assured it will work.
This removes the burden of fixing old drivers that not many people are interested in, and doesn't force those who are interested to stick with an old kernel that may be unusable because it lacks necessary new features.
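
A toy illustration of the wrapper idea (all interfaces here are invented, and this is userspace C rather than real kernel code): an adapter can present an old-style driver entry point through a new-style interface without touching the old driver at all.

/* Invented interfaces: adapting an "old API" driver callback to a
 * "new API" that passes an extra flags argument. */
#include <stdio.h>

typedef int (*old_read_fn)(int dev_id);                 /* old API */
typedef int (*new_read_fn)(int dev_id, unsigned flags); /* new API */

/* An unmodified driver written against the old API. */
static int legacy_read(int dev_id)
{
    printf("legacy driver reading device %d\n", dev_id);
    return 0;
}

/* Compatibility wrapper: drops what the old API cannot express. */
static old_read_fn wrapped = legacy_read;

static int compat_read(int dev_id, unsigned flags)
{
    (void)flags;                  /* old drivers know nothing about flags */
    return wrapped(dev_id);
}

int main(void)
{
    new_read_fn read_op = compat_read;   /* the kernel would call the new API */
    return read_op(3, 0);
}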

Re:Standardize the Kernel API!! (1)

mlwmohawk (801821) | more than 8 years ago | (#15292293)

OK, nice strawman. Yes, you are right, it has to change yada yada yada.

At issue is the frequency of change. Think about the C file I/O API. It has not changed in almost 30 years. Why not? Because it didn't need to. It was designed up front. Everything under it changed, of course, but it didn't.

It is not impossible to standardize certain APIs and continue to grow. That's the point of an API: it is an "interface," not an "implementation."

Re:Standardize the Kernel API!! (1)

tomstdenis (446163) | more than 8 years ago | (#15292316)

What about loadable module support? What if you want to add a probe detection callback, etc, etc.

There are many reasons to change the module interface format. Many of which include the ability to do things we take for granted today.

What the kernel really lacks is a good standard for coding practices, like, say, adding comments and indenting at least somewhat sensibly [yeah, I know some of you "elites" can handle reading a complete lack of consistent indentation, but for the rest of us ...]

Tom

Re:Standardize the Kernel API!! (3, Informative)

EvilGrin666 (457869) | more than 8 years ago | (#15292400)

What the kernel really lacks is a good standard for coding practices, like, say, adding comments and indenting at least somewhat sensibly [yeah, I know some of you "elites" can handle reading a complete lack of consistent indentation, but for the rest of us ...]

The kernel includes a document detailing the coding style to use. It lives in Documentation/CodingStyle. You can read the current version from Linus' Git tree here [kernel.org]. If you spot anything in the kernel that doesn't follow CodingStyle, you should submit a patch to the kernel janitors to fix it up.

Re:Standardize the Kernel API!! (4, Insightful)

MROD (101561) | more than 8 years ago | (#15292321)

I disagree.. mostly.

There needs to be a stable API for drivers PER MAJOR RELEASE so that the driver maintainers can keep stable, well tested and debugged drivers.

The API should be allowed to change with every major kernel revision, but any change should be made with a great deal of thought and, unless it's very difficult to do, the old API should be supported for backward compatibility.

Not only this, but I would argue that it would be good hygiene to separate the core kernel from the drivers. Doing this would make developers think hard about the boundaries between the two and not have one polluting the other. It would also make the developers think long and hard about whether changing the API for something is such a good idea just because it would be useful for the "ACME USB SLi Graphics card programming port widget" interface.

The kernel is the kernel, and the drivers are merely plug-ins to virtualise the hardware; the two should be as separate and distinct in practice as they are logically.

Re:Standardize the Kernel API!! (1)

DarkOx (621550) | more than 8 years ago | (#15292503)

I am with you except for supporting the old API. When a new major release comes out, it should not be backward compatible at the driver level. If it is, we will just end up with a bunch of unmaintained drivers sitting about, which will lead to problems. The other thing is that all that backward compatibility would add tons of cruft to the driver layer, which would eventually just slow down development. People can wait for the drivers they need to be ported to the next major version before they upgrade.

Re:Standardize the Kernel API!! (1)

diegocgteleline.es (653730) | more than 8 years ago | (#15292206)

The problem is that the drivers have to remain in constant flux because the kernel API is always changing

Well, this is just a consequence of what people are discussing. In the current 2.6 model you're allowed to merge new features (i.e., things that break the kernel API). What you're proposing is going back to the stable/unstable development model.

But I don't think the "changing kernel API" is the source of the problems - if a driver doesn't work, it's because it doesn't have a good maintainer. Releases currently take around 2 months in the Linux kernel; if a driver maintainer can't take care of a driver in a two-month timeframe, then the driver doesn't have a good maintainer. It's not that driver maintainers need to rewrite drivers in every release because the kernel API is unstable, either; and when an "invasive" change is made to introduce a new feature, it's not rare that the guy who breaks things and introduces the new feature is forced to port the drivers to the "new API" to get his changes in. But you can find tons of drivers that have not been touched deeply in more than 6 months. And the ones that have been patched recently have probably been patched because the maintainer was working on something related to the driver, not because he was "trying to fix kernel API breakage".

Re:Standardize the Kernel API!! (2, Interesting)

ratboy666 (104074) | more than 8 years ago | (#15292402)

Thank you.

I would like to modify this slightly. I don't think a single DDI (device driver interface) will work, but several DDIs can be defined:

A low level SCSI DDI
A low level audio DDI
A low level network DDI

and maybe others. Factor the drivers, and extract common parts into the appropriate DDI.

Now, a vendor would write to that DDI, and the Linux team would have to promise that the defined DDI would have a lifespan of (?, but as long as possible). Any drivers needing a custom kernel interface would be planted into the source tree as is now done.

All drivers under the DDI can be checked for conformance. The DDIs would not have to be "officially" introduced until they are ready.

Putting fences like this into the kernel would be (in my opinion) a very good thing.

Ratboy.
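
One way to picture the proposal (everything below is hypothetical: the names, the shape of the table, and the choice of operations are invented for illustration): each DDI would be a fixed table of operations that a vendor driver fills in, and the kernel would promise not to change that table within the DDI's stated lifespan.

/* Hypothetical sketch of a "low level network DDI": a fixed table of
 * operations a vendor driver provides.  Invented names throughout. */
#include <stdio.h>
#include <stddef.h>

struct net_ddi_ops {
    int  (*open)(void *priv);
    int  (*send)(void *priv, const void *frame, size_t len);
    void (*close)(void *priv);
};

/* A vendor driver written against the DDI. */
static int acme_open(void *priv)   { (void)priv; puts("acme: link up");   return 0; }
static void acme_close(void *priv) { (void)priv; puts("acme: link down"); }
static int acme_send(void *priv, const void *frame, size_t len)
{
    (void)priv; (void)frame;
    printf("acme: sent %zu bytes\n", len);
    return 0;
}

static const struct net_ddi_ops acme_ops = {
    .open  = acme_open,
    .send  = acme_send,
    .close = acme_close,
};

int main(void)
{
    char frame[64] = { 0 };
    if (acme_ops.open(NULL) == 0) {
        acme_ops.send(NULL, frame, sizeof frame);
        acme_ops.close(NULL);
    }
    return 0;
}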

Slow! (0)

Anonymous Coward | more than 8 years ago | (#15291996)

It's funny that not so long ago Andrew thought 2.4 development was too slow. That was one of the reasons for the versioning scheme change (the other being getting more people to test new stuff).

I think all this means that the new versioning system is working too well.

Typical monolithic kernel problem (-1, Flamebait)

Anonymous Coward | more than 8 years ago | (#15292009)

Any kernel with upwards of 2.5 million lines of code is going to be incredibly buggy; perhaps it's time to rethink and go back to the microkernel [computer.org]

Re:Typical monolithic kernel problem (0)

Anonymous Coward | more than 8 years ago | (#15292142)

...so that we can have a buggier and slower operating system because writing efficient code is more challenging, due to intrinsic issues like message passing?

Re:Typical monolithic kernel problem (4, Insightful)

tadmas (770287) | more than 8 years ago | (#15292185)

Any kernel with upwards of 2.5 million lines of code is going to be incredibly buggy, perhaps it's time to rethink and go back to the microkernel

Splitting any software into external pieces is exactly the same as splitting the software into internal pieces. Microkernel is not the answer -- encapsulation is the answer.

Besides, converting the kernel will not get rid of the bugs; it will just make different ones. 2.5 million lines is a lot to rewrite, and any rewrite will lose all the bugfixes already in place [joelonsoftware.com].

Re:Typical monolithic kernel problem (1)

Skuto (171945) | more than 8 years ago | (#15292416)

>Microkernel is not the answer -- encapsulation is the answer.

But encapsulation means stable interfaces. Not acceptable for the Linux crowd (GPL, binary drivers, blah blah blah).

Of course you are right though. One main reason why so many things get broken is that all drivers need a rewrite on each interface change.

Re:Typical monolithic kernel problem (1)

Anonymous MadCoe (613739) | more than 8 years ago | (#15292509)

Maybe Minix3 is an option.

Encapsulation is not the answer; that will just add more code, and therefore more bugs (encapsulation needs an interface, etc., too).

I think a microkernel will indeed work better and be more reliable, because it allows a limited group of people to understand all the relevant code of one component.

What you see happening in Linux is exactly Tanenbaum's point for a microkernel. But then again the man is often misunderstood and misquoted.

Re:Typical monolithic kernel problem (3, Insightful)

diegocgteleline.es (653730) | more than 8 years ago | (#15292280)

Any kernel with upwards of 2.5 million lines of code is going to be incredibly buggy

You mean that a microkernel is magically going to implement the same funcionality than linux, with all the thousand of driver, with its support for docens of hardware platforms, in less of 2.5 millions of lines of code?

Sure, a "microkernel" itself doesn't takes a lot of code. But BECAUSE it's a microkernel, drivers, filesystems, networks tacks etc. need to be implemented as servers. Implementing servers that implement the same funcionality than linux has today would take more of 2.5 milliones of lines, for sure. And those servers can have bugs, you know. And hardware bugs exist - it's completely possible (too easy, in fact) to hang your machine by touching the wrong registers no matter if you're using a microkernel or not.

Also, I don't understand why a microkernel would be magically more maintainable than a monolithic kernel. As far as I know, software design is something that doesn't depends in whether you pass messages or not. Sure, a server running in userspace can't take the system down. But that's completely unrelated to modularity and mainteinability. Microkernels were in fact invented because people though that hardware complexity wouldn't allow to continue running monolithic kernels, ignoring the fact that it's perfectly possible to write a mainteinable monolithic kernel with modular design - which is how Linux, Solaris internals etc. are today - just like it's completely possible to write a unmainteinable, non-modular microkernel. It all boils down to software design. And guess what: Current general-purpose monolithic kernels (linux, *BSD, Solaris, NT, Mac OS X - no, a operative system that implement drivers, filesystems and network stacks in kernel space it's not a microkernel) have had a lot of time and resources ($$$) to become mainteinable and modular, extensible, etc.

It's funny how, whenever a monolithic kernel has a bug, it supposedly means microkernels are better, as if the microkernel model magically made coders bug-free, or as if it weren't possible to write a microkernel server with a bad API that forces all driver developers to patch their drivers to fix a security bug. I'd love to hear what development model the Hurd/QNX/whatever guys would use to maintain six million lines of code, be it drivers for a monolithic kernel or drivers implemented as microkernel servers.
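And just to show what "drivers implemented as servers" means in practice, here is a rough sketch under a hypothetical IPC API (msg_receive/msg_reply are invented names, not QNX, Minix or Hurd calls): the driver becomes an ordinary process looping on messages, and every single request pays for at least two trips through the IPC layer.

#include <stdint.h>

struct msg {
    uint32_t op;          /* 1 = read, 2 = write */
    uint64_t sector;
    uint8_t  data[512];
};

/* Assumed IPC primitives provided by the hypothetical microkernel. */
int msg_receive(int port, struct msg *m);
int msg_reply(int port, const struct msg *m);

void disk_server(int port)
{
    struct msg m;
    for (;;) {
        if (msg_receive(port, &m) < 0)   /* first kernel crossing  */
            continue;
        switch (m.op) {
        case 1: /* read: program the hardware, fill m.data */ break;
        case 2: /* write: push m.data out to the device    */ break;
        }
        msg_reply(port, &m);             /* second kernel crossing */
    }
}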

Re:Typical monolithic kernel problem (0)

Anonymous Coward | more than 8 years ago | (#15292313)

Fantastic idea. Why don't you write one? Or somebody. Write a microkernel! This is great. We are waiting. Yawn!

Re:Typical monolithic kernel problem (2, Interesting)

Jessta (666101) | more than 8 years ago | (#15292335)

That will probably be the future, but we are not there yet. The message-passing overhead is still large and makes coding difficult; e.g., HURD is still not finished after 20 years.

Eventually, with multi-core CPUs running stupid numbers of threads, the microkernel will make its comeback.

Extermination cycle (1)

digitaldc (879047) | more than 8 years ago | (#15292017)

"Kernel developers will need to reapportion their time and spend more time fixing bugs. We may possibly have a bug-fix-only kernel cycle, which is purely for fixing up long-standing bugs."

Sounds like this approach would benefit everyone in the long run, instead of constantly playing catch-up later.

lol (-1, Flamebait)

DrSkwid (118965) | more than 8 years ago | (#15292020)

sed -e 's/security/bugs/g' -e 's/Windows/Linux/g' -e 's/innovation/imitation/g'

Lack of standards + arrogance (-1, Troll)

Anonymous Coward | more than 8 years ago | (#15292074)

That is, in my opinion, the most important flaw within the OSS community: the thing at which many commercial approaches excel, but which most people don't wish to hear about because "commercial = bad". It would help to keep an open mind and look and learn before you judge, but I guess that is also something not many of us are capable of.

I can't help wondering how long this comedy has to continue before people finally realize that some things are best adopted instead of mocked. In my opinion this kernel incident is no different. "Homeostasis and Transisstasis": one is the power for change, while the other is the power for keeping things the same. Unrelated psycho mumbo jumbo, I hear you think? Well, what about that marvelous piece of work (you probably won't find a distro without it), BerkeleyDB? It's almost as if every released version is incompatible with the previous one. If you don't believe me, try installing SquidGuard [squidguard.org] with the current BerkeleyDB. Or simply stop and wonder why your distribution keeps several versions of the same product around.

This is but a single program; now what about some strict standards? SuSE tried to introduce a standard with regard to administration (YaST being in control; changes should be done through YaST all the time, even overruling manual changes), making it possibly a very solid basis for your average workstation, like being able to administer and roll out standards through the use of AD. Or what about Java? The so-called "free Java" is also breaking standards. Of course these free tools were needed because you can't distribute Java. Really, is that so? When I look at the license, all I see is that they don't allow you to ship additional software which replaces parts of the environment. So distributing the JRE and Kaffe together wouldn't be allowed. But what is the use of Kaffe if you have the original?

And now the same issue is manifesting itself in kernel development itself. Change after change, and no one seems to care about setting certain standards. Even the well-appreciated previous standard of separation between stable and unstable has been thrown in the bucket for no other reason than "we don't feel like it anymore". And this is exactly the thing which makes OSS unreliable in the eyes of many.

I'm not condemning this per se, since it is but a hobby (although people sure don't like to profile it this way anymore), but I do think people should realize this before they start barking up other people's legs for trying to maintain certain standards, enforced if need be. "Everything should be free"? Maybe that is a noble cause, but throwing smoothly working things into the pools of chaos and cheering that they are now free, while the product itself as it once was is utterly destroyed, isn't always the best way.

So in the case of kernel development I'd say set up some (new?) standards and this time STICK with them, instead of dropping whatever you build simply because you don't feel like it anymore. Only then will it last a whole lot longer and even survive when Linus stops caring about all this. But in any other situation I foresee Linux exploding (splintering into many factions and ideas) due to several people trying to oppose several "standards" because they simply "don't feel like it".

Re:Lack of standards + arrogance (1)

xtracto (837672) | more than 8 years ago | (#15292235)

...standards and this time STICK with them instead of dropping whatever you build simply because you don't feel like it anymore.

The main problem with open source software is that the basis of its development is "because I feel like it"; when someone gets bored of doing it, they just leave it there. If there is something boring that needs to be done, then it won't be done (like documentation).

Some of the wonders of open source ;-)

and yeah I know this is the wrong place to post this :)

Re:Lack of standards + arrogance (0)

Anonymous Coward | more than 8 years ago | (#15292515)

But if you set some standards, there is plenty of room for "I don't feel like it anymore", simply because the next guy can easily pick things up again if he is familiar with the standards. It would seem that the only time certain people care about standards is when a company tries to set up and use an open source license. Of course, then the only thing some people care about is their own standards, "nothing else matters" (ugh, I hated that song).

Dave Jones' take on the story (4, Informative)

greppling (601175) | more than 8 years ago | (#15292199)

can be found in a post [livejournal.com] on his LiveJournal. He reports that with every new kernel release, the number of kernel-related bug reports in the Fedora bugzilla goes up substantially.

(Davej is a long-time kernel hacker and currently the Fedora kernel maintainer.)

Good ol' Pat... (2, Interesting)

zenmojodaddy (754377) | more than 8 years ago | (#15292286)

Does this mean that everyone who has complained about or criticised Slackware for sticking with the 2.4.* kernel for the sake of stability will apologise?

Not holding my breath or anything, but it might be nice.

Re:Good ol' Pat... (1)

tomstdenis (446163) | more than 8 years ago | (#15292300)

2.6.16 has had a lot of bug fixes [everything from 2.6.16.5 to the current version is pretty much fixes].

I've been running 2.6 since 2.6.10 [or so] without any significant problems or stability issues. x86_64 support was better initially with 2.6 than 2.4 as well.

Perhaps they should spend a few months fixing bugs, but I still wouldn't favour 2.4 over 2.6 any day.

Tom

OT shelleytherepublican.com (-1, Offtopic)

PhilHibbs (4537) | more than 8 years ago | (#15292334)

Off-topic yet still Linux related - does anyone know if this [shelleytherepublican.com] is a wind-up or a genuine nutter?

a fix is needed all right (-1, Troll)

Anonymous Coward | more than 8 years ago | (#15292418)

yeah, the fix is called BSD. (open)

*runs*

i will say this (1)

FudRucker (866063) | more than 8 years ago | (#15292450)

I keep Slackware 10.2 with the stock kernel 2.4.31 as my main OS and keep an extra partition for playing with and testing out other Linux distros, and the Slackware install is smoother and more stable than any of the distros I tried with 2.6.xx series kernels...

I am sure the kernel dev people (Linus & co.) will iron any wrinkles out of the 2.6.xx series kernels soon :)

I, for one, welcome our kernel-hacking overlords...

Nothing wrong with this... (1)

ovit (246181) | more than 8 years ago | (#15292594)

In the old days, when the kernel guys were working on an odd-numbered kernel they would try out all kinds of stuff... Then, when they had a big batch of cool stuff, they would go after an even-numbered release and spend all their time solidifying what they had just done... So they used to operate with a built-in (and regular) period that allowed them to focus on bugs...

Since 2.6, they've changed all this. I believe we will see the occasional "bug fix"-only release, as this is really what the developers are used to... and it just works... In the old days, this would be a 2.7 release...

        td