
2.6 Linux Kernel in Need of an Overhaul?

Zonk posted more than 8 years ago | from the get-it-right-this-time dept.


toadlife writes "ZDNet UK reports that Andrew Morton, the head maintainer of the Linux production kernel, is concerned about the number of bugs in the 2.6 kernel. He is considering the possibility of dedicating an entire release cycle to fixing long-standing bugs." From the article: "One problem is that few developers are motivated to work on bugs, according to Morton. This is particularly a problem for bugs that affect old computers or peripherals, as kernel developers working for corporations don't tend to care about out-of-date hardware, he said. Nowadays, many kernel developers are employed by IT companies, such as hardware manufacturers, which can cause problems as they can mainly be motivated by self-interest."


512 comments


Important for the Old Debate (5, Insightful)

eldavojohn (898314) | more than 8 years ago | (#15276440)

A lot of times, the old debate of Windows vs. Linux covers how often the OS fails miserably. Yes, we all know the famous "blue screen of death" and I think that single concept, connected with Windows, makes it unappealing. I believe that Linux has the ability to handle internal errors more elegantly, but that's only because I've only seen it fail from hardware errors. Granted, I don't know enough about the inner workings of Windows or Linux, but let's face it: Win95 & Win98 first editions would crash if you looked at them wrong.

Here's a possible horror story:

While the debate rages on, Linux gets more complex. Linux gains more bugs. Linux begins to aim for more end-user features. Developers get sick of maintaining other developers' code and focus on making new features (asked for or not) because it gives them pride to make something new. The Linux kernel hits the same pitfalls as the Windows kernel.

If it takes an entire development cycle to simply improve the current version's bugs, I'd gladly accept and encourage that.

Re:Important for the Old Debate (3, Interesting)

slashjunkie (800216) | more than 8 years ago | (#15276478)

This is why projects like Debian's kFreeBSD are important - so that if the Linux kernel does run amok completely (or at least degenerate significantly), we can continue using various Linux ~distributions~ - just with a different engine room.

When most people think "Linux", they're usually thinking of all the commonly associated open source software bundled as a distribution. If you can swap out the kernel but keep the same GUI, the same traditional *nix shells and CLI tools they're used to - will people really care that much?

OTOH, maybe it's time Linux looked at going microkernel. Maintaining a monolithic kernel that just keeps getting bigger and bigger can't be an easy task. At least a microkernel would allow a degree of separation between the kernel proper, and the various hardware drivers, networking stacks, filesystems etc.

Re:Important for the Old Debate (4, Funny)

trewornan (608722) | more than 8 years ago | (#15276578)

Hurd is currently due for release shortly after Duke Nukem Forever.

Re:Important for the Old Debate (5, Insightful)

MOtisBeard (693145) | more than 8 years ago | (#15276479)

"If it takes an entire development cycle to simply improve the current version's bugs, I'd gladly accept and encourage that."

Hear, hear. The real pitfall for any technical production process, from software to space shuttles, is the ascendancy of a businesslike concern with the product's image to the point that it begins to dictate release deadlines. It's all well and good to worry about image, but when that worry becomes such a focus that it dictates the way that technical work gets handled, suddenly your product or process has become an example of form over function... and unless your product is tuxedos for corpses or something similar, SCREW form over function!

Good luck with human resource allocation. (2, Insightful)

donscarletti (569232) | more than 8 years ago | (#15276481)

If a free software developer doesn't want to do something, like fix old bugs, they won't. If this is made to be the only way they can contribute, they probably won't contribute. It's better to get some shiny, unasked-for features than nothing at all, in my opinion, even if it's not as good as added stability.

The Ability to Lead? (5, Insightful)

eldavojohn (898314) | more than 8 years ago | (#15276509)

Man, it's crazy but we have this thing where I work. Uh, what do you call those things again?

They are very good at convincing people to do things regardless of what they get out of it ... I think they're called 'leaders.'

If Andrew Morton doesn't have leadership skills, I suggest he step down and let another manager step up.

If I were in his position, I'd get everyone who's even mildly important in a room (or, failing that, an e-mail) and:

"Guys, remember back to the reason you first joined in the contribution to develop a free operating system. Now, think of all the hard work you've put into it and other people have put into it. Now, that's all in jeopardy and here's why..."

Spend some time reasoning with them and pointing out the bugs that are really really hurting the kernel. In the end, wrap up with:

"Look, I know this sucks and you're going to have to tangle with a lot of bugs that aren't even your own. But what have we got if we haven't got a stable operating system? We've got another Windows, that's what. You just don't have to pay for our piece of malware. Just see this one development cycle through. I promise we'll make it as quick and painless as possible, and after all is said and done, we'll have another meeting like this where anyone can suggest any crazy-ass feature they want to add. Once we pick out what we want, we'll spend the next development cycle letting our imaginations run wild. We'll make a kernel so unstable that the user'll have to re-flash their BIOS when it crashes! Then maybe we'll work on solidifying that. Right now, we just owe it to ourselves and our fans to give them something that's 100% stable and reliable."

If you can't reason with them like that, maybe you just have to accept they can't be persuaded and let them do what they want, but prune their work if it detracts from your end goal for the system.

Re:The Ability to Lead? (5, Insightful)

chrismear (535657) | more than 8 years ago | (#15276559)

Man, it's crazy but we have this thing where I work. Uh, what do you call those things again?

Paychecks?

Re:The Ability to Lead? And what else? (2, Insightful)

leonbrooks (8043) | more than 8 years ago | (#15276713)

Mr Morton is a very, very wise and informed gentleman, and "leadership" skills aren't the only useful ones -- in fact, they can easily become crippling handicaps as every rational response is knifed in favour of a justifiable "leadership response", effective or not.

If you see a need for a leadership character, please engage them in addition rather than in place, else Linux will overall lose even if a relative genius is so employed. There is much comment in this post WRT Linux vs Microsoft development models; if y'all want to be as vulnerable and inconsistent as Microsoft products are well renowned to be, then you will need to make many of the same mistakes they are making, and the-leader-wins-all is their primary fubar.

Re:Good luck with human resource allocation. (1)

Bralkein (685733) | more than 8 years ago | (#15276547)

I don't think that's universally true. Maybe for some people, yes, but lots of free software developers are very proud of their creations, and will strive to make their products as bug-free as possible. Also, the kernel development has leadership, and if they say "Look, features are all very good, but we have so many bugs here that we just have to get them fixed before we can add anything else", then people who want to get their feature included as soon as possible will work to get rid of the bugs so addition of features can resume.

Re:Important for the Old Debate (3, Insightful)

Homology (639438) | more than 8 years ago | (#15276483)

If it takes an entire developement cycle to simply improve the current version's bugs, I'd gladly accept and encourage that.

This is just treating the symptoms rather than the cause: bad development with a focus on features and performance instead of quality and correct code.

Re:Important for the Old Debate (5, Informative)

TheRaven64 (641858) | more than 8 years ago | (#15276529)

Linux got me using *NIX. BSD showed me how *NIX is meant to work. I currently use OpenBSD and FreeBSD, and this is exactly the kind of reason why I switched.

In FreeBSD, there are three branches, -STABLE, -CURRENT, and -RELEASE. Any new features are put into -CURRENT. Here, they undergo testing. The only people who should be running -CURRENT are those who are developing or actively bug-hunting. Once a feature is stabilised, it migrates into -STABLE. Here, it receives more general testing. A lot of people use -STABLE, and file bug reports. Finally, a -RELEASE branch is created from -STABLE. This undergoes even more testing and is then shipped (usually after several betas and RCs). The -RELEASE branch is maintained in the tree, but only bug fixes are allowed to go in it. If you want a stable system, you stick with a -RELEASE branch. For a slightly less-critical system, you might want -STABLE for the features (my ThinkPad runs -STABLE, and I have never yet had it crash).

The direction of the OS development is driven by the core team. These are elected annually by the developers.

In the OpenBSD world, there is a code review process. Every piece of code in the base system is audited on a regular basis. When a new category of bug is discovered (e.g. the multiply overflow that caused a security hole in OpenSSH), the entire source tree is searched for occurrences of that bug. These are then fixed.

Both of these development processes give high-quality, stable systems.

Re:Important for the Old Debate (0)

Anonymous Coward | more than 8 years ago | (#15276691)

So, you'd like to see the Debian devs work on the kernel instead of the current situation. I guess we'd get a new, stable kernel every one and a half years.

Re:Important for the Old Debate (0)

Anonymous Coward | more than 8 years ago | (#15276672)

This is just treating the symptoms rather than the cause: bad development with a focus on features and performance instead of quality and correct code.

Wait. Am I to understand you correctly? Let me see here...

Desktop usability drives GUI -> initial Windows offering from DOS.
Plug and Play configurability/hardware manufacturers -> Win95.
Multimedia/Games -> Win98
Server/Workstation offerings -> WinNT/2000
...et cetera (with many market influences to fill in various gaps along the way)

2.6 kernel -> various distributed Server environments

* Dear OSS Ideologue class candidates, welcome to Capitalism/Free Market 101. The Linux children are finally reaching puberty!

Sincerely,

Prof. Aytolh D. Yewso
C.T.O./Chief Security Officer
Microsoft Corporation
One Microsoft Way
Redmond, WA 98052-6399

Re:Important for the Old Debate (4, Interesting)

gmack (197796) | more than 8 years ago | (#15276484)

As someone who is responsible for many of those bug reports, I can tell you it's not the features that break things. It's things like driver API cleanups that don't reach all of the older drivers.

The result is that if you have reasonably common hardware, the kernel is getting much more stable, but for things like my non-PCI SPARC (compile problem with some options) or my 21-port Ethernet firewall (needs special options to boot or it crashes) it has gotten buggier.

I'm not sure a freeze will do much to fix it, as a large part of the problem is that all these somewhat rare things need testing.

I still find these things get fixed rather quickly when I report them even without the freeze.

Re:Important for the Old Debate (5, Interesting)

Rosco P. Coltrane (209368) | more than 8 years ago | (#15276489)

Yes, we all know the famous "blue screen of death" and I think that that single concept connected with Windows makes it unappealing. [...] Win95 & Win98 first editions would crash if you looked at them wrong.

Er.. I hate Windows as much as the next guy, but really, when was the last time you saw Windows bluescreen? Perhaps you could make your point by comparing Windows and Linux versions that aren't 11 years apart.

I believe that Linux has the ability to handle internal errors more elegantly but that's only because I've only seen it fail from hardware errors.

Yes, but it handles hardware errors gracefully too: for example, the hard disk in one of my 24/7 machines died last week. I came back and found that I couldn't write anything to it at all. A quick look at the console showed a message saying "root filesystem, too many errors, remounting read-only" or something like that. The result was that data corruption was minimal *AND* the machine didn't hang. How's that for graceful? You wouldn't dream of having that in Windows.
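That message comes from the filesystem's error policy, which on ext2/ext3 is set with the `errors=` mount option. A sketch of an /etc/fstab entry (the device name is illustrative):

```
# remount the root filesystem read-only when the kernel sees I/O errors
/dev/hda1  /  ext3  defaults,errors=remount-ro  0  1
```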

Only Two Things Are Certain: Death & Win32's B (0, Troll)

eldavojohn (898314) | more than 8 years ago | (#15276522)

Er.. I hate Windows as much as the next guy, but really, when was the last time you saw Windows bluescreen? Perhaps you could make your point by comparing Windows and Linux versions that aren't 11 years apart.
Windows XP. Windows muthaf*ckin' XP Professional is installed on a machine right next to me (I am at work), and the man who uses that machine deals with a BSOD constantly (5-6 times a day). It ain't physical (I've run scandisk, memtest86, various tools, etc.) and it only happens when he logs on (maybe a user profile error?), but neither I nor the sysadmin can track it down.

No one knows what's wrong with that machine. Well, I do--it has a Windows operating system installed on it. Installing Windows is a gamble, it always was and it always will be.

Re:Only Two Things Are Certain: Death & Win32' (1)

jb.hl.com (782137) | more than 8 years ago | (#15276723)

As a counterpoint, the last time I had a BSOD on XP, my hard disk had failed (or at least was in the process of failing).

Anything else?

Re:Important for the Old Debate (0)

Anonymous Coward | more than 8 years ago | (#15276542)

Last Thursday, XPSP2. Broken OS-supplied driver.

Re:Important for the Old Debate (2, Informative)

Anonymous Coward | more than 8 years ago | (#15276546)

The last time I saw windows repeatedly bluescreen due to a software error was Word XP on Windows 2000:

Insert floppy #1 with foo.doc
Open my computer
Double click foo.doc to open it in word.
Remove floppy #1, insert floppy #2.
Press save.

What happens next:
1) Nearly every single time, the original foo.doc file on floppy #1 will be rendered unreadable. (This actually happened the moment they pulled floppy #1 out)
2) The vast majority of the time the computer will BSOD
3) Occasionally, the computer will corrupt floppy #2. I've seen the floppy be reported as unformatted a number of times, and once I've even seen floppy #2 end up with a copy of the directory structure of floppy #1, but not the actual files.

I no longer work in the university labs where the wails of students who have lost all their work echo from the walls on a daily basis, but I suspect that Word 2003 on Windows XP will still kill your Word document, though it may not BSOD or scramble floppy disks. I base this on the fact that Word STILL creates the temporary file in the same directory as the original file (despite C:\windows\temp, or the personal Application Data folder in 2k and up) and will shit all over your Word doc should you open it from a network share that becomes temporarily unavailable.

Incidentally, I have personally reported this as a bug in every version of Word from 4 to 2000. After that, they made the bug-reporting system too difficult to use (read: I wasn't bored enough to bother searching their site for, and filling out, all their forms).

Re:Important for the Old Debate (1)

_tognus (903491) | more than 8 years ago | (#15276611)

Two days ago on XP. A fault caused by Microsoft's own updates made the Windows logon process crash and burn, resulting in a BSOD. I did have a link specifying which updates had to be uninstalled via the Recovery Console, but I've not got it now.

Otherwise, all the BSOD's I've seen on XP are faulty third party drivers.

Re:Important for the Old Debate (1)

ginotech (816751) | more than 8 years ago | (#15276678)

Wow, my friend is having that problem, I think. Do you think you could try to find that link again?

Re:Important for the Old Debate (1)

shish (588640) | more than 8 years ago | (#15276617)

Er.. I hate Windows as much as the next guy, but really, when was the last time you saw Windows bluescreen?

Twice last week, Windows XP, after being installed for a month with only basic desktop software installed (Firefox, MS Office, etc.). After the second one it refused to load at all (it would get to the graphical "starting Windows" screen then silently reboot), requiring a reinstall. Strangely enough, Win98 had been reinstall-free for 2 years before I put XP on (although it did need rebooting at least once per day...). Memtest and scandisk showed no errors. And to think I'd been so insistent that the family switch from 98 to XP because it's so much more stable (and hence requires less family tech support)...

In comparison, my linux server has been up for 4 years (if you ignore power cuts, which I don't expect any OS to deal with...)

Re:Important for the Old Debate (1)

init100 (915886) | more than 8 years ago | (#15276637)

Perhaps you could make your point by comparing Windows and Linux versions that aren't 11 years apart.

Some people do it the other way around [shelleytherepublican.com] . This is a comparison between Windows (Server?) 2003 and Red Hat Linux 3.0.

Be warned though: you're in for some truly hilarious reading, including the claim that Microsoft invented the modern computer, the Internet and the World Wide Web. :)

Re:Important for the Old Debate (1)

jb.hl.com (782137) | more than 8 years ago | (#15276736)

Haha, that article is fantastic troll material...

Saw one recently (0, Offtopic)

asv108 (141455) | more than 8 years ago | (#15276650)

About a month ago this blue screen [alexvalentine.org] appeared on a Windows 2003 Server Appliance Edition NAS box. Attempt to mount an NFS share served by the Windows NAS, and boom: blue screen. This is a commercial NAS box [aberdeeninc.com] running a supposedly ultra-stable version of Windows for such devices. I would have picked the Linux-powered box, but it wasn't my choice.

Re:Important for the Old Debate (2, Informative)

MooUK (905450) | more than 8 years ago | (#15276654)

I've had XP SP2 bluescreen a few times in the past few months. I was never quite sure what caused it.

Re:Important for the Old Debate (1)

bod1988 (925911) | more than 8 years ago | (#15276668)

"Yes but it handles hardware errors gracefully too: for example, one of my 24/7 machines's hard-disk died last week. I came back and found out that I couldn't write anything to it at. A quick look at the console showed a message saying "root filesystem, too many errors, remounting read-only" or something like that. The result is that data corruption was minimal *AND* the machine didn't hang. How's that for graceful? You wouldn't dream of having that in Windows."

Possibly not, but do you see Joe average, or his mother understanding what in the hell that means?

Re:Important for the Old Debate (2, Informative)

Al Al Cool J (234559) | more than 8 years ago | (#15276724)

when was the last time you saw Windows bluescreen

I use Windows once in a blue moon. The last time was last week. New landlady's Win 2K, she turned it on, we waited for it to finish booting, she closed her MS chat client, I clicked on control panel, and it froze. I waited several minutes, gave it the three finger salute, got a blue screen and it was totally hung. Had to hard reset.

Not XP I admit, but it's a pretty sad state of affairs when control panel hangs a freshly booted system.

Why are you discussing Windows? (0)

Anonymous Coward | more than 8 years ago | (#15276502)

Neither the article summary nor the article itself mentioned Windows. The article is about bugs in the Linux kernel, yet you use the first post to launch into a 'debate' about Linux vs. Windows, and state that despite all its failings, Linux still 'fails' better than Windows. If Linux is so much better than Windows, why do you (as in the Slashdot readership) feel the need to discuss and compare Windows at every possible chance, even when the article itself has nothing to do with Windows?

Re:Why are you discussing Windows? (1)

bod1988 (925911) | more than 8 years ago | (#15276562)

This is the case with most of the Linux user base: they seem to have this extreme insecurity and tend to go apeshit when you criticize their software; they also continuously have to keep asserting that Linux is bettar!!1 the windows.

Re:Important for the Old Debate (1)

SigILL (6475) | more than 8 years ago | (#15276505)

If it takes an entire developement cycle to simply improve the current version's bugs, I'd gladly accept and encourage that.

I don't think one mere development cycle is going to be enough. Code improvement is a continuous process. The Linux kernel programmers could (and should) learn a lot from how the OpenBSD team works [openbsd.org] .

I've written a Linux kernel driver in the 2.2 days, and at least back then the kernel source was rather messy (I've heard it's been much improved since then). One problem the Linux kernel has is that subsystems are almost continuously replaced with something new. The old subsystem code is then allowed to rot. Back in the 2.2 days my problem was to find the appropriate way to handle locking, whereas nowadays the problem area is probably the VM.

What would really help the code quality of the Linux kernel is to start refactoring subsystem code and throwing out the old stuff that oughtn't be used anymore. Less code means less space to hide bugs in.

Anyway, that's my €0.02.

Re:Important for the Old Debate (1)

gmack (197796) | more than 8 years ago | (#15276610)

What would really help the code quality of the Linux kernel is to start refactoring subsystem code and throwing out the old stuff that oughtn't be used anymore. Less code means less space to hide bugs in.

Wow, what an insight... you just described exactly what is happening. The problem is that in this refactoring, device drivers develop bugs that need to be fixed, and the kernel developers can't possibly test every driver, so they have to wait for bug reports.

Free time (1, Funny)

Anonymous Coward | more than 8 years ago | (#15276455)

"This is particularly a problem for bugs that affect old computers or peripherals, as kernel developers working for corporations don't tend to care about out-of-date hardware, he said. Nowadays, many kernel developers are employed by IT companies, such as hardware manufacturers, which can cause problems as they can mainly be motivated by self-interest.""

I suggest we have some more unemployed kernel developers to correct this problem.

About time (1)

jrumney (197329) | more than 8 years ago | (#15276457)

It's about time this was recognized by the Linux developers. Every time I've tried to upgrade from 2.4.26 over the past few years, my system has become unstable and I've ended up reverting. Hopefully I'll be able to upgrade at last.

Re:About time (4, Interesting)

Blue Booger (223698) | more than 8 years ago | (#15276496)

Agreed. I have been forced to upgrade to 2.6 on a few computers for features I needed that are only in the 2.6 series, but every time it has been a problem. All of our production machines are still built with 2.4, and we purposely use hardware that is supported by the 2.4 series.

Linux has caused Microsoft to improve their products, and I have found myself removing Linux servers to replace them with Windows 2003 Server of late. On the desktop, it is not even close. I sit next to a guy who runs 2.6 on his Ubuntu machine, and I laugh every time he has to reboot. My Windows XP box only goes down rarely for updates, and it does it at night when I am not there. Last time, I had over 100 days of uptime (this is a desktop machine). I rarely ever see the BSOD anymore, and if I do it is almost always caused by a hardware problem. That is what I *USED* to be able to count on with Linux: if it crashed, there was a hardware issue. Now, with 2.6, I've lost that.

There are coworkers of mine who would have fainted three years ago if they heard me say something like this, but Linux just isn't the lean, reliable operating system it used to be.

Re:About time (2, Interesting)

Homology (639438) | more than 8 years ago | (#15276507)

My Windows XP box only goes down rarely for updates and it does it at night when I am not there. Last time, I had over 100 days of uptime (this is a desktop machine).

You don't do Windows Update very often, do you?

There are coworkers of mine who would have fainted three years ago if they heard me say something like this, but Linux just isn't the lean, reliable operating system it used to be.

Use something that cares more about quality than new features, like the *BSD.

Re:About time (0)

Anonymous Coward | more than 8 years ago | (#15276745)

That whole post is so far from reality, it's hilarious! Rebooting Ubuntu was a nice touch, and the lack of any 'why your coworker needed to reboot Ubuntu' is a nice hint that it is far from reality. Applying updates alone on a Windows machine forces reboots, let alone a lot of applications (poorly designed? Maybe...). Replacing Linux machines with Windows machines? Either you are gonna have to work late, or please post the name of the company you work for. I'm sure a lot of people here would like to send over a resume...

Hmmm... (1)

corychristison (951993) | more than 8 years ago | (#15276458)

On this computer I am running 2.6.5-7.201-default [on SUSE 9.1 Personal, installed nearly 3 years ago] and I don't seem to be experiencing any problems. The only problem I ever really had was back within the first month after installing: the CD-RW seemed to want to crash every time I tried to burn anything. A simple upgrade fixed that problem...

Although I am not very fluent when it comes to kernel development [read: don't have a clue] I really don't care what they decide to do. So far it works fine for my needs. :-) Have no intentions of ever going back to the big W. Ever.

Within the next few weeks here, I will be converting this system to Gentoo... we'll see what problems may be around the corner. :-)

Been saying this for years. (0)

Anonymous Coward | more than 8 years ago | (#15276463)

I guess Andrew has come to the same conclusion.
I'm happy about that!

Here's a thought:
Old hardware is going to become more important as Intel/AMD work to shut out non-Windows OSes.

OT: unencumbered hardware (2, Interesting)

Anonymous Coward | more than 8 years ago | (#15276487)

There may come a time very soon where a project will form to develop an unencumbered, DRM free, computer system. Perhaps using an embedded CPU or even discrete components, if necessary. Doesn't seem so outrageous these days, does it?

Re:OT: unencumbered hardware (0)

Anonymous Coward | more than 8 years ago | (#15276501)

There is already an open-source hardware project, and
I believe Sun just made the Sparc open somehow.
But if companies start using encrypted interconnects
and requiring hardware keys to do things, this will
require that freedom-minded people create their own
chips... and that's expensive.

Re:OT: unencumbered hardware (0)

Anonymous Coward | more than 8 years ago | (#15276558)

like I said, discrete components, off the shelf TTL chips, or perhaps a parallel architecture using older eight or sixteen bit chips. It would mean the CPU would be a plug-in board rather than a simple chip, similar to old style backplane hardware like the old S-100 bus or the DEC HEX/QUAD bus. Such a computer might be a few orders of magnitude slower than a typical off the shelf PC, but at least it would be unencumbered. And it would bring hobbyist hardware hacking back to the forefront.

Duh Factor (5, Insightful)

Spazmania (174582) | more than 8 years ago | (#15276469)

One problem is that few developers are motivated to work on bugs

Yeah, this is one for the "no shit, Sherlock" column. What did you expect to happen when you eliminated the stable/unstable cycle? At a minimum, the individual parts of the kernel would achieve stability at different times, so that the kernel as a whole was never stable.

This frustrates me immensely at work. I hung on to 2.4 as long as I could. Hardware compatibility pushed me to 2.6 and it just isn't as reliable.

Re:Duh Factor (0)

Anonymous Coward | more than 8 years ago | (#15276588)

I'm still running 2.4 with Sarge on our servers for the same reason. Desktops get Scientific Linux 3.0.5 (RHEL 3 rebuild). I'm still leery of 2.6 even though there's pent-up demand among the user community for a more recent desktop linux release.

Rewrite it as a microkernel!! (5, Interesting)

borgheron (172546) | more than 8 years ago | (#15276470)

This may look like flamebait, but I'm actually serious. Microkernels are more reliable because drivers run in userspace. If a driver crashes, it can't take down the whole system. Also, given that some microkernels are only about 3,500-6,000 lines of code (as opposed to Linux's million or so), it's relatively easy to make certain that the code is bug-free (given that the average is 16 bugs per 1,000 lines of code, according to some recent studies).

So, if the kernel needs an overhaul, then why not do it right this time? Now, some may say that microkernels have a performance hit, but today's machines are certainly fast enough to render any performance hit negligible.

GJC

Re:Rewrite it as a microkernel!! (1)

Loconut1389 (455297) | more than 8 years ago | (#15276491)

Let's say for a second it's only a 1% hit: 1% of 200 MHz = 2 MHz wasted; 1% of 2.4 GHz = 24 MHz wasted.
More realistically, let's say 10%: 20 MHz and 240 MHz. Do you really want to waste 240 MHz times the number of instructions per cycle?

There are arguments in both directions, but there's no reason they can't make a non-microkernel stable; it just takes more time. If you can save the 240 MHz, why not?

Re:Rewrite it as a microkernel!! (1)

Blue Booger (223698) | more than 8 years ago | (#15276513)

How many cycles do you waste downloading drivers/modules that you will never use? The Linux kernel is getting mighty big, and it is losing a lot of the features that made it desirable in the first place. For instance, the ability to run on older, cheaper hardware. I have friends who run Linux because they don't have the money to buy Windows. They certainly don't have the money for broadband access. The kernel download is now impossible for them.

Re:Rewrite it as a microkernel!! (0)

Anonymous Coward | more than 8 years ago | (#15276580)

Try downloading an older version of the kernel, like 2.4.whatever, or a distro; they still work, and most software will still run on them.

Re:Rewrite it as a microkernel!! (2, Informative)

kv9 (697238) | more than 8 years ago | (#15276748)

I have friends that run Linux because they don't have the money to buy Windows. They certainly don't have the money for broadband access. The kernel download is now impossible for them.

i'm sorry if i'm being blunt, but get the fuck outta here. the latest 2.6 kernel (as of this writing) is ~39MB. that means you can get it in ~7h at 14k and ~2h at 56k. you can surely leave a wget running while you sleep or while you're away from home.

when i was on dialup my monthly traffic was in the multiple gigs, so you'll really have to come up with a better excuse.
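
A quick sanity check of those download-time figures, assuming the nominal modem line rate with no protocol overhead (real throughput is lower, so these are best-case floors):

```python
# Sanity-checking the download-time claim for a ~39 MB tarball.
# Best-case floors only: real modem throughput (protocol overhead,
# line noise) is lower than the nominal bit rate.
def hours_to_download(size_mb, rate_bits_per_sec):
    bits = size_mb * 1024 * 1024 * 8
    return bits / rate_bits_per_sec / 3600

print(round(hours_to_download(39, 14_400), 1))  # 6.3 -- roughly the ~7h quoted
print(round(hours_to_download(39, 56_000), 1))  # 1.6 -- roughly the ~2h quoted
```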

Re:Rewrite it as a microkernel!! (1)

TummyX (84871) | more than 8 years ago | (#15276679)

Let's say for a second its only a 1% hit: 1% of 200 Mhz = 2 Mhz wasted, 1% of 2.4 Ghz = 24.58 Mhz wasted.

In reality the overhead of microkernels isn't proportional like that. You may need, say, 200 instructions versus 20 to invoke some system call, but that 200 will, for the most part, remain constant. Most of the "meat" happens strictly in user space or strictly in kernel space (in the method you're calling), and as processors get faster, the overhead of microkernels becomes increasingly negligible.
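
That amortization point can be illustrated numerically. The fixed ~180-instruction cost below is taken from the parent's hypothetical 200-vs-20 example; all numbers are illustrative, not measurements:

```python
# Illustrating amortization: a fixed per-call cost (the extra ~180
# instructions from the parent's 200-vs-20 example) shrinks as a fraction
# of the total as the real work done per call grows.
def overhead_fraction(call_overhead_instrs, work_instrs):
    """Share of total instructions spent on the fixed call overhead."""
    return call_overhead_instrs / (call_overhead_instrs + work_instrs)

for work in (1_000, 100_000, 10_000_000):
    print(work, overhead_fraction(180, work))
```

For a trivial call the overhead is ~15% of the total, but for a call that does real work it drops below a hundredth of a percent, which is the sense in which the cost "remains constant" while everything else scales.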

Re:Rewrite it as a microkernel!! (2, Interesting)

Watson Ladd (955755) | more than 8 years ago | (#15276497)

Or use Darwin/x86. It's a microkernel that's already ready for prime time.

Re:Rewrite it as a microkernel!! (1)

udippel (562132) | more than 8 years ago | (#15276556)

Hmm. It may look like I want to start a flame war, but I fail to have experienced the 'prime time' you talk about, and I have tried and tried.

Could you be a bit more specific about what you mean? Maybe we have different ideas about the meaning of the expression?

Major Audit Time? (1)

tarballedtux (770160) | more than 8 years ago | (#15276510)

Maybe this is the time for a group of kernel developers to go over the code looking carefully for those long-standing bugs and, more importantly, for security problems. Do what OpenBSD did back during the 2.0-2.1 releases. Even if it takes much longer (6 months to 1 year) to make a new release, wouldn't that be for the better? If that time period is too long, then at least start the major audits and put the fixes in on a 4-week release cycle. Granted, a complete audit will probably take one or two years.

The security and stability of the Linux kernel could be greatly increased. Isn't that what we all want?

Re:Major Audit Time? (1)

Homology (639438) | more than 8 years ago | (#15276564)

Maybe this is the time for a group of kernel developers to go over the code looking carefully for those long-standing bugs and, more importantly, for security problems. Do what OpenBSD did back during the 2.0-2.1 releases. Even if it takes much longer (6 months to 1 year) to make a new release, wouldn't that be for the better?

Note that for OpenBSD this is part of their development model: they are still doing it. However, it's much less effort now than when they did it the first time.

Rewrites are often an awful idea. (0)

Anonymous Coward | more than 8 years ago | (#15276554)

Although the kernel in its current state may have some problems, much of the code also has had years of testing and works very well. That goes for most large pieces of software. While progress should not be hampered by an old codebase, it is also often a very imprudent idea to throw out years of well-tested code.

Of course, that's ignoring all the difficult work and the time it'd take for such a rewrite.

Speaking of rewrites, would you consider rewriting Gorm? Actually, a more comparable (size-wise) rewrite would be Gorm, all of GNUstep, and X11. Frankly, I'd have to hope that you wouldn't support throwing away a lot of good code that works, just to fix a few minor issues.

Re:Rewrite it as a microkernel!! (1)

baadger (764884) | more than 8 years ago | (#15276586)

The crux of this article/interview is Linux 2.6 needs improving with old and/or less common hardware rigs. Your proposed extended solution is to radically refactor the entire kernel because, hey, modern hardware that the majority of us have can cope with it.

How the smeg did you pull that off, get modded insightful, and not get modded somewhat offtopic?

In any case, moving to a new fundamental architecture is like turning grape juice into wine. There are some good wines and some bad wines, and while it's certainly possible to make a good wine from good juice, the problem is that no matter how you produce your wine you're going to exclude those under legal drinking age; or rather, in this case, introduce a lot of new barriers for those not in the mainstream.

Re:Rewrite it as a microkernel!! (1)

shish (588640) | more than 8 years ago | (#15276669)

How about just moving the most userspace-friendly bits to userspace? FUSE has allowed the development of a ton of kernel-level features (e.g., reading and writing Wikipedia entries with any program you like, by editing .txt files in ~/wiki/), while leaving the kernel itself as stable as ever, and without needing a complete rewrite.

Re:Rewrite it as a microkernel!! (2, Insightful)

Tolkien (664315) | more than 8 years ago | (#15276716)

Ahhh, but a good developer doesn't use increased CPU speed as an excuse to write slow code. Ideally, the faster the CPU, the faster the code runs; NOT "the faster the CPU, the 'less slow' the code runs". What counts as "CPU-hungry" code depends on the task at hand.

Drawing the line (3, Insightful)

Digital Dharma (673185) | more than 8 years ago | (#15276472)

I think at some point you need to draw the line regarding support for older hardware and peripherals. Excessive backwards compatibility has retarded the advancement of the industry, IMHO.

modg Eup (-1, Offtopic)

Anonymous Coward | more than 8 years ago | (#15276475)

an3 the Bazaar And as BSD sinks

Fix it yourself (0)

Anonymous Coward | more than 8 years ago | (#15276477)

In the words of many an open source advocate, fix it yourself. You don't need to rely on anybody else. You have the source yourself. Quit whining.

Re:Fix it yourself (0)

Anonymous Coward | more than 8 years ago | (#15276631)

What an incredibly odd comment. Who is it directed at? Linux kernel developer Andrew Morton? You know, the guy who is always fixing it himself?

Is this connected to the removal of... (1)

djsmiley (752149) | more than 8 years ago | (#15276480)

Anyone who's been playing with the newest kernels might of noticed that the option to compile drivers not expected to compile cleanly has been removed from the kernel.

This is from memory, hehe, I think that's the right option.

A friend and I noticed this and at first thought it was a bug in the kernel (2.6.16?), but apparently Linus has "hidden" it so that only the devs can use it, as he believes those drivers need cleaning up more.

I'm wondering if the two are connected...

Re:Is this connected to the removal of... (0)

Anonymous Coward | more than 8 years ago | (#15276667)

Just an FYI, it's "might've" from "might have", not "might of."

Let's not forget... (1, Flamebait)

Spy der Mann (805235) | more than 8 years ago | (#15276482)

that the reason IE failed miserably is that the authors eventually STOPPED fixing bugs. We don't want Linux to take the same road, do we?

Re:Let's not forget... (1)

Rosco P. Coltrane (209368) | more than 8 years ago | (#15276499)

that the reason IE failed miserably is that the authors eventually STOPPED fixing bugs.

No. Firstly, IE hasn't failed (yet), at least not commercially. Secondly, it sucks because ever since Microsoft licensed/stole Mosaic, they have kept throwing features at it with no concern at all for security.

The "amount" of bugs? (1, Insightful)

bcmm (768152) | more than 8 years ago | (#15276493)

The number of bugs. "Bug" is a countable noun, like "chair", not an uncountable noun, like "salt".

Re:The "amount" of bugs? (0)

Anonymous Coward | more than 8 years ago | (#15276577)

Even shit is quantized.

Re:The "amount" of bugs? (0)

Anonymous Coward | more than 8 years ago | (#15276604)

Wow.

Thanks so much for clearing that up - I really had NO IDEA what they meant.

Re:The "amount" of bugs? (0)

Anonymous Coward | more than 8 years ago | (#15276706)

Why not log in to say this, fag?

Re:The "amount" of bugs? (0)

Anonymous Coward | more than 8 years ago | (#15276725)

Hello pot, meet kettle.

Fantastic and Overdue (5, Insightful)

udippel (562132) | more than 8 years ago | (#15276500)

So, there are two relevant aspects to it. Probably more.
The 2.6 kernel has been plagued by bad bugs. On the other hand, one way or another you need it (compared to 2.4) for a multimedia-enabled desktop on more modern hardware. From that point of view, the proposal is fantastic; otherwise we'll watch the quality of the kernel of our beloved OS go down.
2.6 has never really seen a phase of consolidation. Therefore, the proposal is almost overdue.

It would be badly short-sighted to think of quick ROI (as the IT companies usually do), since the troubles only multiply with further advances.

Yes, please, Andrew, get stability back into 2.6. Though I have no say in this, I thrust up both hands in favour!

Maybe some thumb-screws are needed for the contributors: as long as the bug count stands above a certain threshold, no enhancements are accepted.

There is also a political aspect to it: we have always argued for the re-use of legacy hardware. This becomes even more important with Vista on the horizon. The kernel must not lose its 'caring' attitude. It must be trustworthy, and trusted by the general public to care about more than greedy hardware manufacturers and their sick quest to replace functional hardware with the most recent hardware.

Re:Fantastic and Overdue (0)

Anonymous Coward | more than 8 years ago | (#15276572)

Discipline and focus are hard to achieve with a group of volunteers, but ironically it is probably even harder when corporate interests become involved. There is a perceived (and IMHO very real) quality gap developing between Linux and the BSDs (esp. OpenBSD, with their fanatical commitment to fixing bugs), and regular, focused bug-crushing would go a long way toward closing that gap (at least in the kernel). If the kernel maintainers have to do something like freeze the tree for new features, then so be it; I still think it is a fabulous idea.

Doh (1)

sammyo (166904) | more than 8 years ago | (#15276506)

"Developers motivated by self interest"...? Isn't it amazing what radical subversive thought can slip through the open source filter (man -k filosofy).

What is the need for backwards compatibility anyway? The DOSification of Linux?

Anyway, why not have a rarely updated, minimal branch for ancient hardware, like anything over 3 years old?

Re:Doh (1)

benoitg (302050) | more than 8 years ago | (#15276605)

Why not? Because for most of the planet, 3-year-old hardware is pretty much top of the line, and in the developed world, outside corporate environments, 3-year-old hardware is the typical installed base.

Re:Doh (0)

Anonymous Coward | more than 8 years ago | (#15276715)

sammyo, you just got served

so here we are ..... (4, Insightful)

nblender (741424) | more than 8 years ago | (#15276519)

Linux got off the ground and started incorporating everything anyone contributed... grabbing features and drivers like there was no tomorrow. NetBSD was rejecting stuff because it wasn't written right. So it took ages for NetBSD to get audio until someone did it right; while everyone else went with OSS. Over and Over this happened. NetBSD was criticized for being useless because it didn't support all the stuff Linux/FreeBSD did.

Nice house. Did you build it yourself?

Re:so here we are ..... (0)

Anonymous Coward | more than 8 years ago | (#15276673)

"So it took ages for NetBSD to get audio until someone did it right"

Where I come from "right" doesn't mean "crams lots of userspace functionality into the kernel running at ring 0" and nor does it mean "several kernel DOS problems, and full privilege escalation caused by school boy errors".

Nice house, how much did those cowboy builders charge you for their "professional" job?

Silly Andrew, Open Source Projects don't have bugs (-1)

Anonymous Coward | more than 8 years ago | (#15276526)

they don't have bugs because The Community fixes them!

"With Enough Eyes All Bugs Become Shallow"
- Linus Torvalds

And there are many many Linux users, who love Linux because it has no bugs and it is Free Open Source Software.

Too little / too late for me. Adios! (2, Insightful)

Anonymous Coward | more than 8 years ago | (#15276541)

In a way, Linux as a whole (the kernel) is now suffering from the same problems Debian stable once did, at least from my perspective. Do you guys remember the previous Debian stable? It remained stable for such a long time that eventually you simply needed websites like Backports [backports.org] to be able to run some current software, since everything included with Debian was ancient. Naturally you could run Unstable, but it wasn't exactly the best approach for servers. I eventually ended up running Testing and keeping a close eye open for bug reports, exploits, etc., while not updating the box every time something new came out.

And that is what I see happening here as well. The last really stable kernel is IMO 2.4.32. There are no new features being added, only bugs being fixed; which is IMO exactly what is needed for serious usage, since every beginner programmer knows that when you add new features to your software you also increase the risk of more bugs popping up. These could be bugs resulting from the addition of new code and the way it cooperates with the existing code, or simply bugs which only manifest themselves in the new routines. Unfortunately the kernel developers don't see this, or they don't care, resulting in a rather stable 2.4.32 kernel which unfortunately lacks some hardware support and certain features when compared with the rather unstable 2.6.x branch.

Personally I'm worried about the future. When looking at the 2.6.x kernel I don't like what I see. When looking at the current Debian Sid and the rather rough way they implemented the new X environment, I also can't help wondering if there aren't more bugs than "usual" popping up. X used to live in /usr/X11R6, so why couldn't they add /usr/X11R7?

Anyway, this is all moot now since I lost faith in Linux altogether for server usage quite some time ago. I think Linux is suffering from its own success, and it may well prove fatal in the end, although I really hope it doesn't. I still enjoy running Linux on my workstation and I'm not planning to stop. But when it comes to serious work, like my server, my trust is now put in Sun Solaris 10. While Solaris is also moving into the open source world, Sun still uses common sense and splits their software into 3 parts: Open Source [opensolaris.org], Unstable [sun.com] and Stable [sun.com].

Face it (-1, Troll)

Anonymous Coward | more than 8 years ago | (#15276543)

Linux sucks.

That is all.

Unstable kernel API (1, Interesting)

Anonymous Coward | more than 8 years ago | (#15276544)

10-year-old drivers written for Windows NT 3.1 still run on Windows Server 2003 today. The Windows kernel API is stable, and Microsoft puts a lot of effort into maintaining backwards compatibility. Windows has many faults; this is something it does right.

10-day-old drivers written for Linux may need tweaks as soon as the next kernel is released. The Linux kernel API is a moving target. The freedom to improve the kernel API with each release has benefits, but it also has costs. Nobody should act surprised that old Linux drivers without active maintainers are failing on new kernels.

It takes a change of mindset to get it done (5, Insightful)

Opportunist (166417) | more than 8 years ago | (#15276548)

Of course it's more rewarding to create a new feature. First of all, no coder enjoys working on foreign code. It just doesn't "look right", doesn't "feel right", simply because everyone has his own style.

And don't forget bragging rights. Hey, I invented some feature. Sure, some guy debugged it, but I get to slap the label to it. I might even name it after me (Hello Mr. Reiser, if you should read this...). The guy who debugs it gets ... zip.

This has to change first if we want people to put in time to hack through other people's code. Appreciate the work done to get it fixed. After all, appreciation, bragging rights and "making a name" is everything you get from writing free software.

Few people do it out of generosity or because it "feels good". They want to be known. Linus might not have gotten much out of writing that Kernel, but he sure as hell has a killer paying job now. I doubt the people who wrote the original implementation of iptables/ipchains are worse off. But the debuggers? Lot of work, no name.

Pull the debuggers in front of the curtain, and you'll see people debug. If we only appreciate the people who wrote a feature in the first place, even if that feature doesn't work 100%, we won't see people debug.

Re:It takes a change of mindset to get it done (1)

Homology (639438) | more than 8 years ago | (#15276582)

Of course it's more rewarding to create a new feature. First of all, no coder enjoys working on foreign code. It just doesn't "look right", doesn't "feel right", simply because everyone has his own style.

Of course, every developer has their own way of doing things, but an appropriate coding standard helps make the code readable.

Few people do it out of generosity or because it "feels good".

Oh boy, are you wrong; but then, these must be alien concepts to you.

Re:It takes a change of mindset to get it done (1)

systemofadown (885733) | more than 8 years ago | (#15276674)

Actually, Mr. Reiser may well see this; if I recall, I've seen him post here before.

Then I guess... (0)

Anonymous Coward | more than 8 years ago | (#15276587)

We should call the guys who write "features" Buggers. That should earn some respect for the Debuggers. I mean, who wants to be called a Bugger?

Re:It takes a change of mindset to get it done (1)

no_good_nicknames_le (876557) | more than 8 years ago | (#15276608)

What about something similar to the competition Gnome had about a year ago? There was a bounty on features. I'm thinking the same thing, but for bugs in the kernel. Two issues with that:

1) Who would sponsor it
2) Providing a big list of bugs might serve as ammunition for Microsoft et al.

Just a thought. I love Linux and would like to see it continue to be great.

BFOD and Bragging Rights (5, Insightful)

martyb (196687) | more than 8 years ago | (#15276698)

Pull the debuggers in front of the curtain, and you'll see people debug. If we only appreciate the people who wrote a feature in the first place, even if that feature doesn't work 100%, we won't see people debug.

Hear, hear! SETI@home had a gazillion(tm) people contributing cycles to the effort (many times in teams) to see who could place highest on the list of contributors.

How about a BFoD - Best Fix Of the Day? Each day, post the name of the submitter and some details about the item debugged and fixed:

  1. Name Recognition: Not just to see your name in "lights", but also to gain something you could add to your resume.
  2. New Code (preference to bug fixers): Make a policy that you will give top priority to bug fixes... if you attach your new feature to a bug fix, it will get preferential treatment. Those without a bug fix fall to the bottom of the queue.
  3. Share / Educate: Share debugging techniques and tools. Make it easier to fix bugs by sharing best practices with the community.
  4. Scratch an Itch: It may not be fun, but if you develop new code, you also get to spend time debugging... learning from the preceding item will speed the development process and you'll be able to complete your Next New Thing(tm) even faster and better!
  5. Competition: Have contests for the Best Fix of the Month (BFoM) and Best Fix Of the Year (BFoY), to be chosen from the winners of the BFoDs.

This could be further improved by posting a Bug Of the Day (BoD) where there is a daily bug that is to be fixed. The first fixer gets recognized as well as anyone who provides an especially elegant solution. Award bonus points for fixing related bugs in the area so as to promote more complete fixing in that area.

Post these prominently for all to see and I'd be willing to bet that there would be a groundswell of support.
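
For what it's worth, the tally behind such a leaderboard could start out as simple as this toy sketch; the names, award kinds, and point values are all hypothetical:

```python
# A toy sketch of the proposed BFoD/BFoM tally. The names, award kinds,
# and point values below are all hypothetical.
from collections import Counter

POINTS = {"BFoD": 1, "related_bug_bonus": 1, "BFoM": 5}

scores = Counter()

def award(fixer, kind):
    """Credit a fixer with the points for one award."""
    scores[fixer] += POINTS[kind]

award("alice", "BFoD")               # daily best fix
award("alice", "related_bug_bonus")  # fixed a related bug in the same area
award("bob", "BFoD")
award("alice", "BFoM")               # monthly award, chosen from BFoD winners

print(scores.most_common())  # [('alice', 7), ('bob', 1)]
```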

This is just off the top of my head - please post any suggestions for enhancements or (gasp) any problems you see in it!

Blaming corporate developers is a dodge (5, Insightful)

Shivetya (243324) | more than 8 years ago | (#15276563)

The painful truth is that very few developers, in open source or otherwise, like fixing old code or old bugs. This is especially true if the bug fix isn't going to be noticed by a great number of people. Face it, most of us like to write new code, or to improve something that isn't working the way we want even if it is technically working right.

This is what separates professional developers from the rest. We work on it regardless of how much it benefits us. We might gripe a bit but in the end we do what is asked. Sure that backend has flaws and is going to be replaced down the road but it does not excuse us from making it work now.

When you look at some of the bugs listed in even current applications, you start to see the age some have accrued. Some are rightly passed over as one-in-a-million occurrences, but too many are skipped because they just don't have any allure. Note, I am not singling out people who work on Open Source; I am pointing out that the article fails to touch an area that exists but most don't want to acknowledge.

Re:Blaming corporate developers is a dodge (1)

iggymanz (596061) | more than 8 years ago | (#15276579)

oh, let's talk about closed source companies who also don't fix bugs because it's not what's hot for the market. I'd say the problem is even worse in that realm.

Yes, fix the bugs, BUT ... (5, Insightful)

njdj (458173) | more than 8 years ago | (#15276567)

entire release cycle to fixing long standing bugs

Yes, it's a good idea.

But don't waste time on bugs that only affect legacy hardware.

It would also be a good idea to spend some effort consolidating, correcting, and updating the various lists of "hardware supported by Linux". There are lots of such lists on the web, not to mention the distro-specific compatible-hardware lists maintained for Redhat [redhat.com], Mandriva [mandriva.com], Suse [hardwaredb.suse.de], and others.

We need one correct, maintained list, not dozens of nearly-correct, usually out-of-date lists. And it seems to me that the list should depend only on the kernel version, not on the distro.

Re:Yes, fix the bugs, BUT ... (1)

concept10 (877921) | more than 8 years ago | (#15276651)

I thought about how to approach this task last year, but I don't have enough knowledge to complete the project. Ubuntu has a device database that collects trivial info after your install and reports it back to a web-based database.

There should be an application for all distros that reports back to one central DB. Not only would we be able to collect tons of info, we could possibly find out the installed user base of GNU/Linux.

2.6 'stable' no longer stable (4, Insightful)

prestwich (123353) | more than 8 years ago | (#15276596)

My experience is that stability is dropping, even on modern hardware. You can no longer take the latest '2.6' stable kernel and expect it to keep your server running stably.

Now, you can take a Redhat or SuSE packaged kernel and find that those are pretty stable. But there is a problem: if you report a bug in a Redhat/SuSE kernel on the linux-kernel mailing list, you get a 'that's a Redhat/SuSE problem, speak to them'.

As the 2.6.x stable tree becomes less stable, fewer people use it on production servers and instead use packaged kernels. As fewer people use it, it gets tested less, and fewer bugs are actually reported for it.

It is also not just a case of old hardware: in the last few kernels I've had leaks that make a simple firewall die repeatedly after a few months, I've got a machine with a slow kernel RAM leak that makes a simple DHCP server fall over every few months, and I've had a 2.6.1x kernel that couldn't run an NFS server for 24 hours without falling over.

It ain't nice, but these are my experiences.

Dave

But it runs Faster!! (5, Interesting)

giorgosts (920092) | more than 8 years ago | (#15276598)

I follow Ubuntu with the latest kernel updates, and I tell you, with every update performance increases. When I booted Windows I used to feel the difference, but not anymore. I think the quality of the kernel is fine. It's other things that need to improve in quality, e.g. the rest of the free apps, especially the packagers who have to make the thing just work. What will I do with stability if nothing works? Am I going to just look at the computer while it's all stable, doing nothing?

Outstanding bugs does not always mean instability (4, Interesting)

vijayiyer (728590) | more than 8 years ago | (#15276600)

Some of the above posts say "I don't notice any problems". I'm guessing some of the bugs nobody has fixed are somewhat obscure. There is a well-known bug when Linux mounts large XFS file systems via NFS that bothered me regularly: large directories could not be searched, deleted, etc. Now I have a Mac that works with that flawlessly. These are the types of bugs, annoying but non-fatal, that few people want to fix.

I thought 2.6.16.x was going to be "stable" series (2)

Zarhan (415465) | more than 8 years ago | (#15276624)

At least so I thought; i.e., once 2.6.17 is out, there will be a separate branch based on 2.6.16 (2.6.16.y, continuing past the current series) that would constitute a "stable" branch, where no new features would be added and the focus would be on fixing bugs and stability.

So, is the problem already solved?

about time (1)

cg0def (845906) | more than 8 years ago | (#15276647)

Well, they should finally fix the outstanding bugs. Wasn't the whole point that OSS offers a faster response time for bug fixes because so many people are looking at the code? I think the community leaders should step in and let the big corporations know that while the door is open for anyone to contribute, there are some standards and principles. Otherwise you might see Linux hijacked in a couple of years, and that would be a damn shame. Oh, and Linux is just the kernel and NOT some GUI-based software, so please stop with all the BS. I really thought that this issue was cleared up by now, but I guess there are always dumb people.

One thing that I still can't believe hasn't been fixed is the fact that a poorly written application can lock up X.org, which in turn can lock up the whole system. And unfortunately this goes for any process. The kernel needs some kind of failsafe that works a little better than what is currently in place for situations like this one. Relying on the software developer's skills is just asking for trouble. The problem, though, is that I haven't seen anything being done in that particular direction in a very long time. Maybe the Linux developers should look at what's happening in some other *nix communities and emulate them.

Overhaulin' (1, Funny)

FrankDrebin (238464) | more than 8 years ago | (#15276680)

Day 1 - Andrew's kernel is "stolen" under strange circumstances.
Day 2 - Chip Foose draws up a design for the kernel, the coding commences.
Day 3 - Andrew receives a call from a "kernel repo man". Apparently his kernel was taken by mistake.
Day 4 - Coding like mad. Fake repo man stalls Andrew. The crew is running out of Doritos.
Day 5 - The pressure mounts. Token pretty face "helps out" by typing "make" in a staged moment of tension.
Day 6 - We're never gonna get the kernel done in time!!!! Oops and panic. Fake repo man stalls again.
Day 7 - The reveal. A shiny new kernel is handed back to Andrew. You've been OVERHAULED!

Great way to give the other side talking points (1)

MikeRT (947531) | more than 8 years ago | (#15276689)

A big part of what made OSS get off the ground outside of its core area was the belief that it'd lead to better bug fixes, delivered faster because of all of the eyeballs looking at the code. Well, that's a little hard to argue if 90% of those eyeballs are dedicated to looking at new things, not fixing outstanding issues.

Whew! (5, Funny)

cciRRus (889392) | more than 8 years ago | (#15276721)

So, there are lots of bugs in Linux! Good thing I'm using Windows.