Microkernel: The Comeback?

Hemos posted more than 8 years ago | from the time-to-hash-this-out-all-again dept.

bariswheel writes "In a paper co-authored by the Microkernel Maestro Andrew Tanenbaum, the fragility of modern kernels is addressed: "Current operating systems have two characteristics that make them unreliable and insecure: They are huge and they have very poor fault isolation. The Linux kernel has more than 2.5 million lines of code; the Windows XP kernel is more than twice as large." Consider this analogy: "Modern ships have multiple compartments within the hull; if one compartment springs a leak, only that one is flooded, not the entire hull. Current operating systems are like ships before compartmentalization was invented: Every leak can sink the ship." Clearly one argument here is that security and reliability have surpassed performance in terms of priorities. Let's see if our good friend Linus chimes in here; hopefully we'll have ourselves another friendly conversation."

Feh. (-1)

Fordiman (689627) | more than 8 years ago | (#15284895)

Tanenbaum has been spouting that business for the last twenty years. It holds no more true in practice today than it did when he started.

Re:Feh. (1, Interesting)

Anonymous Coward | more than 8 years ago | (#15284959)

YEAH!

Why doesn't Tanenbaum write his OWN O/S following his examples, THEN we can talk! Minix DOESN'T COUNT! Frankly, Linux has been amazingly stable through most of its life, as have other UNIX variants/versions. I didn't see that with Minix.

The industry has better and more important things to worry about.

Re:Feh. (1, Informative)

Anonymous Coward | more than 8 years ago | (#15285026)

He has. It's called Amoeba. I haven't tried it myself though.

Re:Feh. (2, Insightful)

panthro (552708) | more than 8 years ago | (#15285189)

The industry has better and more important things to worry about.

Like what? Reliability and security ought to be paramount. The IT industry (relating to multipurpose computers, anyway) is currently a joke in that area - compared to virtually any other industry.

Re:Feh. (5, Interesting)

AKAImBatman (238306) | more than 8 years ago | (#15284962)

It holds no more true in practice today than it did when he started.

WRONG.

Tanenbaum's research is correct, in that a microkernel architecture is more secure, easier to maintain, and just all-around better. The problem is that early microkernel implementations killed the concept back when most of the OSes we use today were being developed.

What was the key problem with these kernels? Performance. Mach (one of the more popular research OSes) incurred a huge cost in message passing, as every message was checked for validity as it was sent. This wouldn't have been *so* bad, but it ended up worse because of a variety of flaws in the Mach implementation. There was some attempt to address this in Mach 3, but the project eventually tapered off. Oddly, NeXT (and later Apple) picked up the Mach kernel and used it in their products. Performance was fixed partly through a series of hacks, and partly through raw horsepower.
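To make that per-message tax concrete, here's a toy user-space sketch of the kind of validation a Mach-style kernel performed on every single send. This is illustrative only -- not actual Mach code, and all the names are invented:

#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define MAX_PAYLOAD 256
#define NUM_PORTS   16

struct msg {
    uint32_t dest_port;             /* capability being invoked */
    uint32_t len;                   /* payload length in bytes */
    uint32_t checksum;              /* integrity check */
    uint8_t  payload[MAX_PAYLOAD];
};

/* pretend ports 0-3 have valid send rights */
static int port_valid[NUM_PORTS] = { 1, 1, 1, 1 };

static uint32_t sum(const uint8_t *p, uint32_t n) {
    uint32_t s = 0;
    while (n--) s += *p++;
    return s;
}

/* Every send pays for every one of these checks -- that, plus the
 * copy and the context switch, is the overhead described above. */
int msg_send(struct msg *m) {
    if (m->dest_port >= NUM_PORTS || !port_valid[m->dest_port])
        return -1;                  /* no send right */
    if (m->len > MAX_PAYLOAD)
        return -2;                  /* malformed size */
    if (sum(m->payload, m->len) != m->checksum)
        return -3;                  /* corrupt payload */
    /* ...then copy into the receiver's queue and switch contexts */
    return 0;
}

int main(void) {
    struct msg m = { .dest_port = 2, .len = 5 };
    memcpy(m.payload, "hello", 5);
    m.checksum = sum(m.payload, 5);
    printf("send: %d\n", msg_send(&m));  /* prints "send: 0" */
    return 0;
}

Each check is cheap on its own; paying all of them on every IPC is what buried the first-generation microkernels.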

Beyond that, you might want to read the rest of TFA. Tanenbaum goes over several other concepts that are hot at the moment, including virtual machines, virtualization, and driver protection.

Re:Feh. (3, Informative)

Anonymous Coward | more than 8 years ago | (#15285181)

Performance was fixed partly through a series of hacks, and partly through raw horsepower.

O RLY [anandtech.com]

Re:Feh. (0)

Anonymous Coward | more than 8 years ago | (#15284980)

Tanenbaum has been spouting that business for the last twenty years. It holds just as much truth in practice today as it did when he started.

Fixed it for you.

Not entirely accurate (3, Insightful)

WindBourne (631190) | more than 8 years ago | (#15285012)

Back in the '80s and '90s, the argument for monolithic kernels was performance. Considering how limited CPUs were, it made sense. If Linux had been built on a microkernel design, it would have been slower than MS; IOW, it would never have gotten off the ground.

The second approach (paravirtualization) could actually be used WRT Linux as a means of not only separating user mode from the device drivers; it would also allow for some nice networking capabilities. After all, the average system does not really need all the capabilities it has. If a simple server (or several) can be set up for the house, and then multiple driverless desktops are set up behind it, it simplifies life.

Eh hem. (3, Insightful)

suso (153703) | more than 8 years ago | (#15284896)

Current operating systems are like ships before compartmentalization was invented

Isn't SELinux kinda like compartmentalization of the OS?

Re:Eh hem. (3, Informative)

Anonymous Coward | more than 8 years ago | (#15284927)

SELinux provides security models to compartmentalize your programs and applications and such. This is a completely different beast than compartmentalizing the individual parts of your kernel. Modules were kind of a primitive step in the direction of a microkernel, but still a long way off from a technical standpoint.

Re:Eh hem. (1)

SatanicPuppy (611928) | more than 8 years ago | (#15285068)

Not really. SELinux is just a different type of security model, it doesn't have anything to do with the kernel architecture.

Re:Eh hem. (4, Funny)

Ohreally_factor (593551) | more than 8 years ago | (#15285155)

Ship analogies are confusing and a tool of the devil.

Could someone put this into an easy-to-understand car analogy, like the good Lord intended?

O Tanenbaum... (3, Funny)

ZombieRoboNinja (905329) | more than 8 years ago | (#15284902)

...I got nothing.

Didn't he give us Minix? (1, Funny)

Anonymous Coward | more than 8 years ago | (#15285010)

cat tongue >/dev/null

Amen (1)

Delifisek (190943) | more than 8 years ago | (#15284905)

hopefully we'll have ourselves another friendly conversation.
Amen my brother, amen..

The unsinkable Kernel (5, Funny)

Random Destruction (866027) | more than 8 years ago | (#15284913)

So this microkernel is the unsinkable kernel?
FULL SPEED AHEAD!

Re:The unsinkable Kernel (1)

joe 155 (937621) | more than 8 years ago | (#15285049)

maybe it would be unsinkable if only it would ram the bloody problem head-on instead of trying to turn and cutting right along the hull (or whatever kernels have in this titanic analogy...)

Re:The unsinkable Kernel (0)

Anonymous Coward | more than 8 years ago | (#15285132)

Ice anyone ?

Re:The unsinkable Kernel (1)

alexhs (877055) | more than 8 years ago | (#15285174)

Ice anyone ?

Good idea! Pistachio [l4ka.org] ice cream for me :)

How hard... (3, Interesting)

JonJ (907502) | more than 8 years ago | (#15284915)

Would it be to convert Linux to a microkernel? And Apple is using Mach and BSD in its XNU kernel; are they planning to make it a true microkernel? AFAIK it does some things in kernel space that make it not a microkernel.

Re:How hard... (0)

Anonymous Coward | more than 8 years ago | (#15284966)

Pretty much impossible. The 2.5 million lines of code, the potential for new bugs, and the lengthy process involved would make this more difficult than constructing a whole new operating system. Not to mention compatibility issues with current software...

In software engineering, you will find that once a program has been released, it's pretty much unthinkable to have any large scale paradigm shift in its design, without somehow re-making it.

Re:How hard... (1)

muhgcee (188154) | more than 8 years ago | (#15285007)

I'm no kernel hacker, but I would imagine that to "convert" Linux to a microkernel would entail rewriting Linux...ie, there would be no "Linux" when you were done "converting" it.

Re:How hard... (2, Funny)

cp.tar (871488) | more than 8 years ago | (#15285075)

Well, I hear that GNU/HURD is in the making...

Re:How hard... (1)

muhgcee (188154) | more than 8 years ago | (#15285104)

GNU/Hurd is downloadable and somewhat usable. Of course, not meant for production.

Hurd and one linux kernel question (0)

Anonymous Coward | more than 8 years ago | (#15285170)

What would it take to get that project kick started into wider acceptance and development? Or is it a steaming pile and not worth it, or what? I don't know much of anything about kernel innards at all, but if someone who does could run a quick synopsis over what is right or wrong (politics aside) with the Hurd I would appreciate it.

OK, second question to any maths gurus here. Extrapolate the size of the linux kernel ten years hence, based on original size and size of the major release numbers. Seems like a simple two axis graph could do it, but all we need is a raw number, graphs being rather hard to reproduce on a ./ post. X-million lines today, what will it be in ten years?
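Taking the bait on the extrapolation, with loudly stated assumptions: Linux 0.01 (1991) was roughly 10,000 lines, and the figure in TFA is 2.5 million in 2006. An exponential fit over those 15 years gives

  growth = (2,500,000 / 10,000)^(1/15) ~= 1.44x per year
  size in ten years ~= 2,500,000 x 1.44^10 ~= 100,000,000 lines

A linear fit over the same period (about 166,000 lines/year) gives a far tamer 4 to 4.5 million. The raw number depends entirely on which curve you believe, and that's the whole question.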

A false dichotomy (5, Insightful)

The Conductor (758639) | more than 8 years ago | (#15285081)

I seem to find this microkernel vs. monolithic argument a bit of a false dichotomy. Microkernels are just at one end of a modularity vs. $other_goal trade-off. There are a thousand steps in between. So we see implementations (like the Amiga, for example) that are almost microkernels, at which the purists shout objections (the Amiga permits interrupt handlers that bypass the OS-supplied services, for example). We also see utter kludges (Windows, for example) improve their modularity as backwards compatibility and monopolizing marketing tactics permit (not much, but you have to say things have improved since Win3.1).

When viewed as a Platonic Ideal, a microkernel architecture is a useful way to think about an OS, but most real-world applications will have to make compromises for compatibility, performance, quirky hardware, schedule, marketing glitz, and so on. That's just the way it is.

In other words, I'd rather have a microkernel than a monolithic kernel, but I would rather have a monolithic kernel that does what I need (runs my software, runs on my hardware, runs fast) than a microkernel that sits in a lab. It is more realistic to ask for a kernel that is more microkernel-like, but still does what I need.

Re:How hard... (0)

Anonymous Coward | more than 8 years ago | (#15285085)

Apple did exactly this, in fact, before the NeXT acquisition became a reality. Their Mach-microkernelled Linux was called MkLinux, and it worked.

Metaphors eh? (1)

10Ghz (453478) | more than 8 years ago | (#15284922)

Well, if current kernels are "ships before compartmentalisation", then microkernels are something like car-engines that are broken down to tiny bits. Everything is a separate part, and it works just as well.

I wouldn't give much credit to metaphors. Even when they are said by Andrew "Broken Record" Tanenbaum.

Re:Metaphors eh? (0)

Anonymous Coward | more than 8 years ago | (#15285124)

Well no, microkernels are like ships after compartmentalisation.

Or... (5, Funny)

Mr. Underbridge (666784) | more than 8 years ago | (#15284923)

You could just have a small monolithic kernel, and do as much as possible in userland.

Best of both worlds, no? Wow, I wish someone would make such an operating system...

Re:Or... (2, Interesting)

Zarhan (415465) | more than 8 years ago | (#15284974)

Considering how much stuff has recently been moved to userland in Linux (udev, hotplug, hal, FUSE (filesystems), etc) I think we're heading in that direction. SELinux is also something that could be considered "compartmentalized".
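FUSE is a nice illustration of how little actually has to stay in the kernel. Here's a minimal sketch against the FUSE 2.x API of the era -- the standard hello-world shape, with the file name and message being mine and error handling trimmed (compile with gcc -D_FILE_OFFSET_BITS=64 fs.c -lfuse):

#define FUSE_USE_VERSION 26
#include <fuse.h>
#include <sys/stat.h>
#include <string.h>
#include <errno.h>

static const char *msg = "filesystems in userland!\n";

static int fs_getattr(const char *path, struct stat *st) {
    memset(st, 0, sizeof(*st));
    if (strcmp(path, "/") == 0) {
        st->st_mode = S_IFDIR | 0755; st->st_nlink = 2;
    } else if (strcmp(path, "/hello") == 0) {
        st->st_mode = S_IFREG | 0444; st->st_nlink = 1;
        st->st_size = strlen(msg);
    } else return -ENOENT;
    return 0;
}

static int fs_readdir(const char *path, void *buf, fuse_fill_dir_t fill,
                      off_t off, struct fuse_file_info *fi) {
    (void)off; (void)fi;
    if (strcmp(path, "/") != 0) return -ENOENT;
    fill(buf, ".", NULL, 0);
    fill(buf, "..", NULL, 0);
    fill(buf, "hello", NULL, 0);
    return 0;
}

static int fs_read(const char *path, char *buf, size_t size, off_t off,
                   struct fuse_file_info *fi) {
    (void)fi;
    size_t len = strlen(msg);
    if (strcmp(path, "/hello") != 0) return -ENOENT;
    if ((size_t)off >= len) return 0;
    if (off + size > len) size = len - off;
    memcpy(buf, msg + off, size);
    return size;
}

static struct fuse_operations ops = {
    .getattr = fs_getattr,
    .readdir = fs_readdir,
    .read    = fs_read,
};

int main(int argc, char *argv[]) {
    /* runs as an ordinary process: if it segfaults you lose the
     * mount, not the machine */
    return fuse_main(argc, argv, &ops, NULL);
}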

Re:Or... (1)

otis wildflower (4889) | more than 8 years ago | (#15285074)

Unfortunately, ASCII strips the sarcasm bit..

(I wonder if one of the UTF encodings restores it?)

Re:Or... (0)

Anonymous Coward | more than 8 years ago | (#15285076)

Man, I swear I actually heard the whooshing sound ...

Already Done (1, Funny)

Anonymous Coward | more than 8 years ago | (#15285054)

I think this has already been done in the past. It was called MS-DOS ;-)

We can dream. (1)

Jerk City Troll (661616) | more than 8 years ago | (#15285070)

If only. [gnu.org]

NT4 (2, Interesting)

truthsearch (249536) | more than 8 years ago | (#15284925)

NT4 had a microkernel whose sole purpose was object brokering. What I think we're missing today is a truly compartmentalized microkernel. The NT4 kernel handled all messages between kernel objects, but all it did was pass them along. One object running in kernel space could still bring down the rest. I assume that's still the basis of the XP kernel today.

I haven't looked at GNU/Hurd but I have yet to see a "proper" non-academic microkernel which lets one part fail while the rest remain.

Re:NT4 (4, Interesting)

segedunum (883035) | more than 8 years ago | (#15284964)

NT4 had a microkernel whose sole purpose was object brokering.

Well, I wouldn't call NT's kernel a microkernel in any way for the very reason that it was not truly compartmentalised and the house could still be brought very much down - quadruply so in the case of NT 4. You could call it a hybrid, but that's like saying someone is a little bit pregnant. You either are or you're not.

Re:NT4 (1, Informative)

Anonymous Coward | more than 8 years ago | (#15285119)

NT4 was the release where NT stopped pretending to have a micro-kernel architecture. Microsoft pulled a load of previously user-mode code (e.g. the graphics subsystem) into the kernel to improve performance.

The "cleanest" NT versions were NT 3.1, 3.5 and 3.51.

Re:NT4 (1)

Abcd1234 (188840) | more than 8 years ago | (#15285188)

And, coincidentally, 3.51 specifically was often lauded as the most stable of the NT series of releases...

Trusted Computing (2, Interesting)

SavedLinuXgeeK (769306) | more than 8 years ago | (#15284931)

Isn't this similar, in idea, to the Trusted Computing movement? It doesn't compartmentalize, but it does ensure integrity at all levels, so if one area is compromised, then nothing else is given the ability to run. That might be a better move than the idea of compartmentalizing the kernel, as too many parts are interconnected. If my memory handler fails, or if my disk can't read, I have a serious problem that sinks the ship, no matter what you do.

Oh Dear (1, Informative)

segedunum (883035) | more than 8 years ago | (#15284932)

Not again:

http://people.fluidsignal.com/~luferbu/misc/Linus_vs_Tanenbaum.html [fluidsignal.com]

We've got Andy Tanenbaum coming up with nothing practical in the fifteen or sixteen years he's been promoting microkernels, and then turning around and telling us he was right all along. Meanwhile, the performance of OS X sucks like a Hoover, as we all knew:

http://sekhon.berkeley.edu/macosx/intel.html [berkeley.edu]

I'll just pretend I didn't see this article.

Re:Oh Dear (1)

AKAImBatman (238306) | more than 8 years ago | (#15285040)

Spouting or not, at least he's doing something [minix3.org] . Minix3 (the end point he gets to in the article) is a BSD licensed OS that implements the concepts he discussed. I think it's time to get out the ole' performance metrics and see if much has changed in 20 years.

Re:Oh Dear (1)

Rogerborg (306625) | more than 8 years ago | (#15285168)

I think it's just darling that after berating linux for being "tied fairly closely to the [weird] 80x86", that MINIX 3 is only available for... x86.

Re:Oh Dear (1)

truthsearch (249536) | more than 8 years ago | (#15285112)

the performance of OS X sucks like a Hoover

Um, those benchmarks are only for statistical computations. I don't know about you but most computer users aren't performing statistical analysis. Ask anyone who uses a Mac on a regular basis and they'll tell you it hums along nicely.

Re:Oh Dear (1)

daviddennis (10926) | more than 8 years ago | (#15285115)

I'm typing this on a MacOS X computer and of course the performance is fine for what I'm doing, and has been fine for more CPU-intensive tasks like video editing as well.

Perhaps Apple is trading speed for reliability, just as is being suggested?

And if so, any idea if it's worked - is MacOS X any more or less reliable than Linux? It's hard for me to tell since both my MacOS X and Linux systems have been very reliable.

D

Re:Oh Dear (4, Insightful)

igb (28052) | more than 8 years ago | (#15285118)

It's tempting for people who work in fields where performance matters to assume it matters for everyone, all the time. Do I need my big-iron Oracle boxes to be quick? Yes, I do, which is why they are Solaris boxes with all mod cons. Do I need the GUI on my desk to be pleasant to use? Yes, which is why it's increasingly a Mac that I turn to first. Sure, a G4 Mac Mini isn't quick. But there's a room full of Niagaras, Galaxies and 16-way Sparc machines to do `quick' for me.

All I ask is that the GUI is reasonably slick, the screen design doesn't actively give me hives and the mail application is pleasant. Performance? Within reason, I really couldn't care less.

ian

multicompartment isolation (2, Insightful)

maynard (3337) | more than 8 years ago | (#15284934)

didn't save the Titanic [wikipedia.org]. Every microkernel system I've seen has been terribly slow due to message-passing overhead. While it may make marginal sense from a security standpoint to isolate drivers into userland processes, the upshot is that if a critical driver goes *poof!* the system still goes down.

Solution: better code management and testing.

Re:multicompartment isolation (4, Insightful)

LurkerXXX (667952) | more than 8 years ago | (#15284994)

BeOS didn't seem slow to me. No matter what I threw at it.

Re:multicompartment isolation (-1)

Anonymous Coward | more than 8 years ago | (#15285000)

The Titanic didn't have true "compartments". The Titanic was built like an ice cube tray. Fill up the front two ice cube molds and then the water will start to spill over to the next two.

Re:multicompartment isolation (1)

maynard (3337) | more than 8 years ago | (#15285024)

We are talking about a metaphor here. Are you suggesting that a modern multicompartment ship is a good metaphor to use in designing operating system kernels, whereas the Titanic would have been bad?

Re:multicompartment isolation (0)

Anonymous Coward | more than 8 years ago | (#15285065)

>>Are you suggesting that a modern multicompartment ship is a good metaphor to use in designing operating system kernels, whereas the Titanic would have been bad?

Yes. The Titanic had the appearance of a compartmentalized system, but in reality it was monolithic since flooding in any part of the ship could eventually access any other part.

Re:multicompartment isolation (1)

maynard (3337) | more than 8 years ago | (#15285097)

OK. So let's get to the meat of the argument, which is that message-passing microkernels are slow by design and still prone to failure if a critical userland device-driver process dies. Further, please show me how security is improved. If a userland process (say, a shell) elevates privileges to root, how is this any different from a monolithic-kernel-based system?

Re:multicompartment isolation (1)

podperson (592944) | more than 8 years ago | (#15285039)

As the old joke goes, neither a real commercial microkernel operating system nor better development practices have been tried and found hard, they've both been found hard and left untried.

exactly. -nt (1)

maynard (3337) | more than 8 years ago | (#15285123)

. ..

Re:multicompartment isolation (2, Interesting)

Bush Pig (175019) | more than 8 years ago | (#15285047)

The Titanic wasn't actually _properly_ compartmentalised, as each compartment leaked at the top (unlike a number of properly compartmentalised ships built around the same time, which would have survived the iceberg).

Re:multicompartment isolation (1)

ArsenneLupin (766289) | more than 8 years ago | (#15285108)

Actually, the iceberg "sliced" open the Titanic along a large part of its length, impacting several compartments. That's what did it in. Had only one compartment been affected, it would have stayed afloat.

Re:multicompartment isolation (2, Insightful)

WindBourne (631190) | more than 8 years ago | (#15285071)

didn't save the Titanic.

It actually took hitting something like half the compartments to sink her. If it had hit just one less compartment, she would have stayed afloat. In contrast, one hole in a non-compartmentalized ship can sink it.

That is no different than an OS. In just about any monolithic OS, one bug is enough to sink them.

Re:multicompartment isolation (1)

Daniel Dvorkin (106857) | more than 8 years ago | (#15285073)

Well, one of the reasons the Titanic sank was because the compartmentalization was incomplete; once it took a certain amount of water, it was riding low enough that more water could come in over the bulkheads. There's probably a lesson here.

Re:multicompartment isolation (1)

carlislematthew (726846) | more than 8 years ago | (#15285110)

Another thing: If a driver goes down, then I'm generally fucked anyway. "Awesome, my SCSI driver died but my system is still running! Cool! Oh wait, I can't do a damn thing now. Time to reboot I suppose."

I can't imagine that many cases where I would want to continue on if the kernel went bad. Give me auto-save in every app, and that will be fine... It's not like OS's are so horrendously unreliable that this is a common occurrence anyway.

Re:multicompartment isolation (0)

Anonymous Coward | more than 8 years ago | (#15285183)

You're missing the point. A microkernel allows the OS to fail cleanly. A macrokernel can't detect all errors (e.g. a write through a freed pointer), so a macrokernel can continue running while in a broken state causing all sorts of potential damage.

When will they learn? (1)

Ginger Unicorn (952287) | more than 8 years ago | (#15285164)

Why didn't they build it with 6001 hulls!?!!

The thing is... (5, Interesting)

gowen (141411) | more than 8 years ago | (#15284940)

Container ships don't have to move cargo from one part of the ship to another, on a regular basis. You load it up, sail off, and then unload at the other end of the journey. If the stuff in the bow had to be transported to the stern every twelve hours, you'd probably find fewer enormous steel bulkheads between them, and more wide doors.

Re:The thing is... (1)

plover (150551) | more than 8 years ago | (#15285105)

If the stuff in the bow had to be transported to the stern every twelve hours, you'd probably find fewer enormous steel bulkheads between them, and more wide doors.

And more ships rolling over due to accidental mismanagement of the weight distribution. You make a great point, the "boat" metaphor becomes a lot more relevant when it becomes dynamic.

Re:The thing is... (1)

Tim C (15259) | more than 8 years ago | (#15285107)

That's true, but I guarantee that they'd have a fail-safe method to shut those doors, forming a watertight seal.

About the only conclusion we can draw is that on reflection, it was a pretty silly analogy.

Re:The thing is... (1)

leuk_he (194174) | more than 8 years ago | (#15285175)

And before the analogy police get us:

The claim for microkernels is that if one component dies it does not take down the entire ship. However, if the motor or the steering of a ship dies, the boat is still afloat, but pretty useless.

The same happens in a microkernel. The display driver may die and the rest of the ship continues. However, the system can be pretty useless without a display.

Restarting the display driver and letting the application behave correctly is left as an exercise for the student. (Replacing "display" with "file system" is exercise no. 2.)
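For the restarting half of the exercise, the usual shape is a supervisor loop, roughly what Minix 3's reincarnation server does. A user-space sketch using only POSIX calls (./display_driver is a hypothetical driver binary):

#include <sys/wait.h>
#include <unistd.h>
#include <stdio.h>

int main(void) {
    for (;;) {
        pid_t pid = fork();
        if (pid == 0) {
            /* child: become the driver process */
            execl("./display_driver", "display_driver", (char *)NULL);
            _exit(127);                /* exec failed */
        }
        int status;
        waitpid(pid, &status, 0);      /* block until the driver dies */
        fprintf(stderr, "driver died (status %d), reincarnating...\n", status);
        sleep(1);                      /* basic crash-loop damping */
        /* the hard part alluded to above: pending client requests
         * must now be replayed or failed gracefully */
    }
}

The fork/exec loop is the easy part; exercises 1 and 2 -- making clients survive the restart -- are where the real design work lives.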

Re: The thing is... (1)

Black Parrot (19622) | more than 8 years ago | (#15285144)

> Container ships don't have to move cargo from one part of the ship to another, on a regular basis. You load it up, sail off, and then unload at the other end of the journey. If the stuff in the bow had to be transported to the stern every twelve hours, you'd probably find fewer enormous steel bulkheads between them, and more wide doors.

Yeah, you've got to be careful with analogies.

When it comes to security, imagine aliens trying to take over your ship. The bulkheads might be useful for constricting them to one area, but what's absolutely essential is to make sure you can't operate the security doors from within each section. Otherwise the invaders just open them and spread at will.

My point being that partitioning privileged code into little chunks doesn't help anything; if you break into one you've got the whole system. But it is useful to limit the amount of privileged code to the minimum possible, and not mix in code that doesn't really need to be privileged. That way if the aliens break in, it's less likely to be in a compartment that has controls for the security bulkheads.

Re:The thing is... (4, Insightful)

crawling_chaos (23007) | more than 8 years ago | (#15285145)

Compartmentalization had very little to do with the advent of the container ship. Titanic was partially compartmented, but the bulkheads didn't run above the waterline, so the breach of several bow compartments led to overtopping of the remainder and the eventual loss of the ship. Lusitania and Mauretania were built with full compartments and even one longitudinal bulkhead, because the Royal Navy funded them in part for use as auxiliary troopships. Both would have survived the iceberg collision, which really does make one wonder what was in Lusitania's holds when those torpedoes hit her.

Compartments do interfere with efficient operation, which is why Titanic's designers only went halfway. Full watertight bulkheads and a longitudinal one would have screwed up the vistas of the great dining rooms and first-class cabins. It would also have made communication between parts of the ship more difficult, as watertight bulkheads tend to have a limited number of doors.

The analogy is actually quite apt: more watertight security leads to decreased usability, but a hybrid system (Titanic's) can only delay the inevitable, not prevent it, and nothing really helps when someone is lobbing high explosives at you by surprise.

My last time was... (0)

Anonymous Coward | more than 8 years ago | (#15284945)

When was the last time your TV set crashed or implored you to download some emergency software update from the Web?

I don't know if anyone else has the misfortune to use a Sagem digital TV box, but mine crashes all the time. It FREQUENTLY has to update its channel list, which stops me watching TV for 5-10 minutes.

I personally think that software is stable and secure at the moment; I haven't had crashes for a long time. The problem is the user's ability to install random programs which damage their privacy/security.

Resurrecting the dead (0, Redundant)

bradgoodman (964302) | more than 8 years ago | (#15284946)

I think time not only proved Tanenbaum wrong, but gave him a huge ass-whooping, and made him go into the kitchen and make him a pot-pie! Whatever "theoretical" basis may be true, the practical reality has told us otherwise. Below is a 1992 email debate between Torvalds and Tanenbaum. http://www.oreilly.com/catalog/opensources/book/appa.html [oreilly.com]

Talk about beating a dead horse (1)

daves (23318) | more than 8 years ago | (#15284947)

The Torvalds/Tanenbaum discussion has been done to death. Google for all that can be said on the subject.

Theory Vs. Practice (3, Interesting)

mikeisme77 (938209) | more than 8 years ago | (#15284958)

This sounds great in theory, but in reality it would be impractical. 2.5 million lines of code handling all of the necessary things the Linux kernel handles really isn't that bad. Adding compartmentalization into the mix will only make it more complicated and make it more likely for a hole to spring somewhere in the "hull" -- maybe only one compartment will be flooded then, but the hole may be harder to patch. I wouldn't rule compartmentalization out completely, but it should be understood that doing so will increase the complexity/size and not necessarily lower the size/complexity. And isn't Windows XP or Vista like 30 million lines of code (or more)? That's a LOT more than double the size of the Linux kernel...

Re:Theory Vs. Practice (3, Informative)

Shazow (263582) | more than 8 years ago | (#15284998)

wouldn't rule compartmentalization out completely, but it should be understood that doing so will increase the complexity/size and not necessarily lower the size/complexity.

Just to clear things up, my understanding is that Tanenbaum is advocating moving the complexity out of kernel space to user space (such as drivers). So you wouldn't be lowering the size/complexity of the kernel altogether, you'd just be moving huge portions of it to a place where it can't do as much damage to the system. Then the kernel just becomes one big manager which tells the OS what it's allowed to do and how.
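To illustrate the shape (a toy model only -- ordinary processes and pipes standing in for the message-passing kernel, all names invented):

#include <unistd.h>
#include <stdio.h>

struct request { int op; int arg; };
struct reply   { int status; int value; };

static void driver_loop(int rx, int tx) {
    struct request rq;
    while (read(rx, &rq, sizeof rq) == sizeof rq) {
        struct reply rp = { 0, rq.arg * 2 };   /* the "hardware" work */
        write(tx, &rp, sizeof rp);
    }
    /* a crash here kills this process only; the caller lives on */
}

int main(void) {
    int to_drv[2], from_drv[2];
    pipe(to_drv); pipe(from_drv);
    if (fork() == 0) {                         /* the user-space "driver" */
        close(to_drv[1]); close(from_drv[0]);
        driver_loop(to_drv[0], from_drv[1]);
        _exit(0);
    }
    close(to_drv[0]); close(from_drv[1]);
    struct request rq = { 1, 21 };
    struct reply rp;
    write(to_drv[1], &rq, sizeof rq);          /* the "system call" */
    read(from_drv[0], &rp, sizeof rp);         /* wait for the driver */
    printf("driver replied: %d\n", rp.value);  /* prints 42 */
    close(to_drv[1]);                          /* lets the driver exit */
    return 0;
}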

- shazow

Re:Theory Vs. Practice (3, Interesting)

mikeisme77 (938209) | more than 8 years ago | (#15285059)

But then you'd have issues with performance and such. The reason the current items are in the kernel to begin with has to do with the need for them to communicate easily with one another and to have system-override access to all resources. It does make his claim more valid, but it's still not a good idea in practice (unless your primary focus for an OS is security rather than performance). I also still think that this method would make the various "kernel" components harder to manage/patch -- I put kernel in quotes because the parts moved to userland would still be part of the kernel to me (even if not physically).

Re:Theory Vs. Practice (0)

Anonymous Coward | more than 8 years ago | (#15285100)

Then the kernel just becomes one big manager which tells the OS what it's allowed to do and how.
Does it also have pointy hair and frequently mention 'proactive multi-tiered synergy'?

Re:Theory Vs. Practice (5, Funny)

zhiwenchong (155773) | more than 8 years ago | (#15285139)

In theory, there is no difference between theory and practice. But, in practice, there is.

- Jan L.A. van de Snepscheut

Sorry, couldn't resist. ;-)

Have you hurd? (0)

Anonymous Coward | more than 8 years ago | (#15284961)

I Hurd on the grapevine that this guy isn't talking crap.

Screenshot of GNU Hurd (2, Funny)

njchick (611256) | more than 8 years ago | (#15285069)

$ ls /dev
Computer bought the farm

A compromise needs to be made. (5, Interesting)

Ayanami Rei (621112) | more than 8 years ago | (#15284969)

Most drivers don't need to run in kernel mode (read: any USB device driver)... or at least they don't need to run in response to system calls.
The kernel's hardware-manipulating parts should stick to providing higher-level APIs for most bus and system protocols, and provide async I/O for kernel and user space. If most of the kernel-mode drivers that power your typical /dev/dsp and /dev/input/mouse could be rewritten as kernel threads that dispatch requests to and from other kernel threads servicing the physical hardware in the system, you could provide fault isolation and state reconstruction in the face of crashes without incurring much overhead. Plus, user processes could drive these interfaces directly, so user-space programs could talk to hardware without needing to load dangerous, untrusted kernel modules (esp. from closed-source hardware vendors).

Or am I just crazy?

Yeah, but microkernels seem like taking things to an extreme that can be accomplished with other means.
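One way to read that proposal in code -- a toy sketch with POSIX threads, all names invented. Note this shows only the dispatch structure; real fault isolation would need separate address spaces or kernel support, since threads share memory:

#include <pthread.h>
#include <stdio.h>

#define QSIZE 8

static int queue[QSIZE], head, tail, count;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  nonempty = PTHREAD_COND_INITIALIZER;

static void *device_service(void *arg) {   /* the dedicated "kernel thread" */
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (count == 0) pthread_cond_wait(&nonempty, &lock);
        int req = queue[head]; head = (head + 1) % QSIZE; count--;
        pthread_mutex_unlock(&lock);
        if (req < 0) break;                    /* shutdown sentinel */
        printf("servicing request %d\n", req); /* poke the hardware here */
    }
    return NULL;
}

static void submit(int req) {              /* called by any other thread */
    pthread_mutex_lock(&lock);
    queue[tail] = req; tail = (tail + 1) % QSIZE; count++;
    pthread_cond_signal(&nonempty);
    pthread_mutex_unlock(&lock);
}

int main(void) {
    pthread_t svc;
    pthread_create(&svc, NULL, device_service, NULL);
    submit(1); submit(2); submit(-1);      /* two requests, then shutdown */
    pthread_join(svc, NULL);
    return 0;
}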

Proof is in the pudding (4, Interesting)

Hacksaw (3678) | more than 8 years ago | (#15284973)

I won't claim that Professor T is wrong, but the proof is in the pudding. If he could produce a kernel set up with all the bells and whistles of Linux, which is the same speed and demonstrably more secure, I'd use it.

But most design is about trade-offs, and it seems like the trade-off with microkernels is compartmentalism vs. speed. Frankly, most people would rather have speed, unless the security situation is just untenable. So far it's been acceptable to a lot of people using Linux.

Notably, if security is of higher import than speed, people don't reach for micro-kernels, they reach for things like OpenBSD, itself a monolithic kernel.

Re:Proof is in the pudding (3, Insightful)

WindBourne (631190) | more than 8 years ago | (#15285092)

OpenBSD's security strength has NOTHING to do with the kernel. It has to do with the fact that multiple trained eyes are looking over the code. The other thing you will note is that they do not include new code. It is almost all older code that has been proven on other systems (read: NetBSD, Apple, Linux, etc.). IOW, by being back several revs, they gain the advantage of everybody else's scrutiny as well as their own.

Re:Proof is in the pudding (0)

Anonymous Coward | more than 8 years ago | (#15285114)

You mean "the proof of the pudding is in the eating"

Re:Proof is in the pudding (1)

Karma Farmer (595141) | more than 8 years ago | (#15285163)

You mean "you can't eat your pudding and still have it."

My Kernel (0)

Anonymous Coward | more than 8 years ago | (#15284976)

Just stopped in from the near future.

My computer has nothing but microkernels. There is a small kernel running on each of my 32K processors. Each processor handles only a very, very small amount of the work of my "Operating System" (we don't call it that anymore).

My computer has about 1K processors devoted to video, about 1K processors devoted to my hyperlink (Internet successor), and many, many other processors devoted to every service and program I run. Presently my meter shows that I have 15K processors at "System Idle".

Just giving you a peek into the not-so-far future.

Future Boy.

Tanenbaum confirms it! (1)

zensonic (82242) | more than 8 years ago | (#15284987)

... Hurd lives!

Microkernels are useless on modern hardware (0)

Anonymous Coward | more than 8 years ago | (#15284992)

Microkernels might seem like a great idea at first, but on modern hardware they are as good as useless.
One problem is that when a driver crashes, the hardware is in an unknown state. Some hardware might be resettable; most hardware is not. So how are you going to resume the driver?
And then there is the even bigger problem of DMA-capable hardware. There is basically no way to limit the physical memory access of DMA devices such as PCI cards. You can DMA main memory to the PCI card's buffer (think of video cards, etc.) and then read it from there.

Perhaps this will be addressed in future hardware, but until then microkernels are definitely not worth the extra overhead they impose.

personally.. (1)

DoctorDyna (828525) | more than 8 years ago | (#15284995)

I would like to see a kernel that is completely compartmentalized, maybe even with some "at install time" configuration options. Break it down into a tree that you can pick and choose parts from. Maybe this is a bit off topic, but it would be nice to not have to install, then get kernel sources, strip, reconfigure, recompile, install, hope it works...

Doesn't this compartment thing mean some sort of "plugin" style?

Minix3 (2, Interesting)

wysiwia (932559) | more than 8 years ago | (#15285003)

A paper might show the concept, but only a real working sample will provide answers. Just wait until Minix3 (http://www.minix3.org/ [minix3.org]) is finished, and then let's see whether it's slower or not, and whether it's safer or not.

O. Wyss

We will see.. (0)

Anonymous Coward | more than 8 years ago | (#15285017)

I want to see some benchmarks comparing Linux vs. (insert the microkernel that Tanenbaum talks about). Uh... there are no benchmarks? There is no Tanenbaum microkernel to try? Sorry, I will continue using Linux then.
http://www.servicerules.com.ar/ [servicerules.com.ar]

"Bus/Kernel" (1)

bigattichouse (527527) | more than 8 years ago | (#15285022)

Always kinda wondered when microkernels would be abstracted out as a bus that processor/code modules literally plug into...

Sheesh.... (0)

Anonymous Coward | more than 8 years ago | (#15285028)

The Linux Kernel has modules.

End of Story.

Lessons on evolutionary theory for Andy... (2, Insightful)

csoto (220540) | more than 8 years ago | (#15285031)

Dearest Andy, please take some University courses on evolutionary biology. Perhaps you will take away a meaningful sense of the differences between "optimal" and "sufficient." I agree 100% with what you say. "Microkernels are better." That being said, this does nothing to diminish the viability of Linux, or any other monolithic system. Evolution only requires that a species retain sufficient qualities to ensure survivability (and therefore reproduction) in a given environment. "Perfection" never enters the equation (not even qualifiers such as "best" or "better" - just "good enough").

So, let's all agree with Andy, then go on using the best tools for our purposes. If that happens to be Linux (or even Windoze), then so be it...

Hindsight is 20/20 (3, Insightful)

youknowmewell (754551) | more than 8 years ago | (#15285061)

From the link to the Linus vs. Tanenbaum arguement:

"The limitations of MINIX relate at least partly to my being a professor: An explicit design goal was to make it run on cheap hardware so students could afford it. In particular, for years it ran on a regular 4.77 MHZ PC with no hard disk. You could do everything here including modify and recompile the system. Just for the record, as of about 1 year ago, there were two versions, one for the PC (360K diskettes) and one for the 286/386 (1.2M). The PC version was outselling the 286/386 version by 2 to 1. I don't have figures, but my guess is that the fraction of the 60 million existing PCs that are 386/486 machines as opposed to 8088/286/680x0 etc is small. Among students it is even smaller. Making software free, but only for folks with enough money to buy first class hardware is an interesting concept. Of course 5 years from now that will be different, but 5 years from now everyone will be running free GNU on their 200 MIPS, 64M SPARCstation-5."

Linux kernel is unstable? How about the BSDs? (0)

The_Isle_of_Mark (713212) | more than 8 years ago | (#15285078)

I don't profess to know everything about kernels, but I believe modularized kernels address this, right? Last I checked my kernel supports modules. Hmmm

Perhaps my kernel is the only one like this?

won't work in practice... (0)

Anonymous Coward | more than 8 years ago | (#15285091)

Using the car analogy, this would be akin to saying "This car is fault tolerant! Even if the steering wheel breaks, the engine will still be running!" Well, in theory, that is. In practice, you'll have to repair the car before you put it on the road again.

Same for microkernels.
What happens if there is a bug in the filesystem code? Sure, you could restart the "compartment" responsible for the filesystem, but then all programs that depend on files in that partition will have to be restarted. If the affected fs is the root "/" directory, probably all applications will have to be restarted.

And how is this different from a reboot?

Of course, there would be _some_ benefit to using microkernels, especially for less vital components such as the sound driver. But is it worth the trouble?

Besides, the inherent difficulty in real OSes is the number of possible hardware configurations. Some manufacturers produce buggy hardware. The OS / drivers have to work around those hardware bugs. So the code eventually becomes unmanageable due to the large number of workarounds, and then bugs will appear.

(Disclaimer: I know nothing about operating systems or hardware, I'm only relying on common sense. Correct me if I am wrong here.)

Interesting correlation (2, Interesting)

youknowmewell (754551) | more than 8 years ago | (#15285103)

The arguments for using monolithic kernels vs. microkernels are the same sort of arguments as for using C/C++ over languages like Lisp, Java, Python, Ruby, etc. I think maybe we're at a point where microkernels are now practical, same as with those high-level languages. I'm no kernel designer, but it seems reasonable that a monolithic kernel could be refactored into a microkernel.

Some old info on this! (0)

Anonymous Coward | more than 8 years ago | (#15285130)

If you'd like to see some old Linus vs Tanenbaum flames, check out here:

http://www.dina.dk/~abraham/Linus_vs_Tanenbaum.html#linus1 [www.dina.dk]

I'm gonna post this anonymously so I don't get modded up or modded down; I just thought some of you would find this interesting. There is a link that leads to another mirror of this in another post, but it was slashdotted (or something else) and I couldn't get to it. Check this out, it's an interesting read! (Please don't mod this offtopic; it's a great read on the subject at hand.)

Titanic (0, Redundant)

JamieKitson (757690) | more than 8 years ago | (#15285143)

The unsinkable kernel!

I've hurd it all before (0)

Anonymous Coward | more than 8 years ago | (#15285149)

These theoretical ramblings are pointless. Show me a microkernel that lets me do everything I can do on Linux now and I will listen.

friendly conversation (3, Insightful)

audi100quattro (869429) | more than 8 years ago | (#15285151)

That friendly conversation is hilarious. "Linus: ...linux still beats the pants of minix in almost all areas"

"Andy: ...I still maintain the point that designing a monolithic kernel in 1991 is a fundamental error. Be thankful you are not my student. You would not get a high grade for such a design :-)"

The most interesting part: "Linus: The very /idea/ of an operating system is to use the hardware features, and hide them behind a layer of high-level calls. That is exactly what linux does: it just uses a bigger subset of the 386 features than other kernels seem to do. Of course this makes the kernel proper unportable, but it also makes for a /much/ simpler design. An acceptable trade-off, and one that made linux possible in the first place."

Agreed with tanenbaum (1)

orbitalia (470425) | more than 8 years ago | (#15285169)

The only thing a kernel should do is scheduling and message passing; all other services should be separate objects that use that message passing (via the kernel) to reach other objects in userspace.
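To put a number on how small that contract is: the entire system-call surface of such a kernel could look something like the header below. This is a hypothetical sketch, not any real kernel's API, though QNX's actual primitives (MsgSend/MsgReceive/MsgReply) have exactly this flavor:

#include <stddef.h>

typedef int port_t;  /* a send/receive capability */

/* message passing -- the only way anything talks to anything */
int  msg_send(port_t dst, const void *req, size_t req_len,
              void *reply, size_t reply_len);        /* blocking RPC-style send */
int  msg_receive(port_t src, void *buf, size_t len); /* block until a request arrives */
int  msg_reply(port_t caller, const void *buf, size_t len);

/* scheduling -- the only other thing the kernel does */
int  thread_create(void (*entry)(void *), void *arg, int priority);
void thread_yield(void);

/* no open(), no read(), no socket(): files, network and drivers are
 * all userspace servers you reach through msg_send() */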

I don't buy the lack-of-performance argument; QNX seems to work just fine performance-wise (Cisco's top-of-the-line CRS-1 router uses QNX as the basis of its router software). The 'Q' in QNX does stand for Quick, after all ;)

There are a lot of benefits, not only to stability and possibly security, that you get from microkernels. One great thing is that applications can be written along the same lines (using the inherent message passing), and large applications therefore tend to be truly object-oriented, with well-defined interfaces, if written correctly. It is a quite nice benefit to be able to patch part of a running program without restarting it (high-availability benefits here). A badly written driver should not be able to take down a kernel. Also, with the advent of multicore, microkernel systems make a lot more sense and are much, much easier to scale up with extra cores. (Speaking from years of experience coding on microkernel systems here.)

Microkernels also provide really small install images, it is really easy to cut away all unnecessary drivers and subsystems in a microkernel system.

There are lots of benefits, and I agree that with the current state of hardware the kernel should be looked at again.