
Multi-Server Microkernel OS Genode 12.11 Can Build Itself

Unknown Lamer posted about a year and a half ago | from the and-they-say-microkernels-won't-work dept.

Operating Systems

An anonymous reader wrote in with a story on OS News about the latest release of the Genode Microkernel OS Framework. Brought to you by the research labs at TU Dresden, Genode is based on the L4 microkernel and aims to provide a framework for writing multi-server operating systems (think the Hurd, but with even device drivers as userspace tasks). Until recently, the primary use of L4 seems to have been as a glorified hypervisor for Linux, but now that's changing: the Genode example OS can build itself on itself: "Even though there is a large track record of individual programs and libraries ported to the environment, those programs used to be self-sustaining applications that require only little interaction with other programs. In contrast, the build system relies on many utilities working together using mechanisms such as files, pipes, output redirection, and execve. The Genode base system does not come with any of those mechanisms, let alone the subtle semantics of the POSIX interface as expected by those utilities. Being true to microkernel principles, Genode's API has a far lower abstraction level and is much more rigid in scope." The detailed changelog has information on the huge architectural overhaul of this release. One thing this release features that Hurd still doesn't have: working sound support. For those unfamiliar with multi-server systems, the project has a brief conceptual overview document.


No plans for LLVM (4, Informative)

Bogtha (906264) | about a year and a half ago | (#42161543)

For anybody wondering [osnews.com]:

Switching from GCC to LLVM is not planned. From what I gathered so far, LLVM is pretty intriguing and I am tempted to explore it. But on the other hand, we are actually quite happy with our current GCC-based tool chain.

Re:No plans for LLVM (0, Troll)

Anonymous Coward | about a year and a half ago | (#42161679)

Why the fuck would you use anything other than GCC? I've been using it for years, and it just keeps getting better and better.

Re:No plans for LLVM (1, Interesting)

loufoque (1400831) | about a year and a half ago | (#42161721)

In particular, because it is very rigid in the tools it needs to work with, making it more complicated to have a full working toolchain on exotic platforms.

Clang/LLVM can actually cross-compile to several different architectures with the same binary -- something that is simply impossible with GCC.

Re:No plans for LLVM (1)

loufoque (1400831) | about a year and a half ago | (#42161725)

A bad copy/paste happened here, sorry.

Re:No plans for LLVM (3, Insightful)

Anonymous Coward | about a year and a half ago | (#42161871)

I'd rather concentrate on getting server code running natively no matter the toolchain used.

"We have a microkernel that can compile with LLVM" is not as cool as "run your apache pg and php/java/whatever in a microkernel built with security and accountability in mind".

Re:No plans for LLVM (2, Interesting)

Entrope (68843) | about a year and a half ago | (#42162009)

Microkernels are long on the "security and accountability" hype and somewhat short on reality. Sure, the services provided by the microkernel are less likely to have bugs or holes than a monolithic kernel -- but that's because the microkernel doesn't provide most of the monolithic kernel's functionality. Once you roll in all the device drivers, network stack, and the rest, the microkernel-based system is generally at least as bloated and typically less performant.

Re:No plans for LLVM (2, Interesting)

HornWumpus (783565) | about a year and a half ago | (#42162501)

Come back when you get the point. Kernel space is shared memory; a kernel-mode component can crash the system and leave no trace of what did it. Like pre-OS X Mac OS, or DOS.

And never say or type 'performant' again. It makes you look like a douche. 'less performant' == 'slower'.

Everybody knows microkernels are slower. They are more stable. Misbehaving drivers are identified quickly. They usually have fewer issues, and the issues they have don't take the whole system down.

That said, count the context switches needed to draw a single pixel.

Re:No plans for LLVM (3, Interesting)

Entrope (68843) | about a year and a half ago | (#42163953)

I would say that you're the one who needs to get the point. Major components that crash will still generally leave the system in a state that is difficult or impractical to diagnose or recover from. If your disk driver or filesystem daemon crashes, you don't have many ways to log in or start a replacement instance. If your network card driver or TCP/IP stack crashes, you still need a remote management console to fix that web server. In the meantime, people with modern kernels have figured out how to make those monolithic kernels still fairly usable in spite of panics or other corruption. The only reason that microkernels look better on the metrics you claim is that they support less hardware and use less of the hardware's complex (high-performance) features.

Re:No plans for LLVM (3, Insightful)

FrangoAssado (561740) | about a year and a half ago | (#42166395)

a kernel-mode component can crash the system and leave no trace of what did it. Like pre-OS X Mac OS, or DOS.

... and Linux, NT, and the Mac OS X kernel (XNU).

NT and the Mac OS X kernels are interesting cases: they started as microkernels, but soon moved on to "hybrid" approaches that keep a lot of drivers inside kernel space.

Everybody knows microkernels are slower. They are more stable. Misbehaving drivers are identified quickly. They usually have fewer issues, and the issues they have don't take the whole system down.

That sounds great in theory, but if a disk or network driver crashes on a production server, how much do you care that the rest of the system is still working? These things must not crash, period -- if they do crash, the state of the rest of the system is usually irrelevant.

Re:No plans for LLVM (3, Interesting)

drinkypoo (153816) | about a year and a half ago | (#42169747)

if a disk or network driver crashes on a production server, how much do you care that the rest of the system is still working? These things must not crash, period -- if they do crash, the state of the rest of the system is usually irrelevant.

That's not really true. The storage driver can ask the disk driver which blocks (or whatever you call them) have been successfully written, and not retire them from the cache until they have been recorded. And hopefully one day we will get MRAM, and then we'll have recoverable ramdisks even better than the ones we had on the Amiga -- where they could persist through a warm boot, simply getting mapped again. So you could load your OS from floppy into RAM, but you'd only have to do it once per cold boot, which was nice because the Amiga crashed a lot, having no memory protection...

This conversation is especially interesting because the Amiga was a microkernel-based system with user-mode drivers, which is much of how they solved hardware autoconfiguration; you could include a config rom and the OS would load (in fact, run) your driver process from it. This was enough at least for booting, and then you could load any updated drivers which can kick the old driver out of memory. And now we have reached the limits of what I know about it :)

If the network card driver crashes, the same thing is true. The network server knows which packets have been ACKed and which ones haven't, and it knows the sequence number of the last packet it received. The driver is restarted, some retransmits are requested, and everything proceeds as normal. The only case in which the user even has to notice is when the driver is crashing so fast that it can't do any useful work before it does so.
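
A minimal sketch of the bookkeeping being described, assuming a hypothetical WriteCache sitting between the filesystem and a restartable disk driver (the names and interface are made up for illustration; this is not Genode's or anyone's actual API):

    #include <cstdint>
    #include <map>
    #include <utility>
    #include <vector>

    // Hypothetical sketch: the storage server keeps every dirty block until
    // the disk driver confirms it is on stable media. If the driver crashes
    // and is restarted, everything still pending is simply resubmitted.
    struct Block { std::uint64_t lba; std::vector<std::uint8_t> data; };

    class WriteCache {
        std::map<std::uint64_t, Block> pending_; // keyed by logical block address
    public:
        void submit(Block b) {                   // queue a block for the driver
            pending_[b.lba] = std::move(b);
        }
        void acknowledge(std::uint64_t lba) {    // driver reports a durable write
            pending_.erase(lba);
        }
        std::vector<Block> replay() const {      // called after a driver restart
            std::vector<Block> out;
            for (const auto& entry : pending_) out.push_back(entry.second);
            return out;
        }
    };

The same pattern covers the network case below: keep un-ACKed packets around and resend them once the driver comes back.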

Re:No plans for LLVM (0)

Anonymous Coward | about a year and a half ago | (#42161757)

Why the fuck would you use anything other than GCC? I've been using it for years, and it just keeps getting better and better.

Because it's a bloated piece of shit? Try out pcc or LLVM sometimes. Sure, they don't have all the optimizations GCC has, but programs build significantly faster. Also, I'm pretty sure GCC is the only compiler that requires the better part of an afternoon on modern hardware to build itself.

Re:No plans for LLVM (1)

loufoque (1400831) | about a year and a half ago | (#42161767)

LLVM optimizes better than GCC in quite a few cases, in particular since 4.0

Re:No plans for LLVM (0)

Anonymous Coward | about a year and a half ago | (#42162109)

By that argument, they should use Intel's C++ compiler (it has non-commercial licenses available) because it produces more optimized code than GCC or LLVM.

It's usually better to go with what works at first, and think about optimization once the basics are done.

Re:No plans for LLVM (0)

IQgryn (1081397) | about a year and a half ago | (#42161797)

Also, I'm pretty sure GCC is the only compiler that requires the better part of an afternoon on modern hardware to build itself.

With parallel make, it only takes me about 20 minutes on a midrange 6-core system. Look at the -j or --jobs option. I usually use 1.5 times the number of cores for the number of jobs.

Re:No plans for LLVM (2, Insightful)

loufoque (1400831) | about a year and a half ago | (#42161865)

There are no mid-range 6-core systems.
Mid-range is dual core with hyperthreading.

Re:No plans for LLVM (3, Informative)

serviscope_minor (664417) | about a year and a half ago | (#42162107)

That's just not correct.

A Phenom II X6, especially the lower-clocked ones, is certainly not high end any more.

A dual core system is now certainly low end, given even netbooks have dual core processors.

Plenty of ultrabooks come with quad core processors these days, and they are not especially high speed machines, trading speed for power consumption and size.

Re:No plans for LLVM (0)

loufoque (1400831) | about a year and a half ago | (#42162137)

AMD does not make any good processors anymore.

Intel i3 is the low end, i5 the middle end, and i7 the high end.
i5 are usually dual core processors, while i7 are usually quad core.

Re:No plans for LLVM (0)

Anonymous Coward | about a year and a half ago | (#42162331)

No, ALL i3 are dual-core with FOUR threads (hyperthreading); exceptions are mobile CPUs, but real developers use workstations, not laptops (low-range costs around $100/CPU).
ALL I5 are quad (real quad, no hyperthreading).
ALL I7 are eight core (hyperthreading 4+4) and they start from less than $300, so I would say this is mid-range and not I5.

That is for the last generation (Ivy Bridge).
reference: http://en.wikipedia.org/wiki/LGA_2011#Desktop [wikipedia.org]

If you go to the previous Sandy Bridge generation you can find some high-end (LGA-2011) processors with 6 cores/12 threads ($550), or you can even use Xeon CPUs with 8 cores/16 threads ($1000+ CPUs); those can be used in 2/4/8-CPU configurations, but that gets expensive fast.
reference: http://en.wikipedia.org/wiki/LGA_2011#Desktop [wikipedia.org]
and http://en.wikipedia.org/wiki/LGA_2011#Server [wikipedia.org]

Re:No plans for LLVM (0)

Anonymous Coward | about a year and a half ago | (#42163541)

These caps, they do not mean what you think they mean.

Posted from my dual-core, 4 thread i7-2720

Re:No plans for LLVM (1)

LordLimecat (1103839) | about a year and a half ago | (#42167537)

For the record, I just built my home computer with 8 cores and 32GB of RAM for around $450-500. By buying AMD I also get AES acceleration, ECC support, turbo clocking, all of the virtualization features, and a number of other features that simply aren't available on Intel till you hit the i5/i7 level.

If you can show me how I could get 8 cores or the equivalent for heavily nested virtualization labs (ESXi / HyperV on top of Workstation) on the Intel platform, I would be interested; however, everything I saw indicated that I was looking at about $200-400 more for "usually faster, but not necessarily on VMWare". Keep in mind that hyperthreading isn't the same as AMD's 8 cores, particularly when it comes to virtualization.

Re:No plans for LLVM (1)

drinkypoo (153816) | about a year and a half ago | (#42169871)

That's a nice amount of RAM. I am maybe $700 into my PC, but it's on its second processor (went from a Phenom II X3 to an X6) and it's got an HDD and an SSD and two optical drives, and I started it back when a Phenom II X3 720 was a pretty slick processor. And I have a whopping 8GB.

I went to an X6 because single-thread performance wasn't really my limiting factor. Maybe that's because I run Linux and I don't play the latest greatest masturbatest games, and I only have a 1680x1050 display. But really, I haven't noticed a decrease in single-thread performance (I didn't bother benchmarking), and meanwhile compiles and media encoding have gotten noticeably faster.

Hyperthreading is cool for sporadic workloads where none of your cores are actually pegged. When it comes time to do multimedia encoding it's not that exciting. For a hundred bucks I got six real cores and still get pretty decent single-thread boosted performance... and that's not just single-thread, it's still three cores at higher speed!

Could I have had more performance? Yes, but I would have had to spend at least twice as much on the motherboard. Remember, this was quite some time ago. For an intel-chipset board as full-featured as my AMD-chipset board and from as reputable a brand (gigabyte... not the best, not the worst) I was having a hard time finding anything even as cheap as $200, and I paid $100 for my GA-MA770-UD3P 1.0 which had everything but USB3... which I just added via a NEC-chipset card sourced from eBay for $12 (2 external, header internal, got a card reader coming from DX to plug into the header.)

I imagine that if I still considered myself a hardcore gamer, and I was playing new games, I'd have an intel processor optimized for single-thread performance. Instead, I am a dabbler, and I have the system with the best price:performance ratio for dabbling. (And my $100 240 GT is still beating the budget cards, at least... Four times the fill rate of, for example, a GT 610. But that's $10 with MIR.)

Re:No plans for LLVM (1)

bill_mcgonigle (4333) | about a year and a half ago | (#42162371)

AMD does not make any good processors anymore.
Intel i3 is the low end, i5 the middle end, and i7 the high end.

My Phenom x2 server cluster >> your tautology.

The 6-core Phenom is currently one of the maxima in the price/power/performance 3-space. All eight corners of the 'cube' have valid use cases.

Re:No plans for LLVM (1)

loufoque (1400831) | about a year and a half ago | (#42162677)

Server cluster? I thought we were talking about average middle end desktop/workstation computers.

Re:No plans for LLVM (1)

bill_mcgonigle (4333) | about a year and a half ago | (#42163035)

Server cluster? I thought we were talking about average middle end desktop/workstation computers.

Yeah, a decent node is under $500 by using "desktop" hardware. The beauty of a redundant architecture is that "server-quality" hardware isn't that important anymore. I know how to spend 10x that on a really fast server, but most workloads don't justify the added expense.

Re:No plans for LLVM (0)

Anonymous Coward | about a year and a half ago | (#42162613)

i3, i5 or i7 for a server? Are you out of your mind? Even for my desktop computer, I wouldn't go below a xeon.

A phenom x6 is slower for video games than an i5 or i7, but for a server, they are a better choice than an i5 or i7 (which should never go into a server).

Re:No plans for LLVM (0)

Anonymous Coward | about a year and a half ago | (#42163675)

You do realize that each desktop iteration translates almost directly to the servers...

Re:No plans for LLVM (0)

Anonymous Coward | about a year and a half ago | (#42163849)

Yes, but missing things like VT-d and support for ECC memory. I'd bet that they're the same hardware, but burned with a small feature ROM disabling the extras unless you fork cash over to Intel for artificial market segmentation.

Re:No plans for LLVM (1)

LordLimecat (1103839) | about a year and a half ago | (#42167549)

There's nothing wrong with artificial market segmentation. However, it IS the reason I went with AMD, since there's no reason to burn $300 for processor features that every AMD processor comes with.

Re:No plans for LLVM (1)

Belial6 (794905) | about a year and a half ago | (#42162775)

My son's AMD A10 laptop beats my i5 laptop easily. They cost us about the same.

Re:No plans for LLVM (1)

ByOhTek (1181381) | about a year and a half ago | (#42167579)

Given the post you made (the GP to the post I'm replying to), you just said there are low-end 6-core systems.

So... are you saying 6 core is low end, but dual core + HT is mid range?

AMD reaches into the mid range, and they usually have a low- to mid-range CPU that is worth the money. Where AMD fails is single-core/CPU performance; they still usually scale better than Intel.

Re:No plans for LLVM (1)

loufoque (1400831) | about a year and a half ago | (#42169951)

You just said there are low end 6-core systems.

Where did I say that? Because I didn't.

Re:No plans for LLVM (2)

Gerald (9696) | about a year and a half ago | (#42161843)

Because GCC doesn't have a static analyzer (you do analyze your code, right?). LLVM's analyzer (Clang's scan-build) is very good. Visual C++'s analyzer was crap a few releases ago, but even it is getting better. I like GCC, but it has a lot of catching up to do in this regard. And no, "-Wall" isn't nearly the same.

Re:No plans for LLVM (0)

Anonymous Coward | about a year and a half ago | (#42163177)

What you fail to mention is that the static analyzer for C++ is somewhere between nonexistent and unusable, making it useless for the project at hand.

Re:No plans for LLVM (0)

Anonymous Coward | about a year and a half ago | (#42164797)

Valgrind.

Re:No plans for LLVM (1)

Gerald (9696) | about a year and a half ago | (#42164879)

...is a really good dynamic analyzer. Again, not nearly the same.

1990 called (0, Flamebait)

Anonymous Coward | about a year and a half ago | (#42161627)

It wants its kernel development philosophy back.

Re:1990 called (0)

Anonymous Coward | about a year and a half ago | (#42161699)

They also want that joke back.

Re:1990 called (0)

Anonymous Coward | about a year and a half ago | (#42162419)

Cannot make a time machine with 1990 kernel development philosophy.

user space drivers (2)

phantomfive (622387) | about a year and a half ago | (#42161775)

Linux lets you write drivers in user space if you want to. A lot of scanner drivers are written in userspace. So if you're willing to take the performance hit, there is no reason not to do so, even in Linux.

Re:user space drivers (1)

johnw (3725) | about a year and a half ago | (#42161895)

Linux lets you write drivers in user space if you want to. A lot of scanner drivers are written in userspace. So if you're willing to take the performance hit, there is no reason not to do so, even in Linux.

Perhaps the difference here is that Linux lets you put them in userspace, but this system (like the GEC 4000 series from the '70s) has them all like that?

Why does putting a driver in user space require a performance hit?

Re:user space drivers (1)

phantomfive (622387) | about a year and a half ago | (#42161927)

Why does putting a driver in user space require a performance hit?

It has in every microkernel attempt so far, or do you have a way to do it that no one else has thought of?

Re:user space drivers (1)

johnw (3725) | about a year and a half ago | (#42161959)

Why does putting a driver in user space require a performance hit?

It has in every microkernel attempt so far, or do you have a way to do it that no one else has thought of?

I meant the question to be taken literally - that is, not as an assertion that it doesn't or shouldn't, but as a request for an explanation of why it does.

Re:user space drivers (3, Interesting)

phantomfive (622387) | about a year and a half ago | (#42162003)

I believe it's because you need to verify a lot of things that come from user space into kernel space. This makes things like DMA and port communication somewhat more difficult.

Re:user space drivers (1)

bill_mcgonigle (4333) | about a year and a half ago | (#42162995)

I believe it's because you need to verify a lot of things that come from user space into kernel space. This makes things like DMA and port communication somewhat more difficult.

Right, though to be fair, implementing a microkernel on hardware that doesn't do anything to make microkernels efficient tends to be inefficient. Surprising, of course.

I wonder what people are doing with VPro and microkernels these days (they must be, but I admit to having stopped paying attention to microkernels a decade ago).

Re:user space drivers (1)

phantomfive (622387) | about a year and a half ago | (#42163733)

Does it matter much if you put the slowdown in hardware or software? You're still going to have to deal with context switching.

Re:user space drivers (1)

bill_mcgonigle (4333) | about a year and a half ago | (#42163761)

Does it matter much if you put the slowdown in hardware or software? You're still going to have to deal with context switching.

Apparently so - I hear from Xen and VMWare folks that VPro-enabled resource sharing is much faster than doing it in the hypervisor.

Re:user space drivers (1)

phantomfive (622387) | about a year and a half ago | (#42163927)

Hmmm it would be interesting to see if a microkernel could take advantage of it.

Re:user space drivers (1)

ByOhTek (1181381) | about a year and a half ago | (#42167599)

It's still reducing the time overhead (and probably heat overhead, since it's a less generic mechanism). It's still there as opposed to... not there. There's just less of it.

Re:user space drivers (1)

Elbereth (58257) | about a year and a half ago | (#42162165)

Wikipedia [wikipedia.org] has a pretty decent overview. It's actually kind of interesting and not too technical. Basically, it involves more system calls. Think of it as having more middle men involved in the process. Early microkernels implemented rather inefficient designs, leading people to believe that the concept itself was inefficient. Newer evidence reveals that it isn't quite that bad, and that it's possible to be very competitive with monolithic kernels.

My own understanding of the whole thing is rather shallow, so I can't really get very technical. I've always been somewhat interested in this sort of thing, but not so much that I was willing to pay rapt attention in my compsci classes.

Very Simple (3, Informative)

Giant Electronic Bra (1229876) | about a year and a half ago | (#42162753)

All interrupts in processors are handled in a single context, the 'ring 0' or 'kernel state'. Device drivers (actual drivers, that is) handle interrupts; that's their PURPOSE. When the user types a keystroke, the keyboard controller generates an interrupt, which FORCES a CPU context switch to kernel state and the context established for handling interrupts (the exact details depend on the CPU and possibly other parts of the specific architecture; in some systems there is just a general interrupt handling context and software does a bunch of the work, in others the hardware will set up the context and vector directly to the handler).

So, just HAVING an interrupt means you've had one context switch. In a monolithic kernel that could be the only one: the interrupt is handled and normal processing resumes with a switch back to the previous context or something similar. In a microkernel the initial dispatching mechanism has to determine what user-space context will handle things and do ANOTHER context switch into that user state, doubling the number of switches required. Not only that, but in many cases something like I/O will also require access to other services or drivers. For instance a USB bus will have a USB driver, but layered on top of that are HID drivers, disk drivers, etc., sometimes 2-3 levels deep (i.e. a USB storage subsystem will emulate SCSI, so there is an abstract SCSI driver on top of the USB driver and then logical disk storage subsystems on top of them). In a microkernel it is QUITE likely that as data and commands move up and down through these layers each one will force a context switch, and they may well also force some data to be moved from one address space to another, etc.

Microkernels will always be a tempting concept; they have a certain architectural level of elegance. OTOH in practical terms they're simply inefficient, and most of the benefits remain largely theoretical. While it is true that dependencies and couplings COULD be reduced and security and stability COULD improve, the added complexity generally results in less reliability and less provable security. Interactions between the various subsystems remain, they just become harder to trace. So far at least, monolithic kernels have proven to be more practical in most applications. Some people of course maintain that the structure of OSes running on systems with large numbers of (homogeneous or heterogeneous) cores will more closely resemble microkernels than standard monolithic ones. Of course work on this sort of software is still in its infancy, so it is hard to say if this may turn out to be true or not.
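
To make the doubling concrete, here is a toy C++ model of the USB-storage path described above; each call stands in for one crossing into a separate protection domain (the layer names are purely illustrative, not any real system's components):

    #include <cstdio>

    // Toy model: each hop between protection domains costs one context switch.
    // The layers mirror the USB -> SCSI -> block stacking described above.
    static int context_switches = 0;

    static void enter_domain(const char* name) {
        ++context_switches;                    // one switch per domain crossing
        std::printf("switch %d: into %s\n", context_switches, name);
    }

    int main() {
        enter_domain("interrupt dispatcher");  // the hardware interrupt itself
        enter_domain("usb driver task");       // user-space bus driver
        enter_domain("scsi emulation task");   // abstract SCSI layer on top
        enter_domain("block storage task");    // logical disk subsystem
        enter_domain("client process");        // finally, deliver the data
        std::printf("one read: %d domain crossings"
                    " (a monolithic kernel does roughly 2)\n",
                    context_switches);
    }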

Re:Very Simple (3, Informative)

david.given (6740) | about a year and a half ago | (#42164307)

Most operating systems these days don't run device driver interrupt handling code directly in the interrupt handler --- it's considered bad practice, as not only do you not know what state the OS is in (because it's just been interrupted!), which means you have an incredibly limited set of functionality available to you, but also while the interrupt handler's running some, if not all, of your interrupts are disabled.

So instead what happens is that you get out of the interrupt handler as quickly as possible and delegate the actual work to a lightweight thread of some description. This will usually run in user mode, although it's part of the kernel and still not considered a user process. This thread is then allowed to do things like wait on mutexes, allocate memory, etc. The exact details all vary according to operating system, of course.

This means that you nearly always have an extra couple of context switches anyway. The extra overhead in a well designed microkernel is negligible. Note that most microkernels are not well designed.
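
A rough sketch of that pattern, using ordinary C++ threads as a stand-in for a kernel's deferred-work mechanism (softirq, DPC, tasklet -- the names vary by OS); a real top half could not take a blocking lock, so this only shows the shape of the idea:

    #include <chrono>
    #include <condition_variable>
    #include <mutex>
    #include <queue>
    #include <thread>

    // "Get out of the interrupt handler fast": the top half only records the
    // event and wakes a worker; the bottom half runs as a normal schedulable
    // thread that may block, allocate memory, and take locks.
    std::mutex m;
    std::condition_variable cv;
    std::queue<int> pending_irqs;

    void top_half(int irq) {                  // stand-in for the real handler
        { std::lock_guard<std::mutex> lk(m); pending_irqs.push(irq); }
        cv.notify_one();                      // wake the bottom half
    }

    void bottom_half() {
        for (;;) {
            std::unique_lock<std::mutex> lk(m);
            cv.wait(lk, [] { return !pending_irqs.empty(); });
            int irq = pending_irqs.front();
            pending_irqs.pop();
            lk.unlock();
            (void)irq;                        // ...real driver work goes here...
        }
    }

    int main() {
        std::thread(bottom_half).detach();
        top_half(5);                          // simulate one device interrupt
        std::this_thread::sleep_for(std::chrono::milliseconds(50));
    }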

L4 is well designed. It is frigging awesome. One of its key design goals was to reduce context switch time --- we're talking 1/30th the speed of Linux here. I've seen reports that Linux running on top of L4 is actually faster than Linux running on bare metal! L4 is a totally different beast to microkernels like Mach or Minix, and a lot of microkernel folklore simply doesn't apply to L4.

L4 is ubiquitous in the mobile phone world; most featurephones have it, and at least some smartphones have it (e.g. the radio processor on the G1 runs an L4-based operating system). But they're mostly using it because it's small (the kernel is ~32kB), and because it provides excellent task and memory management abstractions. A common setup for featurephones is to run the UI stack in one task and the real-time radio stack in another task, with the UI stack's code dynamically paged from a cheap compressed NAND flash setup --- L4 can do this pretty much trivially.

This is particularly exciting because it looks like the first genuinely practical L4-based desktop operating system around. There have been research OSes using this kind of security architecture for decades, but this is the first one I've seen that actually looks useful. If you haven't watched the LiveCD demo video [youtube.com], do so --- and bear in mind that this is from a couple of years ago. It looks like they're approaching the holy grail of desktop operating systems, which is to be able to run any arbitrary untrusted machine code safely. (And bear in mind that Genode can be run on top of Linux as well as on bare metal. I don't know if you still get the security features without L4 in the background, though.)

This is, basically, the most interesting operating system development I have seen in years.

Re:Very Simple (1)

Giant Electronic Bra (1229876) | about a year and a half ago | (#42165593)

Crap, it may be a holy grail for x86 but only because x86 virtualization sucks so bad. Go run your stuff on a 360/Z/P series architecture and you've been able to do this stuff since the 1960s because you have 100% airtight virtualization.

Of course ANY such setup, regardless of hardware, is only as good as the hypervisor. It is still not really clear what is actually gained. Truthfully no degree of isolation is bullet proof because whatever encloses it can look at it and there will ALWAYS be some set of inputs to that wrapping layer that will subvert it.

In any case, I'm not up on L4. In my OS design days if you wanted that level of performance you simply ran in a single flat address space with relocatable code (OS9/68k for example) and then ran your security code in a separate processor.

Re:Very Simple (0)

Anonymous Coward | about a year and a half ago | (#42167355)

Most operating systems these days don't run device driver interrupt handling code directly in the interrupt handler --- it's considered bad practice, as not only do you not know what state the OS is in (because it's just been interrupted!), which means you have an incredibly limited set of functionality available to you, but also while the interrupt handler's running some, if not all, of your interrupts are disabled.

So instead what happens is that you get out of the interrupt handler as quickly as possible and delegate the actual work to a lightweight thread of some description.

True so far.

This will usually run in user mode, although it's part of the kernel and still not considered a user process.

But this part is nonsense (at least on Windows, Linux, and FreeBSD; I don't know enough about other operating systems to sensibly comment on those). Microkernels run driver-related stuff in user mode; monolithic kernels run driver-related stuff in kernel mode. That's kind of the whole point. In a monolithic design, you stay on a single set of page tables, and keep the privileged bit set, for the entire duration of the device operation, whereas in a microkernel you have to keep toggling the bit and switching page tables. That's why microkernels do a better job of fault isolation, and also why they tend to be slower. You really can't have one without the other (at least not on hardware where switching protection contexts has non-trivial cost).

One of its key design goals was to reduce context switch time --- we're talking 1/30th the speed of Linux here.

Sure, if you completely ignore the cost of blowing the TLB and trashing most of the processor caches. Those costs tend to be harder to measure than the direct costs of the instructions involved in the context switch, and so most microkernel papers tend to just assume they're free, but when you do measure them they usually end up as >75% of the total cost of the switch. On more accurate measures L4 context switches usually come out at a few tens of percent quicker than Linux ones, which just isn't enough to pay for having to do far more of them.

Also, anyone who thinks L4 provides an excellent memory management abstraction has clearly never actually tried to use it. L4's memory abstraction is certainly fast, in the sense that it does the minimum amount of work to ensure safety, but it's not exactly fun to work with.

Re:Very Simple (0)

Anonymous Coward | about a year and a half ago | (#42169911)

Which L4 implementation are you talking about in particular? There are quite a few of them.

Re:user space drivers (1)

Ignacio (1465) | about a year and a half ago | (#42161997)

Why does putting a driver in user space require a performance hit?

A context switch between processes in the same privilege level happens relatively quickly, but a context switch across privilege levels (e.g. calling user code from the kernel or vice versa) is much slower due to the mechanism involved.

Re:user space drivers (1)

Giant Electronic Bra (1229876) | about a year and a half ago | (#42162777)

ALL context switches are expensive. The primary effect of a context switch is that each context has its own memory address layout. When you switch from one to another, your TLB (translation lookaside buffer) is invalidated. This creates a LOT of extra work for the CPU, as it cannot rely on cached data (addresses are different; the same data may not be in the same location in the new context), with consequent cache invalidation, etc. It really doesn't matter if it is a 'user' or 'kernel' level context; the mechanics are all the same.
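
One crude way to put a number on this is to force switches and time them, for example by bouncing a byte between two processes over pipes. A POSIX-only sketch (each round trip costs at least two context switches, and the measured time lumps in the TLB and cache effects just described):

    #include <chrono>
    #include <cstdio>
    #include <unistd.h>

    // Crude context-switch benchmark: one byte ping-ponged between a parent
    // and child process through a pair of pipes.
    int main() {
        int ping[2], pong[2];
        if (pipe(ping) != 0 || pipe(pong) != 0) return 1;
        const int rounds = 100000;
        char ch = 'x';
        if (fork() == 0) {                    // child: echo every byte back
            for (int i = 0; i < rounds; ++i) {
                read(ping[0], &ch, 1);
                write(pong[1], &ch, 1);
            }
            _exit(0);
        }
        auto t0 = std::chrono::steady_clock::now();
        for (int i = 0; i < rounds; ++i) {    // parent: send, wait for echo
            write(ping[1], &ch, 1);
            read(pong[0], &ch, 1);
        }
        auto dt = std::chrono::steady_clock::now() - t0;
        std::printf("%.0f ns per round trip (>= 2 context switches)\n",
                    std::chrono::duration<double, std::nano>(dt).count() / rounds);
    }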

Re:user space drivers (1)

Unknown Lamer (78415) | about a year and a half ago | (#42163891)

Luckily, virtualization requirements have led to tagged TLBs becoming available on at least x86. I think the number of processes that can share the TLB currently is fairly limited, but it's a start.

Re:user space drivers (1)

Giant Electronic Bra (1229876) | about a year and a half ago | (#42165513)

Yeah, this is true. I think if you were to start at zero and design a CPU architecture with a microkernel specifically in mind, some clever things would come out of that and help level the playing field. Of course the question is still whether it is worth it at all. Until microkernels show some sort of qualitative superiority, there's just no real incentive.

Re:user space drivers (1)

Unknown Lamer (78415) | about a year and a half ago | (#42166241)

The worst part is that, until the mid 90s, there were architectures that made things convenient for garbage collection, heavy multithreading, type checking, etc. And then the C machine took over and ... oops, now we need to speed up all of those things, but are stuck with architectures that make it difficult!

Re:user space drivers (1)

Giant Electronic Bra (1229876) | about a year and a half ago | (#42167977)

Well, I gotta say, there is less diversity out there. OTOH you really had to be doing some niche stuff even in the old days to be writing code for Novix chips, transputers, Swann systems, and such.

Re:user space drivers (1)

pclminion (145572) | about a year and a half ago | (#42163937)

ALL context switches are expensive. The primary effect of a context switch is that each context has its own memory address layout.

No, that's not correct. Context switches between threads within the same process (or between one kernel thread and another), or context switches due to system calls, do not alter the page tables and do not flush the TLB. The vast majority of context switches are due to system calls, not scheduling. In a system call, the overhead is primarily due to switching in and out of supervisor mode.

This creates a LOT of extra work for the CPU as it cannot rely on cached data (addresses are different, the same data may not be in the same location in a new context) and consequent cache invalidation, etc.

Again, incorrect. CPU cache (not TLB) is tagged by physical, not virtual, address. Changes to the page tables are irrelevant to the cache.

Re:user space drivers (1)

Giant Electronic Bra (1229876) | about a year and a half ago | (#42165557)

Really depends on the CPU architecture. You can't generalize a lot about that kind of thing. The TLB is invalidated on x86. I'm a little sketchy on the ARM situation, but the 68k and PPC architectures have a rather different setup than x86.

Context switches between threads generally aren't as expensive, yes, because the whole point with threads is a shared address space, largely for this very reason. However, there are still issues with locality, instruction scheduling, etc. There ARE also often changes in addressing, if for no other reason than that the supervisor needs different tables to specify its different permissions. Again, there are many different impacts on performance. It is really not material whether this is because of a change in mode or in addressing; these are exactly the switches that microkernels amplify.

Re:user space drivers (1)

ultranova (717540) | about a year and a half ago | (#42163401)

Does a microkernel architecture necessarily require context switches? Write the userspace components in Java or another managed language and run them in kernel threads at Ring 0. You might get a small penalty in code execution time, but you get rid of the context switches while still keeping the processes separate.

Re:user space drivers (1)

simcop2387 (703011) | about a year and a half ago | (#42162337)

It's usually because talking to the hardware requires a context change from userspace to kernel space on x86-based systems (I suspect the other major archs have similar issues, but don't know for certain). This is because userspace is normally protected from touching hardware, so that it can't cause side effects to other processes without the kernel knowing about it. A good microkernel should be able to give that access directly to userspace, but I don't believe most CPUs play nicely with that idea currently; if they did, it could greatly reduce any performance hits to where they're negligible for most loads.

Re:user space drivers (0)

Anonymous Coward | about a year and a half ago | (#42162245)

Win7 lets you write user-mode graphics drivers without the performance hit of user-mode. A well defined interface with shared memory can make things quite fast, and when your driver crashes, it doesn't take down the system.

On a guess? DirectX "to the rescue"... apk (0)

Anonymous Coward | about a year and a half ago | (#42162545)

Per my subject-line above: You noted Win7 - which doubtless means it holds true for VISTA, Server 2008 + 2008R2, & Server 2012 also.

* I.E.-> The DDK (device driver kit) gives you the stable display driver template, & "off you go" writing a 'PnP' (Plug-n-Play) usermode video display driver... one that's STABLE & yet fast, in usermode operations in AEROGLASS display!

(Not sure otherwise such as in "Classic" desktop which afaik, reverts to GDI + User32 subsystem usage for display)!

A "return to yesteryear" in a way - since Windows NT 3.1-3.5-3.51 had video display (albeit, via GDI & User32 subsystems) in 'usermode'/rpl 3/ring 3, vs. 'kernelmode'/rpl 0/ring 0.

Can't "crash" the entire OS this way, much less the kernel, operating in usermode, but w/out DirectX "bypassing/lightening up" the context switches from usermode to kernelmode (as it was in older NT's noted above), you had that "slowup"... not with DirectX!

(Correct me when/if/where I am incorrect - I am NOT an "expert" here & only possess a cursory understanding (haven't written any device drivers for display's why)).

APK

P.S.=> Good points from you on this note... & again, I can stand correction or more information here possibly: So, if you have it? "Let it rip" - I am here to gain & learn!

... apk

nerd parlor game proposal (0)

Trepidity (597) | about a year and a half ago | (#42161795)

Every time the word "Genode" appears in their documentation, misread it as "Genocide".

Re:nerd parlor game proposal (0)

Anonymous Coward | about a year and a half ago | (#42162533)

Every time the word "Genode" appears in their documentation, misread it as "Genocide".

I read it as genocide even in the /. headline, so my suggestion would be to change the name. They need a better brand.
How about Holost? Seems to be available.

Re:nerd parlor game proposal (2)

osu-neko (2604) | about a year and a half ago | (#42162573)

Interesting. I misread "Geode", which is only one character difference. "Genocide" seems like quite a stretch, both more characters difference and requiring you to actually insert stuff that's not there rather than simply miss something. In other words, you have to overlook something to read it as "Geode" (as I did), but have to hallucinate to read it as "Genocide"...

Re:nerd parlor game proposal (1)

Ian Alexander (997430) | about a year and a half ago | (#42165229)

Research has shown that people tend to just look at the beginning and end of a word and its approximate length to guess what the whole word actually is. In which case both Geode and Genocide are plausible misreads.

Genocide OS (-1)

Anonymous Coward | about a year and a half ago | (#42161817)

For all you Obamas out there... great name!

Hurd device drivers aren't in user space? (1)

ndogg (158021) | about a year and a half ago | (#42161833)

I thought I read somewhere (and part of why I remember) that Hurd device drivers are also in user space.

Is that wrong?

Re:Hurd device drivers aren't in user space? (1)

loufoque (1400831) | about a year and a half ago | (#42161879)

Where did you see that it was not true?

Re:Hurd device drivers aren't in user space? (1)

ndogg (158021) | about a year and a half ago | (#42165179)

It was implied in the summary.

think the Hurd, but with even device drivers as userspace tasks

Re:Hurd device drivers aren't in user space? (0)

Anonymous Coward | about a year and a half ago | (#42161957)

Yes, classic Hurd is based on Mach, which has all device drivers in kernel space. Classic Hurd is actually a really terrible piece of software, full of bad design decisions.

Re:Hurd device drivers aren't in user space? (1)

unixisc (2429386) | about a year and a half ago | (#42167563)

Can HURD be rewritten such that it uses the Minix3 microkernel instead of Mach3, and then puts the drivers in userspace? (Don't cite licensing issues - assume for this exercise that Minix3 is forked, the system is added to it, and the result is put under GPL3.)

Re:Hurd device drivers aren't in user space? (0)

Anonymous Coward | about a year and a half ago | (#42162039)

I thought I read somewhere (and part of why I remember) that Hurd device drivers are also in user space.

Is that wrong?

No, that is correct. That is part of the whole being-a-microkernel thing.
Microkernels can be pretty nice to work with if designed correctly. There is a reason why the old Amiga dudes are so fanatic about their OS.

Re:Hurd device drivers aren't in user space? (2)

Giant Electronic Bra (1229876) | about a year and a half ago | (#42162889)

Uhhhhhhh, wait a minute. I was an avid Amiga programmer back in the day. AmigaOS wasn't in any particular sense a microkernel. Such distinctions in fact would be largely meaningless, because AmigaOS was written to run on the MC68k processor, a chip which had no MMU nor any facilities for address translation at all (and though in theory you could implement storage-backed virtual memory, it wasn't terribly practical). Every Amiga program was address independent; it could load and run at any address, and all software on the machine, applications and OS, necessarily shared a single address space.

There was a considerable amount of message passing between AmigaOS components, but no more than you would expect to see in a modern display manager (i.e. like X distributes UI events as messages via the X protocol). At a low level you simply made calls into the Amiga system ROM, where pretty much all the kernel functionality was located, and/or tweaked around with hardware directly, depending on how friendly your program cared to be WRT letting other stuff run at the same time.

Most games for instance simply took over the machine and tossed the whole intuition layer out, called ROM directly, and often seized direct control of things like the Copper (advanced DMA controller basically, this drove most of the cool stuff Amiga could do). Such a program would not work at all with other software and you would only run it by itself from its own boot disk usually.

A lot of software would 'play nice' with the rest of the system and only merge its stuff into the copper list via the APIs and keep to whatever memory it was allocated. In that case you had a standard desktop type program, which could either manage an entire screen or live inside a desktop window. A lot of games ran like this, grabbing their own screen but allowing you to run other applications at the same time, return to the OS, etc. Generally productivity apps or utilities might run in actual desktop windows, in which case you would use the intuition GUI toolkit libraries.

Re:Hurd device drivers aren't in user space? (0)

Anonymous Coward | about a year and a half ago | (#42163613)

Wow, so many words, and yet you completely failed to understand the microkernel aspects of AmigaOS. It is absolutely not about memory protection. And yes, it is about the extensive internal message passing, which is not, as you seem to insinuate, only used for the GUI, but was a fundamental aspect of AmigaDOS. All IO operations were handled via message queues. So indeed, the user applications called library functions like Open() of dos.library, but internally this would then result in extremely efficient multithreaded message-passing to drivers.

So if you had a new kind of hard disk, just put the corresponding .device file in the DEVS: directory. New file system? Same. Want a union-fs? No problem - write a driver and put it in DEVS:. In later versions of the OS the concept was extended to data types. New image format? No problem - write a .datatype file and every image handling application will be able to read and write this kind of image! *That* was one of the visionary and genius parts of AmigaOS.

Certainly, these concepts are found in modern Linux distributions. On the one hand you have userland solutions like KIO slaves, and you have the kernel-side solution FUSE (finally, 20 years after AmigaOS!). And certainly AmigaOS had its nasty parts, like DOS being originally written in BCPL and therefore having all pointers divided by 4. But compared to the complete mess of layers upon layers beside other incompatible layers that is Linux, AmigaOS was incredibly consistent. I pity the people who think that Linux (or any other Unixoid) is a pretty OS (not talking about Windows, which is even more terrible).

Thinking about this: I so much wish that there was an effort to write a new sane and consistent OS based on modern C++ (seeing the error handling code in Linux makes me cry). But I know that in my lifetime we will not see such a thing going mainstream. :(

PS: All your talk about how software could access the bare metal without AmigaOS is a red herring to impress the mods. This is completely unrelated to the OS, since it is - by definition - circumventing the OS.

Re:Hurd device drivers aren't in user space? (1)

mrvan (973822) | about a year and a half ago | (#42163939)

Thinking about this: I so much wish that there was an effort to write a new sane and consistent OS based on modern C++ (seeing the error handling code in Linux makes me cry). But I know that in my lifetime we will not see such a thing going mainstream. :(

It seems that Linus has said:

the whole C++ exception handling thing is fundamentally broken. It's _especially_ broken for kernels.

Care to elaborate on how you think C++ error handling would be superior for a modern kernel?

Re:Hurd device drivers aren't in user space? (0)

Anonymous Coward | about a year and a half ago | (#42164001)

One word (well, acronym): RAII. Indeed, this makes exception handling safe, but it fundamentally has _nothing_ to do with exceptions and everything to do with consistent lifetime management.

Example:

void f()
{
      Lock l(x);
      do_something;
      if(error) return;
      File f2(y); // renamed from "f" so it doesn't shadow the enclosing function
      do_something;
      {
            Lock l2(z);
            do_something;
            if(error) return;
            do_something_else;
      } // Lock l2 is released here, by its destructor
      do_something_completely_different; // remaining cleanup (f2, then l) is done automatically by destructors, in reverse order, on return
}

Now do the same in kernel C-style with gotos. And then add a few other resources. Yuck.
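
Since the parent invites exactly that comparison, here is a sketch of the goto-unwind version; lock_acquire, file_open, and friends are hypothetical stand-ins, stubbed out so the fragment compiles:

    // Hypothetical stand-ins so the sketch compiles; a real kernel provides these.
    struct Mutex {}; struct FileHandle {};
    static Mutex x, z; static FileHandle y;
    static bool error = false;
    static void lock_acquire(Mutex*) {}  static void lock_release(Mutex*) {}
    static int  file_open(FileHandle*) { return 0; }
    static void file_close(FileHandle*) {}
    static void do_something() {} static void do_something_else() {}
    static void do_something_completely_different() {}

    int f(void)
    {
        int err = 0;
        lock_acquire(&x);
        do_something();
        if (error) { err = -1; goto out_x; }           // only x held so far
        if (file_open(&y) != 0) { err = -1; goto out_x; }
        do_something();
        lock_acquire(&z);
        do_something();
        if (error) { err = -1; goto out_z; }           // must remember: z, y, x held
        do_something_else();
        lock_release(&z);
        do_something_completely_different();
        goto out_y;                                    // even success must unwind
    out_z:
        lock_release(&z);
    out_y:
        file_close(&y);
    out_x:
        lock_release(&x);
        return err;
    }

Every early exit has to know exactly which resources it holds at that point, which is what the RAII version gets for free.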

In some cases Linus has simply no idea what he is talking about. It's a classic and sad case of "you can't teach an old dog new tricks". And I'm sure he doesn't want us to take everything he says as gospel.

Re:Hurd device drivers aren't in user space? (3, Interesting)

Bomazi (1875554) | about a year and a half ago | (#42162093)

It depends. Hurd itself is an implementation of the Unix API as servers running on top of a microkernel. Drivers are not its concern.

The way drivers are handled on a Hurd system depends on the choice of microkernel. Mach includes drivers, so they run in kernel space. L4 doesn't have drivers, so they will have to be written separately and run in user space.

Re:Hurd device drivers aren't in user space? (0)

Anonymous Coward | about a year and a half ago | (#42164091)

Well, since Hurd on L4 is dead, Hurd runs on Mach and therefore the device drivers are indeed running in kernel space.

Hurd, in one line (try this at home kids!) (1)

93 Escort Wagon (326346) | about a year and a half ago | (#42161995)

20+ years in development, still no sound support.

Of course (0)

Anonymous Coward | about a year and a half ago | (#42166977)

..that is the most important issue. "How can I safely hear the youtube kittens purr".

Re:Hurd, in one line (try this at home kids!) (1)

unixisc (2429386) | about a year and a half ago | (#42167573)

Given that Linux sound support is pretty painful - be it ALSA or PulseAudio - why be surprised?

Re:Hurd, in one line (try this at home kids!) (0)

Anonymous Coward | about a year and a half ago | (#42167733)

That is a poor excuse, and merely changes the question to "Why are both Linux and Hurd crap when it comes to sound support?"

I still haven't found an answer beyond "Free software developers are more interested in rewriting things from scratch than in getting them working". It's depressing.

Re:Hurd, in one line (try this at home kids!) (1)

unixisc (2429386) | about a year and a half ago | (#42167853)

That doesn't explain why the BSD guys get it right. They have few drivers, but once a driver works on one version of an OS, it doesn't have to be re-written for the next.

So it follows HURD coding style? (0)

Anonymous Coward | about a year and a half ago | (#42162831)

Excessively self-referential, impossible to linearize *anything*, and optimizable only in the sense of "I can write a function that takes 12 minutes to shave two instructions off of something no one sane would ever use"?

I've worked with HURD. It wasn't pretty, and there are compelling reasons why the very rigid layers of abstraction in HURD and Genode have never been able to provide anything resembling a stable, running kernel.

Re:So it follows HURD coding style? (0)

Anonymous Coward | about a year and a half ago | (#42169571)

You obviously do not know much about Genode.

The "Genode" OS from Dresden, Germany? (0)

Anonymous Coward | about a year and a half ago | (#42162857)

Is it good at processing large numbers of records of minorities, as they are exterminated?

MMU-based Security Is Wrong Approach (0)

Anonymous Coward | about a year and a half ago | (#42163025)

We should "think out of the box" and look for other approaches to software security -- more specifically, memory-safe languages. They don't need an MMU to protect each driver's address space, because you can only write to a certain memory address if you have a proper pointer/reference (whatever you like to call it) to that address. You cannot write outside the bounds of a buffer and you cannot perform insecure casts. Stack and buffer overruns and null-pointer dereferencing will result in an exception only the "core" kernel can catch.
If the device driver takes too long to complete, an interrupt will generate a timeout exception, which can also be caught by the core kernel.

That implies that you do not need an MMU context switch to call into a device driver. Everybody can use the same address space; no virtual memory required for security. Saves transistors, energy, and time.

Using Google NaCl-style verification technologies you could even completely rid yourself of the MMU. "Applications" would be treated like "device drivers". Kernel calls are as fast as procedure calls!

Regarding the "overhead" of memory-safe languages, I do think it can be minimized to the point of being almost nonexistent. For example, you need to perform bounds-checking for simple (nested) for loops only once. It is not rocket science to create compilers which can do that. Fortran compiler developers mastered much, much more difficult problems (e.g. for-loop reordering) 30 years ago.
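
A hand-written illustration of that hoisting, in C++ for concreteness: one range check before the loop makes every access in the body provably in-bounds, so per-iteration checks can be dropped (a compiler for a memory-safe language would do this transformation itself):

    #include <cstddef>
    #include <stdexcept>
    #include <vector>

    // The "checked" version pays a bounds check on every access; the
    // "hoisted" version proves the whole index range once up front.
    long sum_checked(const std::vector<int>& v, std::size_t n) {
        long s = 0;
        for (std::size_t i = 0; i < n; ++i)
            s += v.at(i);                 // one bounds check every iteration
        return s;
    }

    long sum_hoisted(const std::vector<int>& v, std::size_t n) {
        if (n > v.size())                 // single check covers i = 0 .. n-1,
            throw std::out_of_range("n"); // since i only ever increases
        long s = 0;
        for (std::size_t i = 0; i < n; ++i)
            s += v[i];                    // unchecked access, provably in bounds
        return s;
    }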

Here is my attempt to create a memory-safe language:

http://sourceforge.net/projects/sappeurcompiler/

It proves you don't need the Java/C# overhead to get memory safety. But yeah, my language/compiler is still quite rough around the edges.

The Skynet Funding Bill is passed. (0)

Anonymous Coward | about a year and a half ago | (#42163585)

The system goes on-line August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug.

Re:The Skynet Funding Bill is passed. (2)

lennier (44736) | about a year and a half ago | (#42164967)

The system goes on-line August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug.

Skynet responds by posting millions of cat pictures to Facebook. Six billion Internet users collectively go "awww!" and hit Share. First Facebook, then Twitter, then the entire wireless broadband infrastructure collapses under the strain. Without access to GPS, dazed urbanites are unable to find their way to espresso sources and enter simultaneous caffeine and microblogging withdrawal. Riots begin in urban metropolitan areas within the hour. Thirty-six hours later, all major metropolitan areas are a smoking ruin.

We thought it was over. Then from the ashes rose the Hello Kitties.

Meanwhile, in other news (0)

Anonymous Coward | about a year and a half ago | (#42163901)

The Iranian government announced a new type of manufacturing which they call the "Nuclean Jiad" process.

"Our new factories are so much more efficient," a government spokesman explained. "Most modern factories work on a product which slowly moves down a long assembly line, requiring lots of factory space. Our new, quicker process simply develops a product by spinning in place."

Oh Yeah, Mr Zionist (0)

Anonymous Coward | about a year and a half ago | (#42166965)

Meanwhile Israel steals land at gunpoint from Arabs and sits on 200 nukes plus the delivery systems (modern submarines and long-range cruise missiles).

But thanks for spreading Jewish Lies here. We missed your drivel.

multi-server? (1)

whistl (234824) | about a year and a half ago | (#42164241)

Why does this article use the term "multi-server microkernel OS"? I don't see anything in the article or anything else about Genode referring to multiple servers. Sounds like they're just trying to redefine the term "microkernel".

Re:multi-server? (0)

Anonymous Coward | about a year and a half ago | (#42165819)

I assume it is similar to the Hurd in that there is a microkernel and a bunch of server processes handling the important things, like drivers, etc.

osFree (1)

unixisc (2429386) | about a year and a half ago | (#42167627)

Can Genode be the basis of osFree - an L4-based microkernel OS that supports 'personalities' like Presentation Manager (of OS/2) and Windows? There is even a Linux personality there, but honestly, anyone who needs a microkernel OS can use Minix3.

TU Dresden and their Informatics faculty (1)

epSos-de (2741969) | about a year and a half ago | (#42164555)

I have spent numerous hours at the Informatics faculty in Dresden. They are a true nerd institution. The blob statues are green, and the PC labs have direct terminal access to the supercomputer. The supercomputer is hard to crash. I sent it broken code, loops, and eternal wastes of cycles, but it still runs with 95% unused capacity.