
Linux Gains Two New Virtualization Solutions

CowboyNeal posted about 7 years ago | from the almost-as-good-as-the-real-thing dept.

Operating Systems

An anonymous reader writes "The upcoming 2.6.23 kernel has gained two new virtualization solutions. According to KernelTrap, both Xen and lguest have been merged into the mainline kernel. These two virtualization solutions join the already-merged KVM, offering Linux multiple ways to run multiple virtual machines, each running its own OS."


170 comments

So, will it run Windows? (4, Interesting)

The_Fire_Horse (552422) | about 7 years ago | (#19937645)

just asking...

Re:So, will it run Windows? (2, Insightful)

realdodgeman (1113225) | about 7 years ago | (#19937665)

KVM (which has been in the kernel since 2.6.20) already runs Windows.

Re:So, will it run Windows? (3, Interesting)

zlatko (222385) | about 7 years ago | (#19939633)

Absolutely, running Windows XP on Linux [linuxinsight.com] is easy to set up and performs quite well. I'm quite amazed by kvm for both reasons. This is not to say that Xen is bad, but it seems so much harder to set up that I haven't even tried. kvm is dead simple.
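
To give an idea of how simple: this is roughly all it takes to create a disk image and boot the XP installer under kvm (image name, size and the exact binary name are just examples -- check your distro's docs):

    qemu-img create -f qcow2 winxp.img 10G
    kvm -hda winxp.img -cdrom winxp-install.iso -boot d -m 512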

Re:So, will it run Windows? (4, Informative)

init100 (915886) | about 7 years ago | (#19939501)

You mean Lguest? FTA:

Lguest doesn't do full virtualization: it only runs a Linux kernel with lguest support.

So the answer is no, Lguest does not run Windows. Xen runs Windows, but only if you have a VT-capable processor. Like Lguest, Xen can run Linux without a VT-capable processor.
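
If you're not sure whether your processor is VT-capable, just look for the vmx (Intel VT) or svm (AMD-V) flag:

    grep -E 'vmx|svm' /proc/cpuinfo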

Multiple ways to run Multiple OSs (0)

Anonymous Coward | about 7 years ago | (#19937653)

erm.......why?

Re:Multiple ways to run Multiple OSs (1)

tgatliff (311583) | about 7 years ago | (#19937717)

Competition is a wonderful thing!! I suspect having three solutions will quickly end the vmware / Xen disagreements that went on for so long... :-)

Re:Multiple ways to run Multiple OSs (3, Informative)

Iphtashu Fitz (263795) | about 7 years ago | (#19937825)

A number of reasons. One is to be able to run different linux distros on the same machine for testing purposes. Another is to set up two completely different environments that run tasks at different times.

I used to work for a search engine company (not Google) that had thousands of Linux servers. After doing a bit of research they discovered that the vast majority of these machines were idle for a good amount of time. Rather than buy new servers they simply installed Xen and intelligently divided up the physical hardware to perform their different tasks. Now instead of separate physical servers to do web spidering, data analysis, log processing, etc. they've combined these tasks onto the same physical hardware but kept them as individual virtual servers.

Re:Multiple ways to run Multiple OSs (0)

Anonymous Coward | about 7 years ago | (#19937879)

I don't think the parent poster was asking why somebody would use virtualization, but rather asking why there are so many different programs you can use for virtualization.

Solved it three different ways! (1)

symbolset (646467) | about 7 years ago | (#19938025)

Please review Robert Frost: "The Road Not Taken [amandashome.com] ".

Re:Multiple ways to run Multiple OSs (1)

init100 (915886) | about 7 years ago | (#19939519)

Why what? Why multiple virtualization solutions? Because each solution has its own advantages and disadvantages. Use the solution that fits your needs best.

Why? (4, Interesting)

realdodgeman (1113225) | about 7 years ago | (#19937657)

Wouldn't one be enough? Or maybe they could have merged all the features into one VM.

I think this will confuse users. Choice is good, yes, but 3 VMs in the kernel? Sounds like overkill.

Re:Why? (5, Insightful)

QuantumG (50515) | about 7 years ago | (#19937761)

Yeah, like all those file systems the kernel supports. What's with that? You only need one. Man. Choice is good and all, but it sounds like overkill.

Don't get me started on buses.. PCI, USB, SCSI, IDE, how many do you need?!

Re:Why? (0, Redundant)

realdodgeman (1113225) | about 7 years ago | (#19937855)

In what way are hardware drivers similar to VM technologies?

More VMs does not necessarily give more OS support. I can understand the need for a VM like lguest, since it does not require a CPU with virtualization technology. But wouldn't it benefit the user more if this were integrated into KVM instead? I don't know how possible this is, but it sure would make my choice a lot easier.

Re:Why? (3, Informative)

QuantumG (50515) | about 7 years ago | (#19937917)

Which is why I mentioned file systems...

That said, you mentioned KVM.. KVM (for Kernel-based Virtual Machine) is a full virtualization solution for Linux on x86 hardware containing virtualization extensions (Intel VT or AMD-V). (from here [qumranet.com] ). It *is* a hardware driver.
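
And like any other hardware driver, using it starts with loading the module that matches your CPU, after which a /dev/kvm device node shows up for the userspace side (a modified qemu) to talk to:

    modprobe kvm-intel    # or kvm-amd on AMD-V hardware
    ls -l /dev/kvm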

Re:Why? (1)

Courageous (228506) | about 7 years ago | (#19939149)

KVM is doing paravirt also, FYI

Re:Why? (3, Interesting)

drinkypoo (153816) | about 7 years ago | (#19938269)

In what way are hardware drivers similar to VM technologies?

In this situation the analogy is clear. As time went on, people discovered new designs for virtualization and decided to implement them. Each design has strengths and weaknesses that make it appropriate for different situations. The same is true of hardware buses; older buses tend to be cheaper to implement. There are exceptions: it's probably cheaper (or will soon be cheaper due to economies of scale) to implement PCI-Express at PCI bandwidth than it is to implement PCI itself. It's certainly cheaper to implement firewire than SCSI (in spite of this, there are practically no native firewire storage devices. But anyway.) (And firewire, which goes up to 800Mbps and peaks around 100MB/sec, is superior in most ways to anything up to and including LVD SCSI, including speed, simplicity of cabling, etc etc) Can you tell I have an ax to grind?

But anyway, the point is that we have UML, which runs linux as a process; we have this new lguest, which runs linux as a module; we have xen which is full virtualization without a need for VT, we have kvm which is like xen but does need VT, we have vmware which is also pretty much like xen (and doesn't need VT, although I was under the impression newer versions of vmware would take advantage of it if present, for a speed boost.)

There's some other examples too, but these are enough to talk about right now. Suffice to say that each approach has advantages and disadvantages. But they're useful for different things!

For maximum separation, for example, you could have a Linux that ran servers inside of different UML processes. While exploits in UML would still be possible, this would stop a privilege escalation bug in one server from affecting another. I envision a tool that tracks dependencies and generates the UML filesystem images automatically. Syslogging is done through the virtual network, to the syslog on the core system. Want to test a package? A command to run it in a UML might be as simple as running fakeroot. (fakelinux?) You could do all of this with this new lguest system, instead of UML.

Meanwhile, you're still going to need a full virtualization solution to run non-linux operating systems under Linux (at least until a cobsd (see "colinux") comes out - I forgot about that one for a moment) so there's still a purpose for that.

Re:Why? (1)

init100 (915886) | about 7 years ago | (#19939591)

we have xen which is full virtualization without a need for VT

Actually, Xen uses paravirtualization if VT is not available, and can only run operating systems with Xen guest support in those cases.

we have vmware which is also pretty much like xen (and doesn't need VT, although I was under the impression newer versions of vmware would take advantage of it if present, for a speed boost.)

VMware isn't like Xen, in that it can run unmodified guest operating systems without VT. You are correct in that VMware takes advantage of VT if available.

Re:Why? (1, Funny)

Anonymous Coward | about 7 years ago | (#19938069)

You forgot the school bus

Re:Why? (5, Funny)

QuantumG (50515) | about 7 years ago | (#19938113)

bus error: driver not found.

Re:Why? (2, Informative)

evilbessie (873633) | about 7 years ago | (#19938073)

IDE is not a bus, don't confuse this with ATA (more recently SATA and PATA). IDE == Integrated Drive Electronics.

Re:Why? (1, Informative)

Anonymous Coward | about 7 years ago | (#19938909)

ATA is just a new name for IDE. PATA is a backronym used to distinguish "old" ATA from Serial ATA. As I'm at it, ATAPI stands for "ATA Packet Interface" and is a sub-set of SCSI over ATA.

Re:Why? (1)

init100 (915886) | about 7 years ago | (#19939615)

The electrical interface of IDE is certainly a bus, since it connects more than one device to each channel. On the other hand, SATA is not a bus, it is a point-to-point link, which connects exactly one device to each channel.

Re:Why? (1)

larry bagina (561269) | about 7 years ago | (#19938149)

Yeah, like all those file systems the kernel supports.

It doesn't support ZFS.

Re:Why? (1)

master0ne (655374) | about 7 years ago | (#19938713)

they're only "in" the kernel if you compile them in on your next kernel compile, or if your distro compiles them in at install.... if they weren't "in" the kernel, then linux would have no way of understanding those filesystems, or methods of virtualization... as far as end user confusion, if you're technical enough to NEED them, you'll understand what they do and how they work, if not, then having them "in" the kernel (or even actually compiled into that specific kernel) won't hurt anything, as the user will probably never use them. I agree too many choices only create confusion and a segmented market, but in the case of file systems and virtualization, the choices don't "compete"... there may be some overlap, but they each solve different problems and needs of different people....

Re:Why? (1, Interesting)

Anonymous Coward | about 7 years ago | (#19937859)

I'm wondering what's NOT going to be put in the kernel eventually. I mean what's next, MPlayer? At what point do we say enough is enough?

Re:Why? (0)

Anonymous Coward | about 7 years ago | (#19937897)

I'm wondering what's NOT going to be put in the kernel eventually. I mean what's next, MPlayer? At what point do we say enough is enough?

If you want a minimalistic kernel with only the bare minimum, the Windows NT microkernel is available for you.

Re:Why? (1)

MrNaz (730548) | about 7 years ago | (#19938137)

The other issue I have is with the inclusion of lguest. It is a highly immature piece of code that is not really usable in anything resembling a production environment.

Why is the Linux kernel being bloated with things that are clearly not going to be used by anyone other than tinkerers and hobbyists? It just gives weight to the Microsoft claims that Linux is for hobbyists. It's one thing for hobby tools to be bundled with distributions like Gentoo, but for there to be code like this directly in the kernel, well, it makes it hard to argue that Linux is a serious kernel for serious applications.

I think that your flippant comment may perhaps be intended to highlight the fact that Linux is not intended to be a microkernel, but nonetheless that does not mean that it should be bloated with everything under the sun. I think that the bloated mess that Firefox has become highlights the fact that just because a program is open source and starts good, does not mean that it can't become a bloated sack of fertilizer through poor technical decision making.

I'm just glad that there are other open source operating systems that have remained purist to their initial goals. While this leads to slower development, it also ensures that they won't one day turn around and realize they've traveled a decade in the wrong direction.

Re:Why? (1)

fritsd (924429) | about 7 years ago | (#19938307)

I didn't realize that lguest is going to be turned on by default. Oh wait.. it probably isn't. If you ever did a

make xconfig

you can see that the very first option is "code maturity level options", and that there are hundreds of features which are by default NOT TURNED ON and therefore do not show up in "anything resembling a production environment". And I'm not talking about kernel modules here, but things like CONFIG_MATH_EMULATION (under "processor type and features" near the bottom of the page) which hasn't been necessary since Intel brought out the i486DX processor in 1989 or so.
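
The new virtualization options are the same deal: just more config symbols that default to off. From memory of the 2.6.23 tree (double-check in your own menuconfig), the relevant ones look something like:

    CONFIG_VIRTUALIZATION=y
    CONFIG_KVM=m
    CONFIG_KVM_INTEL=m
    CONFIG_KVM_AMD=m
    CONFIG_LGUEST=m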

Re:Why? (1)

Stephen Williams (23750) | about 7 years ago | (#19937867)

There are already 47,000 or so filesystems in the kernel. Linux has always been about choices.

Just as with filesystems, what will probably happen is that distributions supporting virtualization will pick one. Unless the user selects "super-duper expert installation mode" or whatever, he/she will get the distro's default.

-Stephen

Re:Why? (1, Informative)

Anonymous Coward | about 7 years ago | (#19937989)

For one, they all fill different needs.

KVM allows you to virtualize any PC OS, as long as you have a VT CPU. lguest allows you to run another copy of Linux. Xen sits somewhere in the middle - you can run any Xen-compatible OS, not just Linux, but you can also run normal OSes if you have a VT CPU.

Xen is hardly lightweight. It's really suitable for servers, but it's too intrusive for general use. KVM and lguest, on the other hand, are pretty unintrusive, don't radically change the system, and can simply be used by regular applications. And their functionality doesn't really overlap.

Users will never see them anyway. Now that they're part of the kernel, users will just see a program that makes use of them.

Re:Why? (0)

Anonymous Coward | about 7 years ago | (#19938235)

users may never see them, but it increases the kernel footprint and is a potentially exploitable virus/trojan/rootkit vector.

Re:Why? (2, Informative)

SirTalon42 (751509) | about 7 years ago | (#19939629)

It'll only increase the kernel footprint IF you compile them into the kernel, and they won't be enabled by default.

Re:Why? (2, Informative)

init100 (915886) | about 7 years ago | (#19939641)

Only if enabled in the distribution. It doesn't harm anyone to have it available in the kernel source tarball. And both KVM and Lguest are implemented as modules, so if you don't load them, they aren't there.
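
Easy to verify for yourself (module names from memory -- lg being the lguest host module):

    lsmod | grep -E 'kvm|lg'    # not loaded, not there
    modprobe kvm-intel          # loaded on demand...
    modprobe -r kvm-intel       # ...and gone again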

Re:Why? (1)

rocket22 (1131179) | about 7 years ago | (#19938141)

I think having different options will make virtualization stronger on the Linux front.

Gaming applications? (0)

Anonymous Coward | about 7 years ago | (#19938203)

My kids want a new computer to play some of their games on. But my wife and I need a new computer as well. However, our computing needs are quite minor. We mainly browse the Web, and send email. To keep our data safe, we only use Linux. But I also do some Mozilla development which often touches a number of source files, leading to fairly hefty spurts of C++ compilation. So I still need a powerful system, but only for several minutes now and then.

Using this virtualization technology, would it be possible to simultaneously run Windows XP and Linux on the same system, and offer maximal performance for each? Namely, would Windows XP still be able to have sufficient access to the actual video hardware for gaming purposes?

Re:Gaming applications? (1)

init100 (915886) | about 7 years ago | (#19939699)

Namely, would Windows XP still be able to have sufficient access to the actual video hardware for gaming purposes?

AFAIK, Direct3D support is highly experimental in VMware, and I haven't heard of it being available in either Xen or KVM (Lguest can only run Linux guests, so Direct3D support is a moot point). So the answer is probably no.

Try running your games under Wine instead. It would probably be a safer bet, but it isn't guaranteed to work, especially not without hitches. I've read it has improved a lot since I tried it 4-5 years ago, but it isn't 100% complete yet.
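
It's cheap to test, at least -- something like this (paths illustrative):

    wine ~/downloads/game-setup.exe           # run the installer
    wine "c:\Program Files\Game\game.exe"     # then the game itself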

Re:Why? (1)

Chris Snook (872473) | about 7 years ago | (#19939011)

Xen has all the features that KVM and lguest have. That's the problem. Xen is extremely complex, and the patches to support it are very invasive. This is why KVM beat it getting merged. LWN infamously predicted Xen could get merged as early as 2.6.10, whereas lguest was only created a few months ago, weighing in at a mere 5000 lines of code.

Xen does some really cool things, but it has a lot of human overhead in terms of management and maintenance that the other two don't have. Now you get to pick the right tool for the job, which is how it should be.

lguest doesn't need VT (0)

physicsnick (1031656) | about 7 years ago | (#19937667)

More importantly, lguest apparently does not require a CPU with virtualization technology. This is exciting news for those of us running on older hardware.

As a cross-platform developer, I'm interested in installing Windows on a virtual machine instead of dual-booting, and the current virtualization technologies don't cut it for me; VMware player is proprietary and doesn't work with my wireless card, QEMU is just too darn slow, and everything else requires a VT CPU. I'm looking forward to trying out lguest.

Re:lguest doesn't need VT (2, Informative)

Anonymous Coward | about 7 years ago | (#19937687)

FYI, Xen hasn't required VT since the beginning either. The only problem was that you needed a specially patched kernel, because Linus didn't like how Xen implemented their hooks into the stock kernels. It looks like that has been resolved however.

Re:lguest doesn't need VT (0)

Anonymous Coward | about 7 years ago | (#19937829)

> QEMU is just too darn slow

What are you talking about? kqemu is most definitely faster and better featured than lguest at this stage.
  1. Install kqemu
  2. modprobe kqemu
  3. qemu --with-kernel-kqemu [...]
  4. ???
  5. Profit?

Wireless card??? WTF? (1)

brunes69 (86786) | about 7 years ago | (#19937883)

I don't have any idea what you mean by "VMWare Player doesn't work with my wireless card". VMWare doesn't know ANYTHING about your underlying networking hardware. All it uses is the IP stack.

Re:Wireless card??? WTF? (2, Informative)

stef0x77 (529972) | about 7 years ago | (#19937961)

VMWare by default bridges your network interface into the VM. Wireless drivers have such poor support for network bridging that this almost never works. It especially doesn't work with WPA or any such.

If you NAT your VM network traffic, then things work (well sorta, with all the nastiness that NAT comes with).
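
Switching an existing guest over to NAT is a one-line change in the guest's .vmx file (key name from memory -- back the file up first); the "bridged" default is what falls over on wireless:

    ethernet0.connectionType = "nat"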

Re:Wireless card??? WTF? (2, Informative)

physicsnick (1031656) | about 7 years ago | (#19937997)

I have an Atheros chipset wireless card which requires binary drivers to work. It does not work with VMware.

This [launchpad.net] is the Ubuntu bug report (note the length and number of duplicates) which actually breaks apt on installation, but it's not Ubuntu specific; you can't configure it manually with this wireless card either. The only solution is to disable networking virtualization, which means I can't even have VMware use my wired connection unless I disable the wireless card entirely or physically remove it from my system.

Was I seriously modded down for that? Mods, what the hell?

Re:Wireless card??? WTF? (1)

brunes69 (86786) | about 7 years ago | (#19938047)

Just use NAT instead of bridging in your vmware config

Re:Wireless card??? WTF? (1)

physicsnick (1031656) | about 7 years ago | (#19938065)

Obviously I tried that, as have the dozens of other people who encountered this problem. It doesn't work either.

Re:lguest doesn't need VT (0)

Anonymous Coward | about 7 years ago | (#19939551)

There are two other options: VirtualBox, which has a GPL'd and a freeware proprietary version (and a very nice GUI), and kqemu, which is the kernel-module qemu accelerator. Both are faster than normal qemu alone. Also look into seamless virtualization where you use rdesktop to run apps... you might need one of those terminal server hacks to remove the license restriction though.

Could somebody clear this up for us? (4, Insightful)

Tribbin (565963) | about 7 years ago | (#19937673)

What are the pros of having two implementations of, seemingly, the same solution?

Re:Could somebody clear this up for us? (1)

Nikron (888774) | about 7 years ago | (#19937683)

One solution will eventually become recognizably superior, and hopefully the other one will get merged out of the kernel.

Re:Could somebody clear this up for us? (1)

Cygfrydd (957180) | about 7 years ago | (#19938913)

Except the Internet clearly demonstrates reverse Darwinism at work: survival of the most idiotic.

Re:Could somebody clear this up for us? (1)

GreyWolf3000 (468618) | about 7 years ago | (#19937893)

The more people who use both solutions, the quicker the kernel team can figure out which one works better, and go with that.

Re:Could somebody clear this up for us? (3, Insightful)

QuantumG (50515) | about 7 years ago | (#19937945)

Actually, it doesn't work like that. What actually happens is that the code which is maintained poorly gets dropped. So if there are dedicated people working on KVM but no-one actually working on lguest, eventually something will change that results in lguest not working anymore. Eventually people will drop the broken code from their tree until someone fixes it. If no-one fixes it, then it'll never be picked up again. There's no "oh, lguest is actually faster than KVM, we should all work on that".. it's individuals making their own decisions on what to work on (be it that they find it interesting, or they find that bit of code more pretty, or they are paid by someone to work on it) and those individuals are responsible for what happens to that code.

As long as N solutions are maintained there will be N solutions in the kernel. A solution won't be dropped because it performs worse.. or any other "technical" reason.

Re:Could somebody clear this up for us? (1)

GreyWolf3000 (468618) | about 7 years ago | (#19938071)

Good point...but I believe that, over time, the one that most users choose will end up being the most actively maintained.

Re:Could somebody clear this up for us? (1)

bfields (66644) | about 7 years ago | (#19938911)

Actually, it doesn't work like that. What actually happens is that the code which is maintained poorly gets dropped.

That's a pretty unfortunate situation if the unmaintained code is still actually used by someone. Even if another alternative has come along with a superset of the features, if it provides different system interfaces -- meaning rewriting scripts or applications, or retraining users -- then the migration can be a pain. And you want people to be able to drop a new kernel into an old working system -- otherwise it's hard for them to get security fixes, for example.

So userspace-visible stuff shouldn't really be going into the kernel unless everybody's pretty confident that it can be maintained indefinitely.

That said, yeah, if someone notices that filesystem FooFS has been completely broken for ages and nobody has even noticed, then that's a pretty good argument for dropping it. But even then it's not just because it's unmaintained, it's because at that point you're pretty sure nobody really gives a crap about it.

Re:Could somebody clear this up for us? (1)

MoxFulder (159829) | about 7 years ago | (#19939575)

Actually, it doesn't work like that. What actually happens is that the code which is maintained poorly gets dropped.


That's a pretty unfortunate situation if the unmaintained code is still actually used by someone.

(...)

That said, yeah, if someone notices that filesystem FooFS has been completely broken for ages and nobody has even noticed, then that's a pretty good argument for dropping it. But even then it's not just because it's unmaintained, it's because at that point you're pretty sure nobody really gives a crap about it.

The Linux kernel *almost never* drops support for any devices/filesystems unless (a) it's INCREDIBLY obsolete and NO ONE is using it, or (b) it's been superseded by something clearly better and there's a straightforward upgrade path.

For example, if you read the kernel changelog summaries on LWN.net, you'll see that support for IBM PC/XT hard disks was only dropped in the last couple years... although they have been obsolete since the late 80s and perhaps literally no one has used them for 5-10 years. And support for the original "ext" filesystem was removed a few months ago, despite the fact that it's been completely superseded by ext2--which was introduced in 1993.

As Greg Kroah-Hartman has pointed out [kroah.com] , the kernel developers are perfectly willing to maintain a driver for which only a single piece of hardware exists in the whole world!

Re:Could somebody clear this up for us? (2, Informative)

sekra (516756) | about 7 years ago | (#19938055)

It's not the same solution, because lguest and KVM have different goals. While KVM uses as much hardware virtualization support as possible to gain full speed, lguest avoids those functions so it can run on more hardware. Xen tries to do everything and is thus a bit more bloated, but also has more functionality. Choice is good, just take the solution which fits your requirements best.

Very fishy and intriguing (1)

jkrise (535370) | about 7 years ago | (#19938089)

"The happy theme of today's kvm is the significant performance improvements, brought to you by a growing team of developers. I've clocked kbuild at within 25% of native. This release also introduces support for 32-bit Windows Vista. "
I can't understand why the Linux kernel development team had 'Windows Vista support' as one of the items on their agenda at all. Virtualisation, as I understand it, is basically an abstraction of the hardware that is performed in software. Should not all operating systems be designed to work with standard instruction sets, interrupts, registers and memory?

Why should it be the job of a particular kernel or its VM component to satisfy specific requirements of a specific version of another kernel (the Vista kernel?). Besides, how exactly did these developers get access to the Vista kernel specs? Should it not be the other way round - i.e. for closed-source Vista to be compatible and optimised for the open-source Linux kernel?

That Linus chose the GPL as a matter of convenience was well known, his antipathy to the FSF is also well chronicled; but this aligning to the interests of specific closed-source kernels from Microsoft is a dangerous new development.

Re:Very fishy and intriguing (2, Informative)

QuantumG (50515) | about 7 years ago | (#19938169)

The people who work on this stuff really wouldn't call themselves kernel developers, but ok, whatever. Associating any of the VM stuff with Linus is even more retarded.. what they do in their own modules is none of his fault or concern. Anyway, some people want to run Vista in a VM on Linux. These VM solutions don't try to virtualize every nook and cranny of the x86 hardware. Vista uses the system level x86 hardware in a slightly different way to XP. As such, it takes some changes to make Vista work.

Should it not be the other way round - i.e. for closed-source Vista to be compatible and optimised for the open-source Linux kernel?
Yeeaaaaaahhhh.. ok. Whatever dude.

Re:Very fishy and intriguing (1)

jkrise (535370) | about 7 years ago | (#19938277)

The people who work on this stuff really wouldn't call themselves kernel developers, but ok, whatever. Associating any of the VM stuff with Linus is even more retarded.. what they do in their own modules is none of his fault or concern.
I find the announcement about these VMs is from Linus himself. Besides, it is Linus who decides which components get into the main kernel tree, so he is answerable for any decisions made.

Anyway, some people want to run Vista in a VM on Linux. These VM solutions don't try to virtualize every nook and cranny of the x86 hardware. Vista uses the system level x86 hardware in a slightly different way to XP. As such, it takes some changes to make Vista work.
If Vista has any idiosyncrasies, it should be the job of the overpaid, bloated development team in Redmond to iron out the kinks and make it standards-compliant. Why should it be a concern of the Linux kernel development team? Besides, how did these developers gain access to the quirky behaviour of Vista?

Re:Very fishy and intriguing (1)

QuantumG (50515) | about 7 years ago | (#19938349)

I find the announcement about these VMs is from Linus himself. Besides, it is Linus who decides which components get into the main kernel tree, so he is answerable for any decisions made.
Linus puts whatever he wants into his tree, yes. His tree is the defacto "main" kernel tree, yes.

If Vista has any idiosyncracies, it should be the job of the overpaid, bloated development team in Redmond to iron out the kinks and make it standards-compliant. Why should it be a concern of the Linux kernel development team? Besides, how did these developers gain access to quirky behaviour of Vista?
What standards are you talking about exactly? The Intel x86 hardware documentation? I can assure you they are writing their code to those "standards" otherwise their code wouldn't work..

If anything the virtualization guys are the ones who are not implementing the "standards".. as not everything that will run on an x86 processor will run the same way under virtualization. That's simply because it's a lot of effort just to get the most common usage of x86 to virtualize.

Re:Very fishy and intriguing (0)

Anonymous Coward | about 7 years ago | (#19938213)

The Vista support is in the user-space portion of KVM.

There's the KVM kernel part, which does only those parts which can not be done in user space. A user space application can not directly use the VM hardware, nor could it do virtualisation without that hardware. That's why VMWare requires a kernel module. Basically, the kernel component provides an abstracted interface to the hardware virtualisation capabilities, and exposes that interface to user space, hopefully in a secure way that prevents user-space programs from breaking the host OS. It provides little more than pure CPU virtualization.

The KVM user-space part is basically a modified version of Qemu. Its responsibility is to emulate all the rest of the hardware, from interrupt controllers, up through the motherboard, busses, peripherals, and I/O devices like video, sound, networking, keyboards, mice, access to the host's USB devices, disk drives, and all the other stuff. It doesn't do CPU emulation, because it uses the kernel component to virtualize the host CPU. Even the BIOS is contained here.

Vista wasn't working before because the user-space component didn't implement everything it needed to run properly. At the very least, there were some issues with ACPI, and probably a few other bits. This is now fixed, and Vista now works correctly.
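
So when Vista "now works", that's a fix on the userspace side: the command line is the same as ever (binary name varies by distro -- qemu-kvm or just kvm -- and the image names here are illustrative), just with a more complete emulated platform behind it:

    qemu-kvm -hda vista.img -cdrom vista.iso -m 1024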

Re:Could somebody clear this up for us? (5, Informative)

Chris Snook (872473) | about 7 years ago | (#19938927)

These aren't even close to the same solution. KVM provides hardware-assisted virtualization, with Linux as the hypervisor. Lguest provides linux-in-linux paravirtualization (no hardware support), and is extremely lightweight (5000 lines of code, total), but lacks many advanced features. Xen provides both paravirtualization and full virtualization, runs under a custom hypervisor intended to run multiple different OSes (Linux, Solaris, Windows, etc.) simultaneously, and has a plethora of sophisticated features, such as live migration (and all the maintenance headache of the correspondingly huge codebase).

They each fill very different niches, so there are very good reasons for having all 3 in the kernel.
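
For flavor, this is the kind of thing Xen's management stack gives you that the lightweight options don't (config path and names illustrative):

    xm create /etc/xen/webserver.cfg        # boot a guest from its config file
    xm list                                 # show running domains
    xm migrate --live webserver otherhost   # live-migrate it to another box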

legality (1)

efceeveea (1128063) | about 7 years ago | (#19937721)

Isn't it illegal to run Windows with this? Googled it and Microsoft seems to think so.. MelNews [slashdot.org]

Re:legality (0)

JamesRose (1062530) | about 7 years ago | (#19937931)

You do realise Microsoft claims patent infringement about Linux all the time, and as such, even running Linux without any virtualisation software would be illegal in Microsoft's eyes.

Re:legality (0)

Anonymous Coward | about 7 years ago | (#19938177)

You do realise Microsoft claims patent infringement about Linux all the time, and as such, even running Linux without any virtualisation software would be illegal in Microsoft's eyes.

You do realize slashdotties claim all sorts of stupidity all the time, and as such, it's a crime the way some gullible chumps think it means something.

Re:legality (1)

throup (325558) | about 7 years ago | (#19939067)

Despite being modded down to -1, I think this needs treating as a legitimate question:

Isn't it illegal to run Windows with this? Googled it and Microsoft seems to think so.. MelNews
Illegal? That depends on your definition of legal... different nations have different laws.
Breach of software license? Possibly... if I recall correctly, the EULA for Vista forbids running in a virtualised environment. I believe it is perfectly legitimate to run XP this way as long as the license key has been purchased legally and is not currently in use in another installation (obviously with the exception of multi-user licenses). For other versions of Windows, it depends on the EULA but I think Vista is the only one to forbid it.

I RTFA twice and thought to myself... (0, Flamebait)

PrimeWaveZ (513534) | about 7 years ago | (#19937723)

Wow, there are now three VM solutions built right into the kernel? What are they going to do next? Merge emacs?

Re:I RTFA twice and thought to myself... (1)

chabotc (22496) | about 7 years ago | (#19937843)

it might be worth remembering that the _kernel_ part of these VM solutions has been merged into the kernel, and not the userland tools (they are separate packages). A VM needs certain kernel hooks for the hardware virtualization, hence the need for kernel 'driver(s)', and the VM scheduling happens there too.

So the comparison with emacs is very inaccurate; emacs is a userland tool, and doesn't have kernel modules :-)

Re:I RTFA twice and thought to myself... (4, Funny)

brunes69 (86786) | about 7 years ago | (#19937903)

I once considered writing a kernel emacs accelerator module, but later decided it would be easier to just run Linux inside of emacs!

Re:I RTFA twice and thought to myself... (0)

Anonymous Coward | about 7 years ago | (#19938457)

What are they going to do next? Merge emacs?

Yes - with vi.

But the real question is... (0)

Anonymous Coward | about 7 years ago | (#19937727)

...will it run Linux?

What about kqemu? (0)

Anonymous Coward | about 7 years ago | (#19937803)

It's full-featured, doesn't require CPU VT support, and is widely used (i.e. tested).

Is the linux kernel community going through a NIH stage?

Re:What about kqemu? (1)

QuantumG (50515) | about 7 years ago | (#19938003)

If the kqemu folks want to integrate their kernel components into the kernel they can. It's not the Linux developers going out looking for things to add to the Linux kernel... or them developing their own solutions.. or anything like that. All of these technologies have been added to the kernel tree by the people who maintain them.

As a testament to my lack of knowledge... (1)

jimktrains (838227) | about 7 years ago | (#19937871)

...why should virtualization technology be incorporated into the kernel, and not kept outside, as a "3rd" party app? Shouldn't the kernel be essentially a library and some low level support (multi-tasking, handle certain interrupts, that sort of stuff)? I've never really even considered bash, or even ls as part of the kernel. Am I just really mistaken, or is the word kernel used more broadly than that?

Re:As a testament to my lack of knowledge... (0)

Anonymous Coward | about 7 years ago | (#19937933)

Because THAT IS NOT HOW LINUX IS DESIGNED.

Go read up on Monolithic vs Microkernel design and then you'll know.

FYI the VM stuff in the kernel is minuscule; if you want Linux to do something that'll actually save you space, have them strip out all the broken arches from the kernel tree, or better yet everything other than x86, so we don't end up with a 300 meg source tree only 100 or less of which we actually use!

Re:As a testament to my lack of knowledge... (1)

jimktrains (838227) | about 7 years ago | (#19938063)

My question has nothing to do with monolithic vs microkernel. My question has to do with why are these programs being including with the kernel.

Only one hardware branch of the kernel gets compiled, and yes, I know I can choose not to compile many things into the kernel, and do so whenever I compile it.

See the post below you for an answer that was helpful. Compare that to your answer, and figure out how to answer a question instead of trying to belittle someone.

Re:As a testament to my lack of knowledge... (2, Insightful)

QuantumG (50515) | about 7 years ago | (#19937969)

The hardware support for virtualization is in the kernel.

Just like the hardware support for webcams is in the kernel.

Re:As a testament to my lack of knowledge... (1)

jimktrains (838227) | about 7 years ago | (#19938043)

See, now, that would make sense. So it's not the entire virtualization programs, just hardware hooks and drivers, basically? Meaning that there still needs to be a separate program to take care of actually running things and what not?

Re:As a testament to my lack of knowledge... (2, Informative)

QuantumG (50515) | about 7 years ago | (#19938067)

Yes. Thing is, bare x86 metal can do virtualization.. you just gotta be creative. There's a lot of ways to do it, utilizing different parts of the hardware. So there's some solutions that work great for some things and some solutions that work great for others. It's like having two drivers for the same bit of hardware and choosing which one to use based on how you're using the device.

Then there's para-virtualization.. modifying the kernel of the guest OS so you don't even need anything in the kernel. Well, sometimes kernel support can help para-virtualization :)

Re:As a testament to my lack of knowledge... (1)

jimktrains (838227) | about 7 years ago | (#19938103)

Thanks, that all makes sense now.

Re:As a testament to my lack of knowledge... (1)

Chris Snook (872473) | about 7 years ago | (#19938973)

You're thinking of a microkernel. Most modern operating systems have a monolithic virtual memory model, in that a large number of system services run in the kernel memory space, but they use dynamic linking to achieve a degree of modularity. That said, the Linux kernel internal API is fairly fluid, so any code that runs in kernelspace has to be maintained quite regularly to keep up with the changes. Merging your code into the main tree makes this much easier.

Bash and ls are still userspace. All of these virtualization implementations have userspace tools that control them, but they need some help in kernelspace to set up the virtual memory mappings, and that's the code that's been merged.

does anyone actually use a VM.... (0, Troll)

LingNoi (1066278) | about 7 years ago | (#19937937)

... on the desktop? I only have Ubuntu installed and I don't see why a VM is such a massive feature these days? Have I missed something amazing that I can do on these or is it simply for a cool "hey I can run a desktop on a desktop!"

I understand that application compatibility is a big deal but Linux has a zillion apps already.

I just don't get all the marketing surrounding it.

Re:does anyone actually use a VM.... (2, Informative)

billbaggins (156118) | about 7 years ago | (#19937965)

It's a big help for software developers needing to support multiple platforms/versions. At my company we provide support for the past 5 or 6 versions of our software, so I have a VM for each version that I fire up when I need to check something or patch a bug. Lots easier than dealing with multiple physical machines.

Re:does anyone actually use a VM.... (1)

raftpeople (844215) | about 7 years ago | (#19939389)

I understand the desire for VM's so this question really isn't about that, but why can't you have 5 or 6 versions of your software on 1 box? When I worked for an ERP company it was pretty common for our servers to have multiple versions of the software.

Re:does anyone actually use a VM.... (0)

Anonymous Coward | about 7 years ago | (#19938097)

I use them for development and testing, no way do I want to litter my main install with software I only want to try out. There are further potential advantages such as deployment, portability and COW [wikipedia.org] images. Stick to /home/ling/beastiality-pics/ if you don't see any advantages.

Re:does anyone actually use a VM.... (1)

ShieldW0lf (601553) | about 7 years ago | (#19938245)

If you still need access to Adobe products like Photoshop for print production, like my GF does, there's nothing available on Linux that will do the job.

Linux + Xen + W2K lets her leave the windows desktop and still use these tools.

Pretty straightforward.

Yes.

Re:does anyone actually use a VM.... (2, Informative)

drinkypoo (153816) | about 7 years ago | (#19938303)

I only have Ubuntu installed and I don't see why a VM is such a massive feature these days?

I have vmware installed and use it on a regular basis. Here's what for:

  • Windows emulation. Wine is great and good, but it doesn't run everything. Sometimes I want to run some Windows software not supported by Wine. Mostly this takes the form of various (non-3d) games. I have Windows 98 and Windows 2000 VMs. Also cellphone hacking can pretty much only be done under Windows (at least for Motorola) - it's possible to flash only like one format of software image under Linux, whereas I can handle about five on Windows.
  • Linux testing. I can test a LiveCD in a virtual machine without even burning the ISO.
  • Appliances. Excellent for testing/development. I made a Debian LAMP appliance, for example, with everything I needed to run Drupal. When you don't need it, it's turned off, preventing potential security risks and not using any resources (not that an Apache site not getting hits is using a lot of resources.)
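
The LiveCD case really is a one-liner, and plain qemu doesn't even need VT for it (ISO name illustrative):

    qemu -m 512 -cdrom ubuntu-7.04-desktop-i386.iso -boot d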

I've talked about it elsewhere, but I also envision a system using UML (or now, lguest) to separate servers (or groups thereof) away from the main system to reduce security risks. It would let you use selinux with a fairly restrictive policy on your controlling system, and if one of the subsystems is compromised it could easily be discarded and rebuilt.

Re:does anyone actually use a VM.... (1)

moco (222985) | about 7 years ago | (#19938409)

Why have virtualization on the desktop? Good question. Here are a few answers I can think of:

  * Software development, as it has been mentioned in this thread.
  * Testing "stuff", a sandbox to play in before messing with the system, "stuff" being other operating systems, applications, services.
  * security, the secure vm and the unsecure vm running on the same physical hardware.
  * Corporate environments, the user's machine is a vm that can be run on any of the physical PCs on the network.

Re:does anyone actually use a VM.... (1)

Chris Snook (872473) | about 7 years ago | (#19939079)

I do. It's delightfully convenient if you do development work, because you can run tests in something a lot more realistic than a chroot build directory. It's particularly nice if you're doing kernel work. For cluster testing, the only alternative involves $20k worth of hardware.

So, Joe user may not need this, but it's a major feature for the people who work on improving the Linux kernel. That alone justifies including these features.

yes but (0)

Anonymous Coward | about 7 years ago | (#19938035)

Does it run on linux?

GPU support question (4, Funny)

JustNiz (692889) | about 7 years ago | (#19938123)

So do any of these solutions support 3D graphics (nvidia) hardware?
The only reason I currently have a windows partition at all is for gaming.

Being able to run Windows 3D games in a VM would allow me to move to a Linux-only box and also give me a nice way of:
* managing the way windows keeps grabbing disk space
* removing the need to go through reinstalling/reactivating windows every 6 months or so
* limiting the damage Windows viruses can do
* limiting all the phone-home comms with Microsoft that windows keeps doing

Re:GPU support question (2, Informative)

QuantumG (50515) | about 7 years ago | (#19938243)

No. But if/when there is ever an open source nvidia kernel driver with 3d support that isn't completely broken and is integrated into the kernel, you might see some people take an interest in virtualizing it.

Probably the first thing they'll do is make it so X running in a virtual machine can share the same DRM (Direct Rendering Manager) as X running on the host. Of course, that's not much good to a Windows guest.

Re:GPU support question (2, Interesting)

EvilRyry (1025309) | about 7 years ago | (#19938549)

So do any of these solutions support 3D graphics (nvidia) hardware?
The only reason I currently have a windows partition at all is for gaming.

I recently read an article on the progress of just this. It sounds pretty cool and the initial results are impressive. This combined with the DX->OpenGL Wine code, that I'm sure will be open sourced from the makers of parallels (just had a slashdot story on this), makes for an exciting future for providing hardware acceleration to guest applications.

More information: http://www.cs.toronto.edu/~andreslc/vmgl/ [toronto.edu]

Re:GPU support question (1)

Chris Snook (872473) | about 7 years ago | (#19939143)

Not very well. Xen with PCI pass-through might work here, but that requires having a dedicated graphics card for each OS. 3D video generally involves some amount of writing directly from userspace to hardware, without any kernel interaction after initial setup. This is difficult to do right in all cases with virtualization, but they are working on it.

Just buy Cedega and be done with it.
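
If you do want to try the pass-through route, Xen 3.x does it in roughly two steps: hide the card from dom0, then hand it to the guest (the PCI address here is hypothetical -- use lspci to find yours):

    # on the dom0 kernel command line:
    pciback.hide=(01:00.0)
    # in the guest's config file:
    pci = [ '01:00.0' ]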

User Mode Linux (0)

Anonymous Coward | about 7 years ago | (#19938249)

I would just like to mention User Mode Linux (http://user-mode-linux.sourceforge.net/), which was included in the mainline kernel a long time ago (well before KVM, as I remember).

Clarification of these technologies (4, Informative)

GiMP (10923) | about 7 years ago | (#19938475)

Each of Xen, KVM, lguest, and UML can be considered virtualization products but they are all vastly different. Below I describe each of these products in relation to their inclusion to the Linux kernel.

Xen - the Linux kernel supports code allowing it to be run as a guest underneath the Xen kernel, all through software. Linux's support for Xen does not make Linux a virtualization platform, only a GUEST for the Xen kernel which sits at Ring-0. (though a "dom0" Linux system can interact intimately with the Xen kernel, it actually sits at Ring-1). I should note that the Xen kernel also supports hardware virtualized domains, though this is unrelated to the patches to Linux.

KVM - the Linux kernel supports virtualization of guests through hardware extensions, this requires supported hardware. Linux becomes the Ring-0 kernel.

lguest - (my understanding is) an unmodified Linux kernel can act as a hyper-supervisor through loading Linux kernels as modules. Linux sits as both Ring-0 (supervisor) and Ring-1 (guests). This is experimental with limited features and only supports Linux guests.

UML - the Linux kernel becomes a userspace program. This allows Linux to run as an executable application/program. With UML, Linux can be compiled for a Linux or Microsoft Windows target. The executing OS sits at Ring-0 and the UML program sits at Ring-1. This has the advantage of requiring no modifications to the host OS and is very portable (you could email an entire Linux system to a friend without requiring anything installed to their system), but the disadvantage of poor performance.

From a high level, the products UML, Xen, and lguest are actually very similar in function. They act as architectures to which Linux can be compiled in order to make it a guest OS of another Ring-0 kernel. These architectures provide the targets of a kernel module (lguest), a userspace program (UML), or a xen-domU guest (Xen). On the other hand, KVM is the only patch that is intended to add support to Linux to act as a Ring-0 kernel on behalf of guest systems -- and even then, KVM can be viewed more as a hardware driver for the processor extensions.
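
To make the UML case concrete: booting a UML instance really is just running a program, with the root filesystem image passed as an argument (image name illustrative):

    ./linux ubd0=root_fs mem=256M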

Re:Clarification of these technologies (4, Informative)

_Knots (165356) | about 7 years ago | (#19939209)

Slight corrections:

The UML program sits at ring-3 on x86 machines: it's just a normal user program using the ptrace() mechanism and extensions [except when the host has been patched with SKAS, but even here it's just a "normal user program". Rumor has it that SKAS might eventually make it into mainline, but its time in 'real soon now' is starting to rival Duke Nukem Forever's.]. Rings 1 and 2 are odd, rarely used (IIRC the current virtualization craze and OS/2 are notable consumers) features of the x86, derived from MULTICS. For processors with only two (user & supervisor) modes, identify ring 0 with supervisor mode and the other rings with user mode.

It is a little odd to say that Linux "becomes" the Ring-0 kernel under KVM. It was already running in ring 0.

Re:Clarification of these technologies (3, Interesting)

Per Wigren (5315) | about 7 years ago | (#19939239)

Yes, they are all very different but at the same time quite similar from a user's perspective. All of them (unless I've missed something) more or less emulate a whole machine. This means you have to mess with disk images or dedicated drives/partitions/LVs, allocate a fixed amount of RAM to the guest, among other things.

Personally I like the approach of OpenVZ [openvz.org] and VServer [linux-vserver.org] better. The main OS and the guests all share the same kernel, share the RAM and their root filesystems can be just subdirectories of the host's filesystem. When inside the virtual server you don't realize that though. You only see your own processes and everything works as if it was a dedicated server. You can run iptables, reboot and just about everything you could normally do in XEN/KVM/VMWare. Including live migration of virtual servers to other physical hosts. chroot on steroids.

I really hope OpenVZ and/or VServer will be merged at some point. VServer seems to keep up with current kernel releases, so that wouldn't be too hard to merge, I guess. OpenVZ usually has a lag of something like half a year.
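
For comparison, the day-to-day OpenVZ workflow is all containers, no disk images (container ID and template name illustrative):

    vzctl create 101 --ostemplate debian-4.0
    vzctl start 101
    vzctl enter 101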

Re:Clarification of these technologies (1)

radarsat1 (786772) | about 7 years ago | (#19939299)

Hope this isn't too far off topic...


Xen - the Linux kernel supports code allowing it to be run as a guest underneath the Xen kernel, all through software. Linux's support for Xen does not make Linux a virtualization platform, only a GUEST for the Xen kernel which sits at Ring-0. (though a "dom0" Linux system can interact intimately with the Xen kernel, it actually sits at Ring-1). I should note that the Xen kernel also supports hardware virtualized domains, though this is unrelated to the patches to Linux.


I'm not too familiar with all these virtualization solutions, but this struck me as being somewhat reminiscent of how RTLinux worked for Linux 2.4. I haven't really needed hard RT since the preemptive scheduling in 2.6, but I was wondering if some of this virtualization stuff has been used to implement hard real-time?

(In RTLinux, the Linux kernel ran at a lower priority than your RT code --- this is distinctly different from the soft RT that can be achieved with a high-priority user-mode process in 2.6, despite the fact that you can basically achieve millisecond timing that way.)

but (1)

thatskinnyguy (1129515) | about 7 years ago | (#19938807)

But will it run on... nevermind!