Virtualization In Linux Kernel 2.6.20

kdawson posted more than 7 years ago | from the performance-numbers dept.

Upgrades 178

mcalwell writes with an article about the Kernel-based Virtual Machine (or KVM for short) in the release candidate Linux 2.6.20 kernel. From the article: "[T]he Linux 2.6.20 kernel will include a full virtualization (not para-virtualization) solution. [KVM] is a GPL software project that has been developed and sponsored by Qumranet. In this article we are offering a brief overview of the KVM for Linux as well as offering up in-house performance numbers as we compare KVM to other virtualization solutions such as QEMU Accelerator and Xen."


178 comments


lawlzor (0, Troll)

nude-fox (981081) | more than 7 years ago | (#17535750)

i r teh pwning j00

Oddness in kernel release cycle (2, Interesting)

CRCulver (715279) | more than 7 years ago | (#17535760)

For 2.6.19, there's only been a single patch so far (2.6.19.1). Usually there are more. Was 2.6.19 unusually unproblematic, or has attention been drawn away by the development of new features for 2.6.20?

Re:Oddness in kernel release cycle (2, Informative)

EzInKy (115248) | more than 7 years ago | (#17535804)

I've seen a lot of mentions of file corruption on their mailing list, even with ext3.

Re:Oddness in kernel release cycle (2, Informative)

Spoke (6112) | more than 7 years ago | (#17536442)

The file corruption talked about has been in the kernel for some time, but recent changes made it more visible and easier to trigger. It should be fixed in the latest 2.6.20rc kernel.

If you search the kernel archives for ext3 corruption you'll find a couple long threads discussing the issue and the solution.

Re:Oddness in kernel release cycle (2, Informative)

marol (734015) | more than 7 years ago | (#17535812)

Quoting Torvalds from the 2.6.19 release announcement:
'So go get it. It's one of those rare "perfect" kernels.'

Re:Oddness in kernel release cycle (1)

Askmum (1038780) | more than 7 years ago | (#17536836)

I remember a high 1.1 kernel (1.1.81?) which was announced in the same manner.
It turned out to be the worst since 0.1.

Re:Oddness in kernel release cycle (5, Informative)

arivanov (12034) | more than 7 years ago | (#17535924)

No, the attention has been drawn from people actually giving a fuck.

Kernels from 2.6.9 onwards are a disaster.

  • PIO IDE causes a deadlock on Via chipsets under heavy IO from 2.6.11 onwards. Worst in 2.6.16, but still reproducible on others.
  • IDE TAPE no longer works from 2.6.10 onwards
  • IDE-SCSI no longer works from 2.6.10 onwards at least up to 2.6.16
  • LONGHAUL is broken to some extent since 2.6.9
  • There is a change in fundamental APIs - termIO (2.6.16), locking (2.6.15), scheduling (every second f*** kernel), etc. - with every release, so it takes a full-blown porting effort and untangling of unrelated changes to backport fixes to a driver.

The original idea was that "distributions will fork off and maintain kernels for releases". This idea has degenerated into "only distributions can fork and maintain a kernel". Sole developers and hobbyists are being treated the same way Microsoft treats them - as a "one night stand". In fact, even distributions are unable to keep up with that. Fedora has half of these bugs in it. So does etch, so does Mandriva and all the other lesser distributions. Only RHEL and SUSE ship something reasonably usable, and it is a year behind on features.

Reality is that anything past 2.6.9 should be called 2.7.x and that is it. And it may be seriously worth it to consider Gentoo/BSD or Debian/BSD. While the BSD crowd has its own failings, it does not change fundamental APIs for entertainment purposes every month on the stable branch.

Re:Oddness in kernel release cycle (2, Funny)

ArsonSmith (13997) | more than 7 years ago | (#17535938)

Way to go Linus. Tell them distros to Fork off!!!

Re:Oddness in kernel release cycle (-1, Offtopic)

Anonymous Coward | more than 7 years ago | (#17535990)

Hate to break it to you, buddy, but 2.6 is NOT the stable branch. 2.4 is. Come back and talk to us about Linux kernel development when you ACTUALLY understand what is happening.

Re:Oddness in kernel release cycle (2, Informative)

Ryan Mallon (689481) | more than 7 years ago | (#17536050)

Umm, what? According to http://www.kernel.org/ [kernel.org] 2.6.19.1 is the latest stable version. Stable versions are denoted by an even second version number; odd numbers are for development versions.

Re:Oddness in kernel release cycle (5, Insightful)

Builder (103701) | more than 7 years ago | (#17536610)

That information is really outdated. The main developers decided that we wouldn't have a development kernel anymore, and would instead just develop in the stable tree. Genius! Now we have all the benefits of an unstable API / ABI combined with the benefits of flaky support... Go team!

Re:Oddness in kernel release cycle (2)

advocate_one (662832) | more than 7 years ago | (#17537124)

yup.. I'm pig sick of hardware not working after kernel updates. Going to Ubuntu Dapper from Breezy lost me DVD burning... I daren't "upgrade" to Edgy for fear of something else more vital breaking... and I've pinned my current kernel as well, as I'm sick of having to re-install NVIDIA and VMware with every security update...

I really, really wish they'd go back to the "proper" even stable, odd development cycle. Distros had a chance then and could backport what they wanted from the development tree.

Re:Oddness in kernel release cycle (1)

rbanffy (584143) | more than 7 years ago | (#17538850)

I keep a second partition for those things. Now, after going dapper->edgy, mine is used to hold videos, but, before it, it held my Dapper root. In a couple months, it will hold a Feisty Fawn installation so I can get my feet wet. I keep jumping partitions and trying not to dist-upgrade. As much as I love APT, I don't trust it blindly.

Mod... Parent... Up (4, Insightful)

Builder (103701) | more than 7 years ago | (#17536622)

I feel your pain, deeply! A stable API / ABI is absolutely vital for ISV support and the new development model means that you can only get this if you're prepared to pay a large amount of money for your distribution. I don't want to have to pay $1500 for RHEL, but that's the only way I can run an Oracle dev server on a quad box with 16GB ram. The amusing thing is that RHEL is the ONLY piece of software I have to pay for on that machine - our site license gives us free licenses for dev and DR :)

Anyone other than SLES or RHEL is a second class Linux citizen today. Without vendor support you can forget about trying to run a stable Linux kernel anymore. Bring back the old odd / even split!

Re:Mod... Parent... Up (2, Interesting)

Anonymous Coward | more than 7 years ago | (#17536650)

Just use Solaris. You get to run all the Lunix source and binaries and all the Solaris ones too, the ABI is stable over many years, and it has many more useful features than Lunix. Also the virtualisation stuff has been in Solaris a lot longer. Oh, and it handles SMP and NUMA better, and it has ZFS.

Re:Mod... Parent... Up (3, Informative)

Kjella (173770) | more than 7 years ago | (#17536864)

Anyone other than SLES or RHEL is a second class Linux citizen today. Without vendor support you can forget about trying to run a stable Linux kernel anymore. Bring back the old odd / even split!

Well, first off there's CentOS if you don't need the support. Secondly, while the kernel guys are happy hacking away at 2.6.x, there are other distributions like Debian and Ubuntu LTS which will support a stable API/ABI for several years.

Yes, 2.6 keeps breaking now, but does anyone remember the bad old days when distros were backporting hundreds of patches from 2.5 to 2.4? What the distros are shipping now is closer to a vanilla kernel, for better and for worse. They pick one version, stay with it and stabilize it. That's what SLES, RHEL and all the other distros do.

Re:Mod... Parent... Up (1)

Builder (103701) | more than 7 years ago | (#17537416)

I agree on CentOS - I should have mentioned that, my bad.

With that said, what is the cost of these distros providing long-term support? Firstly, there is more and more divergence between the distros over time. The patches that each comes up with to backport specific security features will be different, if only slightly. The patches that each comes up with to backport a highly requested feature will be slightly different. Over time these slight differences will add up to become real differences between the distros.

We don't want fragmentation - we want to know that if something works on 'Linux' it should work on any distro we choose. Getting the userspace right was hard enough, but the LSB went some way towards standardising libraries, etc. Now that we have userspace on the mend, the frikking kernel starts going off at tangents all over the place.

Just look at the differences between a SLES and a RHEL kernel - fragmentation is already starting. And I don't want to know how both of these differ from Ubuntu :(

I miss Alan Cox maintaining the stable kernel tree. Doing maintenance isn't sexy or cool, but he was bloody good at it and with him, stability was a primary concern, not new features.

Re:Mod... Parent... Up (4, Insightful)

gmack (197796) | more than 7 years ago | (#17538408)

The patches that each comes up with to backport specific security features will be different, if only slightly. The patches that each comes up with to backport a highly requested feature will be slightly different. Over time these slight differences will add up to become real differences between the distros.

Distros should NEVER backport features. That's the whole point of the new development system. If you want a stable kernel, stay with the point release you're on and just add the security/stability patches. If you want new features, use a newer kernel.

That right there was the exact problem with the old even/odd split. The time between the two ended up being so great that people/vendors would start backporting features and destabilizing the "stable series" kernel.

Distros forking the kernel has always been an annoyance, so it's nothing new either. I've been playing the "which distro has the drivers I need" game since 2.0.x, and it got to the point where I just never use distro kernels anymore; I just compile my own and add that to the installer.
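In practice, staying on a point release and picking up only the -stable fixes looks something like this (a sketch; the version numbers are just illustrative of how kernel.org ships incremental patches):

# unpack the base release you are staying on
tar xjf linux-2.6.19.tar.bz2
cd linux-2.6.19

# apply only the incremental -stable fix release; no new features come along
bzcat ../patch-2.6.19.1.bz2 | patch -p1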

Re:Mod... Parent... Up (1)

kv9 (697238) | more than 7 years ago | (#17536904)

I don't want to have to pay $1500 for RHEL, but that's the only way I can run an Oracle dev server on a quad box with 16GB ram.

couldn't you just download 50CentOS?

Re:Mod... Parent... Up (2, Funny)

TheLink (130905) | more than 7 years ago | (#17538124)

Shush. Let him keep paying for it.

Then you keep getting all that work he's paying for, for free :).

Re:Mod... Parent... Up (2, Informative)

thue (121682) | more than 7 years ago | (#17537098)

If people really wanted the old stable versions then they would be using 2.6.16.y, which is still being maintained using the same old stable policies as 2.4.

http://en.wikipedia.org/wiki/Linux_kernel#Versions [wikipedia.org]

The fact that most people don't seem to run 2.6.16 seems to indicate that people are happy to forgo some stability in exchange for having the new features in the latest 2.6.x kernel available now.

Re:Mod... Parent... Up (1)

Builder (103701) | more than 7 years ago | (#17537452)

The 2.6.x.y tree is there to solve a completely different problem from the one solved by the 2.even.x and 2.odd.x scheme.
With 2.6.x.y, only fixes to that kernel are added. No new features are added. Ever.

With the 2.even.x tree, new features were added, but they were stabilised first. The aim (although not always achieved, see NPTL threads for example) was to NOT break the API / ABI during the life of that kernel series. So if I had a driver or a piece of software that worked on 2.4.1, it should STILL work on 2.4.16. My graphics card shouldn't stop working just because I upgrade my kernel.

Like I say, this wasn't always the case and the NPTL threads issue caused me no end of nightmares. Hint - never set NPTL_VERSION=2.4.1 and then install RPMs :) But it was better than it is now, and at least the developers were making an effort to provide something that people could download, compile themselves and use. Now they just have the distros do that, and the hobbyist is out in the cold. More importantly, as I said in my other post, this is causing fragmentation between distros over time.

Here's a grand thought! (0)

Anonymous Coward | more than 7 years ago | (#17538028)

Remember that little thing called the GPL?

Yep, basically if you're not happy with the changes the Linux devs have been making, then fork it; and seeing how things are currently, I'm sure the sane Linux devs would jump aboard rather quickly.

I agree fully. I used to grab Linux kernels for certain things; now I'm glad I use a distro that maintains kernels for me. Linux as of late has become a clone of Microsoft in all of the bad ways. Linux is becoming broken for the sole reason that someone was too lazy to create a new branch for a new kernel and said "fuck it, people want to play with the unstable code, so let's pollute the stable tree with buggy code!"
I recently had trouble compiling a new kernel source because of inane errors, and when I asked about this, the response was "go fuck yourself".

When did this attitude start? When I started using Linux, everything was set apart neatly: you knew the kernel you were downloading was stable, that anything wrong would be remedied, and that no unstable code was in the fresh tree. I see why some people have chosen to stick with 2.4 (though it only works great if you don't use a desktop with modern features..)

Seriously, this is embarrassing; now Microsoft CAN call Linux out for being no better.

I pray someone with some coding skills will actually try to fork the kernel and organize it in a sane and stable manner, whilst keeping compatibility.

It's kind of hard to think of, but considering XFree86 is now fading into obscurity after being dethroned roughly 2 years ago by X.Org, which created a friendlier, more active, and saner fork, I don't see a Linux fork as a far-fetched idea. It's bound to attract some developers from the main source, possibly.

Re:Oddness in kernel release cycle (0)

Anonymous Coward | more than 7 years ago | (#17536770)

When I go to http://bugzilla.kernel.org/query.cgi [kernel.org] and enter keywords like "LONGHAUL", "IDE TAPE", "IDE SCSI" and "VIA", I cannot find any of the bugs or critical failures you have mentioned. Where did you get this information?

Re:Oddness in kernel release cycle (0)

Anonymous Coward | more than 7 years ago | (#17538710)

LKML.

Indeed. There are reasons Slackware is still on 2.4 (1)

Viol8 (599362) | more than 7 years ago | (#17536952)

And the reasons you cite above are some of them. People think Pat and the team are stuck in the past, but he probably has a better handle on how Linux kernel development has gone down the toilet with 2.6 than many people.

Re:Oddness in kernel release cycle (2, Informative)

gmack (197796) | more than 7 years ago | (#17538272)

IDE-SCSI no longer works from 2.6.10 onwards at least up to 2.6.16

IDE-SCSI never worked properly. I've had constant problems with it since I started CD burning on Linux. Thankfully it is now obsolete thanks to the new ATA drivers, since ATA devices just show up on the system as SCSI devices. If you really need SCSI support for IDE devices, I highly suggest trying the new drivers.

Re:Oddness in kernel release cycle (1)

davek (18465) | more than 7 years ago | (#17538644)

This worries me, a lot. I remember how pissed I was when I first jumped back into Linux a few years ago and tried to compile a device driver. I quickly realized that EVERYTHING I had spent months learning back in college about Linux devices was now completely bunk. This is open source, isn't it? The whole point is to be able to hack it. You can't hack it if you have to learn an entirely new API every few months.

Perhaps it's time to stop the Linus-worship anyway, and go with the HURD:

http://www.gnu.org/software/hurd/gnumach.html [gnu.org]

-dave

What about BSD/BSD? (0, Troll)

Generic Player (1014797) | more than 7 years ago | (#17539248)

Why would you want the horrible mess of GNU bloatware and random crap that is a debian or gentoo userland? Just try one of the BSDs, they have much nicer userlands already on their own, no need for Debian/ or Gentoo/ at all.

Re:Oddness in kernel release cycle (3, Informative)

jnana (519059) | more than 7 years ago | (#17536250)

I'm not sure in general, but I've been happily using 2.6.19 for a while with no issues.

As for kvm, I downloaded it about a week ago and manually built and installed it (on 2.6.19), and I've had no trouble with it at all. It was very easy to build and install following the instructions [sourceforge.net] , and creating images and installing a new os on them is trivial. I set up a couple of images for experimenting with ubuntu and fedora (my main os is gentoo), and I set up another image on which I installed Plan 9, just to play around with that a little.

Simple Q: will this run Win XP as a guest? (2, Insightful)

Anonymous Coward | more than 7 years ago | (#17535786)

Cutting right to the chase here, if I have this new kernel, and a CPU that supports it (only the latest generation from Intel and AMD do), I should be able to install Windows XP as a guest OS and run it in a window on my Linux machine? That would be very cool and could really help the adoption of Linux. I know I can do something like this with VMWare right now, but if it's built in to the kernel that would be even better. And yes I would have to buy a new machine with one of these current-generation CPUs to be able to do that, but it's worth it to get that anyway.

At the same time, we have Wine making great progress and able to run a whole bunch of useful Windows apps without even needing any virtualization, so Linux is soon going to assimilate everything!

Re:Simple Q: will this run Win XP as a guest? (1)

EvanED (569694) | more than 7 years ago | (#17535848)

I know I can do something like this with VMWare right now, but if it's built in to the kernel that would be even better.

Better why?

Keeping in mind that they have an active interest in promoting this view, a VMWare paper [vmware.com] states that their software is substantially faster (we're talking an order of magnitude less overhead in some microbenchmarks) than hardware VM.

Re:Simple Q: will this run Win XP as a guest? (5, Informative)

eno2001 (527078) | more than 7 years ago | (#17535976)

My experience so far...

After playing around with paravirtualization with Xen for the past two-plus years, I finally got the cash in August to buy a cheapo AMD dual-core 64-bit system (~$800 at Best Buy: an HP system with a 4200 and 2 gigs of DDR2 RAM). I've run both Xen and QEMU on it under 64-bit Gentoo Linux. The performance of Windows XP on Xen vs. QEMU is fairly close. I would have to say that it seems to me that where Xen suffers is disk I/O. Anything that's disk intensive seems to eat up the CPU. I suspect this wouldn't be the case on better hardware with a high-performance SCSI/RAID system. That should, at least, make things a bit better anyway. But for the time being I'm sticking with Xen since it's just too easy to use, and I am especially interested in the live migration features. As long as you have centralized disk storage, you can move live VMs between physical hosts with less than a second of interruption (i.e. your users will never notice). Keep in mind, I'm doing this all at home, as I'd really like to collapse many of my machines into one or two boxes and keep everything else as simple X displays where GUIs are needed. I've currently got four VMs running on the box, with two of them fully virtualized (Windows XP SP2 for accessing DRMed crap, and Red Hat Linux 7 which still hosts some services I don't want to part with) and the other two paravirtualized (Domain0, which is just the VM management environment, and my Gentoo Asterisk "PBX"). Paravirtualized performance is damn amazing. I think if I used strictly paravirtualized OSes I could probably squeeze out 20 VMs from this guy with decent performance. I actually just added two more gigs to the system tonight, and if I assume 128 megs per virtual machine (I've allocated 512M to the Windows XP VM) I can get up to 32 VMs running simultaneously.

As far as KVM goes, I've had a good deal of experience with QEMU, and if KVM is similar, there are some limitations I hope they will overcome. (For what it's worth, the hardware-based virtualization in Xen is also a modified QEMU process, called qemu-dm.) The main one is PCI device allocation. Xen allows you to partition your PCI devices and assign individual cards to specific VMs. I don't think QEMU does this, and I expect that KVM doesn't either.
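For readers who haven't seen it, the Xen-style PCI assignment the parent describes is done roughly like this (a minimal sketch for a Xen 3.0-era setup, assuming the pciback driver is available; the device address is made up):

# hide the card from dom0 so it can be handed to a guest
echo 0000:00:1d.0 > /sys/bus/pci/drivers/pciback/new_slot
echo 0000:00:1d.0 > /sys/bus/pci/drivers/pciback/bind

# then, in the domU config file, assign it to that guest
pci = [ '0000:00:1d.0' ]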

Re:Simple Q: will this run Win XP as a guest? (1)

moco (222985) | more than 7 years ago | (#17536410)

I would have to say that it seems to me that where Xen suffers is disk I/O. Anything that's disk intensive seems to eat up the CPU. I suspect this wouldn't be the case on better hardware with a high performance SCSI/RAID system. That should, at least, make things a bit better anyway.
I have not used Xen yet, but I think VMware has problems with the same thing. The reason behind this problem is the "virtual disk": viewed from host to guest you have host raw disk -> host FS -> virtual disk file -> guest FS. The solution to that problem is to use native partitions for your guest OS (especially recommended when running databases or other disk I/O intensive apps). Can Xen work with native partitions instead of virtual disk files? If so, does your performance improve?

Re:Simple Q: will this run Win XP as a guest? (1)

buchanmilne (258619) | more than 7 years ago | (#17537084)

Can Xen work with native partitions instead of virtual disk files?
Yes

Re:Simple Q: will this run Win XP as a guest? (1)

GiMP (10923) | more than 7 years ago | (#17538520)

> If so, does your performance improve?

Yes, it improves significantly when using a native partition. I use Xen in the enterprise, using software raid + LVM to create partitions for Xen. There are also users on the Xen lists reporting success combining SANs, software raid, and LVM for high availability.
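For anyone wondering what that looks like in practice, here is a hedged sketch of the two styles in a Xen 3.0-era domU config file (the paths and volume names are hypothetical):

# file-backed virtual disk: every guest I/O goes through the host filesystem
disk = [ 'file:/var/xen/guest1.img,hda1,w' ]

# LVM logical volume exported directly as a block device to the guest
disk = [ 'phy:/dev/vg0/guest1,hda1,w' ]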

Re:Simple Q: will this run Win XP as a guest? (0)

Anonymous Coward | more than 7 years ago | (#17536858)

I guess the other part of my problem is lack of documentation or understanding of what to do. I have a Suse 10.1 system here. I have some Windows XP disks. Now what? Do I need to buy anything else to get it working? How do I install all this and make it work?

Re:Simple Q: will this run Win XP as a guest? (1)

mnemotronic (586021) | more than 7 years ago | (#17538708)

...where Xen suffers is disk I/O. Anything that's disk intensive seems to eat up the CPU.

I don't understand why that would be. A disk is slow - glacial - by processor standards. The disk I/O subsystem should submit a request to the disk, then free up the kernel/system to go off and do other things. "other things" may eventually become "wait around for the disk subsystem", but I thought that would show up as idle time.

Acronym overload (4, Insightful)

phoebe (196531) | more than 7 years ago | (#17535788)

Couldn't they just use a different acronym? How about KbVM?

Re:Acronym overload (2, Funny)

PacketShaper (917017) | more than 7 years ago | (#17535868)

Let me get this straight... your solution to "acronym overload" is to *add* a character.

It's opposite day again, isn't it?

Re:Acronym overload (4, Informative)

X0563511 (793323) | more than 7 years ago | (#17535904)

It's not really a problem when you have lots of letters in an acronym. It's more of a problem when you have at least three different things in the same industry with the same acronym. [wikipedia.org]

Re:Acronym overload (1)

darkjedi521 (744526) | more than 7 years ago | (#17536040)

You missed libkvm found in many BSD releases. Kernel Virtual Memory interface.

Re:Acronym overload (0, Offtopic)

doti (966971) | more than 7 years ago | (#17536588)

So why don't you add it [wikipedia.org] to Wikipedia, instead of posting it here?

Re:Acronym overload (2, Informative)

dunstan (97493) | more than 7 years ago | (#17537058)

Strictly it's not an acronym unless it is commonly pronounced as a word.

NATO is an acronym, KVM isn't.

Re:Acronym overload (1)

aug24 (38229) | more than 7 years ago | (#17537182)

+1 Insightful? Were you aiming for +1 funny (Key b oard Video Mouse) and the mods just didn't get it?!

J.

Wrong audience for this article (-1, Troll)

Anonymous Coward | more than 7 years ago | (#17535808)

These days, all of the people who would really know anything beyond the basics about virtualisation have either left for digg or other tech sites. All you're going to get is a few sad 'first post' trolls and a couple of annoying threads that will be so vapid they'd make the Yahoo message boards look profound in comparison.

The tech world - meaning the people who would know anything about technology beyond running loonix in their mom's basement - has left slashdot behind.

Isn't it time you did too?

This public service announcement has been brought to you by anti-slash.org, the GNAA, Hal Turner and the generous donations of readers like YOU.

mirror (0, Informative)

Anonymous Coward | more than 7 years ago | (#17535838)

already slashdotted.



For only being a release candidate the Linux 2.6.20 kernel has already generated quite a bit of attention. On top of adding asynchronous SCSI scanning, multi-threaded USB probing, and many driver updates, the Linux 2.6.20 kernel will include a full virtualization (not para-virtualization) solution. Kernel-based Virtual Machine (or KVM for short) is a GPL software project that has been developed and sponsored by Qumranet. In this article we are offering a brief overview of the Kernel-based Virtual Machine for Linux as well as offering up in-house performance numbers as we compare KVM to other virtualization solutions such as QEMU Accelerator and Xen.

What has been merged into the Linux 2.6.20 kernel is the device driver for managing the virtualization hardware. The other component that comprises KVM is the user-space program, which is a modified version of QEMU. Kernel-based Virtual Machine for Linux uses Intel Virtualization Technology (VT) and AMD Secure Virtual Machine (SVM/AMD-V) for hardware virtualization support. With that said, one of the stated hardware requirements for KVM is an x86 processor with one of these technologies. The respective technologies are present in the Intel Core series and later, Xeon 5000 series and later, Xeon LV series, and AMD's Socket F and AM2 processors.

The Kernel-based Virtual Machine also runs every virtual machine as a regular Linux process, handled by the Linux scheduler, by adding a guest execution mode. With the virtual machine being a standard Linux process, all standard process management tools can be used. The KVM kernel component is included in Linux 2.6.20-rc1 kernels and newer, but the KVM module can also be built on older kernels (2.6.16 to 2.6.19). At this stage, KVM supports Intel hosts, AMD hosts, Linux guests (x86 and x86_64), Windows guests (x86), SMP hosts, and non-live migration of guests. Still being worked on are optimized MMU virtualization, live migration, and SMP guests. Microsoft Windows x64 does not work with KVM at this time.
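Because each guest is just a process, the standard tools really do apply as-is. A small sketch, assuming the KVM userspace shows up as a process named qemu (the exact name depends on how the modified QEMU binary is installed):

pgrep -l qemu                  # list running guests as ordinary PIDs
renice +5 -p $(pgrep qemu)     # lower a guest's scheduling priority
taskset -pc 0 $(pgrep qemu)    # pin a guest to CPU 0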

Whether you are using a kernel with KVM built in or loading it as a module, the process for setting up and running guest operating systems is quite easy. After setting up an image (qemu-img will work with KVM) and loading the KVM kernel component, the modified version of QEMU can be used with the standard QEMU arguments to get you running.
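A minimal sketch of that workflow, assuming the KVM-modified QEMU binary is installed as qemu-system-x86_64 and an installer ISO is at hand (the file names are hypothetical):

# create a 10GB qcow2 disk image for the guest
qemu-img create -f qcow2 guest.qcow2 10G

# boot the guest from the install CD with 512MB of RAM
qemu-system-x86_64 -hda guest.qcow2 -cdrom install.iso -boot d -m 512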

The hardware requirements for KVM are an x86/x86_64 processor with AMD or Intel virtualization extensions and at least one gigabyte of system memory, so there is enough RAM for the guest operating system. For our purposes, we used two dual-core Intel Xeon LV processors with the Linux 2.6.20-rc3 kernel, which was released on January 1, 2007. Below is the rundown of system components used.
Hardware Components
Processor: 2 x Intel Xeon LV Dual-Core 2.00GHz
Motherboard: Tyan Tiger i7520SD S5365
Memory: 2 x 512MB Mushkin ECC Reg DDR2-533
Graphics Card: NVIDIA GeForce FX5200 128MB PCI
Hard Drives: Western Digital 160GB SATA2
Optical Drives: Lite-On 16x DVD-ROM
Cooling: 2 x Dynatron Socket 479 HSFs
Case: SilverStone Lascala LC20
Power Supply: SilverStone Strider 560W
Software Components
Operating System: Fedora Core 6

The benchmarks we used for comparing performance were Gzip compression, LAME compilation, LAME encoding, and RAMspeed. The virtualization environments we used were QEMU 0.8.2 with the kqemu accelerator module, Xen 3.0.3, and finally KVM. We also compared these virtualized environments against running Fedora Core 6 Zod without any form of virtualization. During the Xen 3.0.3 testing we used full virtualization, not para-virtualization. The image size was set to 10GB during the testing process. The operating system used throughout the entire testing process was Fedora Core 6 Zod.

Looking over the virtualization performance results, KVM was not the clear winner in all of the benchmarks. KVM took the lead in Gzip compression, but in the other benchmarks it stumbled behind Xen 3.0.3. However, both Xen with full virtualization and the Kernel-based Virtual Machine performed ahead of QEMU with the QEMU Accelerator in our selected benchmarks using dual Intel Xeon LV processors with Intel Virtualization Technology. The benefits of KVM are high performance, stability, no required modifications to the guest operating system, and a great deal of other capabilities (e.g. using the Linux scheduler). Once the Linux 2.6.20 kernel is officially out the door we will proceed with a greater number of KVM benchmarks in various environments, including a look at hardware virtualization performance on AMD versus Intel.

If you have tried out Linux KVM, be sure to share your results in the Phoronix Forums.

Performance Comparisons (1, Interesting)

EvanED (569694) | more than 7 years ago | (#17535878)

Why no comparison against VMWare or native?

(VMWare I can kind of see, if they were deliberately sticking to all free solutions, but no comparison to running on the host system? That's just bad reporting IMO.)

Mod me down! (2, Insightful)

EvanED (569694) | more than 7 years ago | (#17535894)

Okay, I read the charts wrong because I'm apparently an idiot. Native times are the first bar in each graph.

Though VMWare would still have been nice...

VMWare performs better - here's why (3, Interesting)

Anonymous Coward | more than 7 years ago | (#17536060)

VMWare will perform *much* better on any workload with heavy process thrashing, especially forking (such as the LAME compilation or anything that does an autoconf configure and make). This is due to the Intel and AMD virtualization extensions not going far enough to handle Unix-style OS workloads well (hardware-assisted MMU and/or TLB virtualization support is lacking). Context switching takes a heavy toll. Windows doesn't do it so much, so it won't suffer as much.

Also, only AMD's SVM supports full-virtualization of x86_64. Intel doesn't implement that.

VMWare works by dynamically scanning/translating native x86 and x86_64 code for protected instructions before executing it, so it does not need the hardware extensions to work. That also means VMware performs better by not using the new CPU features.

Re:Mod me down! (3, Interesting)

Bert64 (520050) | more than 7 years ago | (#17536866)

I heard that the VMware license specifically excludes the right to benchmark it, or at least to publish those benchmarks.

Re:Mod me down! (3, Informative)

WNight (23683) | more than 7 years ago | (#17538060)

There's no valid way to enforce post-sale contracts, EULAs aren't valid.

Re:Performance Comparisons (1)

ArsonSmith (13997) | more than 7 years ago | (#17535946)

Or existing hardware KVMs. I can switch between 8 machines on one KVM and can even chain them together if I need more.

Hah (1)

bruce_the_loon (856617) | more than 7 years ago | (#17536574)

Hah, I can handle 16 machines on one piece of hardware [dlink.com] .

Re:Performance Comparisons (1)

Curtman (556920) | more than 7 years ago | (#17536794)

Why no comparison against VMWare or native?

I read this [kerneltrap.org] the other day on Kerneltrap (with their new look - love it or hate it) which seems to say that paravirtualization support has been added to KVM. They have several very impressive benchmarks which include native (but not VMWare).

Re:Performance Comparisons (1)

tjcrowder (899845) | more than 7 years ago | (#17538148)

VMWare I can kind of see, if they were deliberately sticking to all free solutions...

VMWare Server [vmware.com] is free (as in beer). It's not open (free as in freedom), granted, but it's free.

Apples to Oranges (3, Interesting)

X0563511 (793323) | more than 7 years ago | (#17535880)

So... we can compare Xen and KVM to Qemu now? The next time nVidia updates their drivers we should benchmark them against MESA OpenGL...

Xen and KVM utilize (require, if I remember correctly) support for virtualization-specific processor instructions. QEMU does not.

Re:Apples to Oranges (1)

goaty_the_flying_sho (861224) | more than 7 years ago | (#17536112)

Yeah, well it seems the initial KVM test is a modified version of QEMU.

How do you like them apples?

Re:Apples to Oranges (3, Interesting)

Bottlemaster (449635) | more than 7 years ago | (#17536380)

Xen and KVM utilize (require, if I remember correctly) support for virtualization-specific processor instructions. QEMU does not.
Xen (and surely KVM) hardly require virtualization ISA extensions. In fact, Xen was around (and incredibly useful) before these extensions even existed. As far as I know, the advantage of using Xen on a processor with virtualization extensions is that one can run an un-modified guest OS under Xen. Like Windows. Many open source operating systems have had Xen-specific support for a while. Thanks to these extra instructions, Windows has "caught up".

Re:Apples to Oranges (1)

k8to (9046) | more than 7 years ago | (#17537800)

TFA is talking about full virtualization as opposed to paravirtualization. Xen does require virtualization ISA instructions to achieve this, as opposed to VMWare, which achieves it through much trickery. KVM is full-virtualization only, and only runs with these ISA instructions.

It was only a few pages of text, about 10 paragraphs.

Re:Apples to Oranges (4, Informative)

repvik (96666) | more than 7 years ago | (#17536526)

Xen requires a P6 or better at this time (available for ~5 years). They hope to add support to ARM and PPC at a later time. KVM, OTOH, depends on brand-spanking new CPUs with virtualization instructions. QEmu just requires some CPU-thingy.

Re:Apples to Oranges (1)

Jacek Poplawski (223457) | more than 7 years ago | (#17537016)

Wrong, kqemu does.

Re:Apples to Oranges (1)

timeOday (582209) | more than 7 years ago | (#17538824)

Even VMWare does not make use of the virtualization-specific processor instructions, because they claim [vmware.com] they don't help:
32-bit VT works, is not tuned, and won't be officially supported unless it can offer the same performance that users of 32-bit VMs expect. Which probably won't be for another generation or two of VT-like instructions.

At this point, 32-bit VT is about as useful as support for a 387 math coprocessor on a Pentium - in both cases, the overhead of the support wipes out the gains. 64-bit VT is necessary because Intel CPUs need that to run 64-bit guests (and it is tuned such that performance is similar to 64-bit non-VT); 32-bit VT just isn't necessary, unless you have a reason why it should be?

Why do you want 32-bit VT support? In what case is 32-bit VT desirable?

Not sure what their results for 64 bit are.

kqemu? (1)

advocate_one (662832) | more than 7 years ago | (#17535886)

is it no longer required to get full speed out of qemu then?

Re:kqemu? (3, Informative)

popeydotcom (114724) | more than 7 years ago | (#17536202)

As I understand it, KVM makes use of the VT instructions present in modern CPUs to make QEMU nice and zippy. Older CPUs don't have those instructions, so they would still "need" kqemu to make QEMU go full speed.

Re:kqemu? (1)

pembo13 (770295) | more than 7 years ago | (#17536584)

How old a CPU are you talking about? Better yet, got a link?

Re:kqemu? (2, Insightful)

popeydotcom (114724) | more than 7 years ago | (#17537006)

On Linux it's easy to tell if you have VT..

egrep '^flags.*(vmx|svm)' /proc/cpuinfo

if that returns anything you have VT, if it doesn't, you don't.

Here's what I get on my desktop (Intel Core 2 Duo).

alan@wopr:~$ egrep '^flags.*(vmx|svm)' /proc/cpuinfo
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe lm constant_tsc pni monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr lahf_lm
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe lm constant_tsc pni monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr lahf_lm

There is a list of supported chips on the Wikipedia page (http://en.wikipedia.org/wiki/X86_virtualization).
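If the flag is there, the remaining step is just loading the right module for your CPU vendor before starting the modified QEMU (a sketch, assuming KVM was built as modules rather than built in):

modprobe kvm          # core KVM module
modprobe kvm-intel    # Intel VT hardware (vmx flag); use kvm-amd for the svm flag
ls -l /dev/kvm        # the device node the userspace side talks to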

Re:kqemu? (1)

advocate_one (662832) | more than 7 years ago | (#17537068)

lucky beggar... ;) my best chip is a 2.4GHz Celeron D... I thought it was the bee's knees when I got it, and I also found I could run that hacked OSx86 on it... now I'm considering a full upgrade of my box... or else building a better one from scratch. Just as easy to build from scratch and pass the old one down to my granddaughter to run Edubuntu on... she loves using that when she visits.

from about a month back ... (3, Informative)

Gopal.V (532678) | more than 7 years ago | (#17535960)

Does the Dec 12th story [slashdot.org] make this one a dupe, or was it just early warning?

Re:from about a month back ... (1)

Sax Maniac (88550) | more than 7 years ago | (#17539954)

Neither. The stories have now been virtualized.

Call me when... (2, Insightful)

hondamankev (1000186) | more than 7 years ago | (#17536014)

they can virtualize XP under Linux, with hardware graphics acceleration and full DX9+ support.

Re:Call me when... (1)

October_30th (531777) | more than 7 years ago | (#17536406)

Why was this modded as troll?


The lack of hardware graphics acceleration and DX is a serious show-stopper.

Re:Call me when... (5, Funny)

nacturation (646836) | more than 7 years ago | (#17536690)

Why was this modded as troll?
He didn't provide a phone number.
 

Re:Call me when... (1)

Curtman (556920) | more than 7 years ago | (#17536878)

The lack of hardware graphics acceleration and DX is a serious show-stopper.

It depends what you are doing with it. WTF would I care about graphics acceleration in the data center? Or POS? Or anywhere except maybe some lab workstations with graphics heavy apps.

Re:Call me when... (2, Insightful)

rbanffy (584143) | more than 7 years ago | (#17538946)

He probably wants to run Linux for work and still be able to run GameOS in his/her spare time.

Re:Call me when... (0)

Anonymous Coward | more than 7 years ago | (#17539494)

That is not the reason why most people want virtualization.

Virtualizing servers (consolidation) is all the rage.

About Time (1)

JmarsKoder (35286) | more than 7 years ago | (#17536082)

Its about time, this was a long time comming. The next step is to build in some binary translation. Any volunteers.

Wow, u of v must have low standards. (0)

Anonymous Coward | more than 7 years ago | (#17539578)

You can seriously get into universities in the US without grasping the basics of 3rd grade english?

come -> coming
shame -> shaming
hum -> humming
dim -> dimming

Notice a pattern there? This isn't fucking complicated, my 8 year old manages just fine.

Multicomputing (1)

H3xx (662833) | more than 7 years ago | (#17536140)

I'm wondering what effect this will have on parallel computing / clustering.

Re:Multicomputing (1)

Fred_A (10934) | more than 7 years ago | (#17536270)

The main effect is that you can now run half a dozen different virtualizing technologies in parallel instead of just one or two.

Whether this will get you more babes is left as an exercise to the reader.

KVM, QEMU, and Qemudo (3, Interesting)

this great guy (922511) | more than 7 years ago | (#17536456)

This is likely to boost the popularity of QEMU, the virtualizer accelerated by KVM. An interesting coincidence is that I released the very first version of Qemudo [sourceforge.net] on Jan 4th while being totally unaware of the existence of KVM. Then three days later the KVM project released their first version too, and I read about it in this kerneltrap article [kerneltrap.org].

I am thrilled at the idea of using KVM + QEMU + Qemudo together. To put it simply, and to quote my README, Qemudo is "a Web interface to QEMU offering a way for users to access and control multiple virtual machines running on one or more remote physical machines." Qemudo makes use of two important features in QEMU: native support for VNC, and copy-on-write disk images for instantaneous VM creation. If you are interested, go check out the website (and download the tarball, which contains more detailed docs). </shameless-plug>
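The copy-on-write images mentioned above are plain qemu-img functionality rather than anything Qemudo-specific; a sketch with hypothetical file names:

# one pristine base image, installed once
qemu-img create -f qcow2 base.qcow2 10G

# each new VM gets an instant, thin overlay that stores only its own changes
qemu-img create -f qcow2 -b base.qcow2 vm01.qcow2
qemu-img create -f qcow2 -b base.qcow2 vm02.qcow2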

paravirt KVM on the way (5, Informative)

ens0niq (883308) | more than 7 years ago | (#17536212)

> [T]he Linux 2.6.20 kernel will include a full virtualization (not para-virtualization) solution.

Yep. But Ingo Molnar (yes, the Hungarian kernel hacker) announced [kerneltrap.org] a new patch introducing paravirtualization support for KVM.

KVM name is misleading (1)

pwizard2 (920421) | more than 7 years ago | (#17536218)

I don't like the name... KVM makes it sound like it's part of KDE, when it is not. SVM (Sun Virtual Machine) would be better, IMO.

Re:KVM name is misleading (4, Interesting)

Jessta (666101) | more than 7 years ago | (#17536326)

    3 ?  00:00:00 ksoftirqd/0
    5 ?  00:00:00 khelper
    6 ?  00:00:00 kthread
    8 ?  00:00:00 kblockd/0
    9 ?  00:00:00 kacpid
  102 ?  00:00:00 kseriod
  105 ?  00:00:00 khubd
  176 ?  00:00:00 kswapd0
  784 ?  00:00:00 kpsmoused
  814 ?  00:00:00 khpsbpkt
  818 ?  00:00:00 knodemgrd_0

seems to fit in with the naming convention of all the kernel related processes.

Re:KVM name is misleading (0)

Anonymous Coward | more than 7 years ago | (#17537472)

> seems to fit in with the naming convention of all the kernel related processes.

No it doesn't. I don't have any physical devices with those names on my desk or in my server racks, whereas KVM switches are commonplace.

Re:KVM name is misleading (0)

the_humeister (922869) | more than 7 years ago | (#17538526)

Hmmm... when did the KDE team take over kernel development?!?

Re:KVM name is misleading (1)

White Yeti (927387) | more than 7 years ago | (#17539196)

I agree, just because KVM means "keyboard, video, mouse" to me. As with many initialisms, I see there are several possible meanings [wikipedia.org].

benchmarks (2, Interesting)

Jacek Poplawski (223457) | more than 7 years ago | (#17537000)

Benchmarks in the article show that it is slower than Xen.
Do you know why?
Xen requires some support from the virtualized operating system; what about KVM?

Re:benchmarks (1)

popeydotcom (114724) | more than 7 years ago | (#17537024)

No, KVM doesn't require the guest to be modded. I have virtual machines that I have been running for ages under QEMU (both with the proprietary kqemu module and without). I just started running those same images with KVM, and they Just Work (TM).

Re:benchmarks (1)

vinsci (537958) | more than 7 years ago | (#17539592)

Benchmarks in the article show that it is slower than Xen. Do you know why?
The test was done without the new KVM MMU optimizations that were included in Linux 2.6.20-rc4 (the tests in the article were done with Linux 2.6.20-rc3). The new optimizations give an almost 20x speedup [gmane.org] for context switches, with further optimizations still possible.

Re:benchmarks (0)

Anonymous Coward | more than 7 years ago | (#17539766)

Xen requires changes in the virtualised operating system if you are running it on a processor without virtualisation instructions (VT, Pacifica), or if you want to bypass these instructions because they're not exactly quick (and AMD's Pacifica is meant to be far quicker than VT, but still slow). Otherwise you can run the operating system unmodified (e.g., Windows).

Important project (0)

Anonymous Coward | more than 7 years ago | (#17537116)

The kernel should be good at caching and scheduling (and is also an excellent choice for access to host hardware - via the QEMU layer, of course). KVM is based on experience with Xen and paravirt ops, and because skilled hackers are working on it, I expect it to surpass or at least match systems like Xen or VMware in performance and in some features. In the future, I believe special schedulers/managers will be added to the kernel for managing VM resources (once KVM matures enough).

Poor scientific practice (3, Informative)

piranha(jpl) (229201) | more than 7 years ago | (#17537304)

Why do they document the model of CD-ROM drive they used, but not the configuration of each emulation/simulation environment? I was shocked by the LAME compile times - and forced to wonder and guess what the filesystem configuration was. Is the filesystem located in an image file on the "host" computer's filesystem? Wouldn't it be interesting to try using a comparable medium across all benchmarks (a shared NFS server, or low-level access to the same block device)?

Not enough data (CPU time vs. real time, etc.), not enough benchmarks (different filesystem media, etc.), poor documentation (configuration, anyone?), on what doesn't even amount to an official release. Correct me if I'm wrong.

It will be too little, too late (1)

Anonymous Coward | more than 7 years ago | (#17537392)

It's amazing, but it seems to me as if every plan adopted by the kernel dev team always falls into the description in the topic. Right now I agree that the whole concept has evolved right into that which they tried to avoid in the early days: namely, a massive and sluggish machinery which isn't open to sensible comments and outside input and basically works under the rule "We know best". Disclaimer: this is fully from my personal point of view. I'm not claiming to be right, but it sure as hell seems this way to me.

Why? Well, for starters, we have the combined kernel tree. Developing in the kernel: people can claim how well this is working all they want, but when looking at the results I see a completely different picture. Basically an enormous overhead for anyone who is trying to maintain a kernel (maintenance is no longer done by the kernel dev team but bestowed onto others). A lot of people warned this would happen, but agreed that you wouldn't know until you tried. Now we're nearing the point of no return; you can see that some people can no longer cope, and as such start to either combine their work (Fedora), plan to stop their consumer work (more overhead for free is basically a loss for a company), or simply move on to other platforms. But still, no one is listening... Eventually they will, but I sure hope it's not too late then.

Why do I think so? Because I also think that the kernel dev team is very busy trying to reinvent the wheel. Virtualisation in the kernel? I for one recall having played with User Mode Linux. Granted, it was sluggish, but with the SKAS patch it started to run pretty decently. All it really needed was more native kernel support, done in a good way. But we all know what happened here: basically a massive wall was formed (once again, in my experience), disallowing the original author from developing his baby to its full potential. Finally a small part actually made it into the kernel, but naturally it was broken and you STILL needed a 3rd-party patch to make it work to its full potential. And of course the SKAS patch, without which this wouldn't run that well.

Wouldn't it have made more sense to spend more time making User Mode Linux more adaptable and allowing it to be implemented into the kernel in a native way? A lot of users cried out for this, but the kernel dev team, in their wisdom or sheer arrogance, never bothered. And now, several years later, what do we see? Plans for implementing virtualisation in the kernel? Too little, too late, boys!

Right now I can see what this project (User Mode Linux) might have become when looking at Sun Solaris and its virtualisation support. It's pretty neat: basically a Solaris environment running on top of the host in a shielded way. It does utilize the same kernel, but because of the RBAC model (and other ways of securing the OS) this doesn't have to be a security hazard IF used correctly. Linux has SELinux, which has enough potential to help with that, and when it comes to running Linux on Linux there is User Mode Linux...

So pardon me while I laugh a little. IMO this really is too little, too late, and they have only their own arrogance to thank for it. Meanwhile my attention has already been sucked into Solaris and OpenSolaris, and I for one don't plan to look back anymore. Keep it up!

The real problem (1)

Amazing Quantum Man (458715) | more than 7 years ago | (#17539908)

The real problem is, of course, the braindead x86 ISA that won't support full self-virtualization without special "extensions".

The 68K family was fully virtualizable back in the late '80s (from the 68020 on).

The numbers are a little deceiving (1)

Anthony Liguori (820979) | more than 7 years ago | (#17539946)

2.6.20 will be the first real release of KVM. This benchmark used 2.6.20-rc3. For 2.6.20-rc4, a new shadow paging implementation was introduced (memory virtualization) that is significantly faster than what was present in -rc3. I've only got microbenchmarks handy, but context switch time, for instance, improved by about 300%.

I suspect if they reran their benchmarks with -rc4, the KVM numbers would be much more competitive with the Xen numbers (although I do suspect Xen will still be on top--slightly).