Hardware Virtualization Slower Than Software?

Hemos posted more than 7 years ago | from the the-jury-is-still-out dept.

Jim Buzbee writes "Those of you keeping up with the latest virtualization techniques being offered by both Intel and AMD will be interested in a new white paper by VMWare that comes to the surprising conclusion that hardware-assisted x86 virtualization oftentimes fails to outperform software-assisted virtualization. My reading of the paper says that this counterintuitive result is often due to the fact that hardware-assisted virtualization relies on expensive traps to catch privileged instructions while software-assisted virtualization uses inexpensive software substitutions. One example given is compilation of a Linux kernel under a virtualized Linux OS. Native wall-clock time: 265 seconds. Software-assisted virtualization: 393 seconds. Hardware-assisted virtualization: 484 seconds. Ouch. It sounds to me like a hybrid approach may be the best answer to the virtualization problem."

197 comments

Sponsored by VMWare.. what do you expect? (5, Insightful)

thegrassyknowl (762218) | more than 7 years ago | (#15897444)

See title... VMWare make software virtualisation products. Of course they're going to try and find that software methods are better.

Re:Sponsored by VMWare.. what do you expect? (3, Insightful)

cp.tar (871488) | more than 7 years ago | (#15897452)

Even so, they may be at least partially right.

Besides, if a hybrid approach is necessary, VMWare will need to adjust as well. Or am I missing something?

Re:Sponsored by VMWare.. what do you expect? (4, Informative)

mnmn (145599) | more than 7 years ago | (#15897487)

If you search back on VMware vs XenSource, you'll see VMware is doing everything to discredit Xen and hardware hypervisors. Instead of saying 'it doesn't work', it's more effective to say it works, we have it too, but it fails on its own so it needs our software too. From everything I've read about hypervisors, including the Power CPU hypervisors from IBM (which have been functional for years) and the original Cambridge paper that created Xen, hypervisors really outperform software solutions. You do need a software mini-OS as the root on top of which you'd install the OSes, which is better than using Windows as the root OS.

But Vmware's agitation is understandable. They're about to lose it all to an open source project. Where have I seen this before?

Re:Sponsored by VMWare.. what do you expect? (0)

Anonymous Coward | more than 7 years ago | (#15897582)

Where have I seen this before?

Hmmmmmmmmmmmmmmmmmmmmmm.... Unix vs. Linux?

Re:Sponsored by VMWare.. what do you expect? (4, Informative)

julesh (229690) | more than 7 years ago | (#15897673)

From everything I've read about hypervisors including the Power CPU hypervisors from IBM (which have been functional for years) and the original Cambridge paper that created Xen, Hypervisors really outperform software solutions.

Note that Xen's original hypervisor implementation *is* a software solution -- it relies on rewriting the guest operating system kernel so that the kind of hardware traps VMware are talking about here are unnecessary. Note also that it worked flawlessly before the virtualisation technology (e.g. Intel VT) that VMware is testing was available.

This HAS happened before - with Stacker (2, Informative)

tomhudson (43916) | more than 7 years ago | (#15897986)

This won't be the first time software beats hardware.

The original Stacker product was a combination of a hardware card and software. Think of the hardware card as an accelerator for doing the compression/decompression.

The hardware was faster on the oldest machines, but on anything above a 286/12 (I had a 286/20 at the time), or almost any 386, it ran faster without the hardware card. And on every 486, the card was useless.

So, while you may want to "consider the source" of this news, this is only one factor to weigh. As time goes on, I'm sure we'll see more studies, benchmarks, etc.

Remember, there are 3 things that are inevitable in a programmers' life - death, taxes, and benchmarks.

Re:Sponsored by VMWare.. what do you expect? (1)

pe1chl (90186) | more than 7 years ago | (#15897683)

Where have I seen this before?

Citrix?
Not an open source product, and they didn't lose to an open source product, but they made a product that has been largely made superfluous because MS built it right into the OS.

Re:Sponsored by VMWare.. what do you expect? (2, Interesting)

XMLsucks (993781) | more than 7 years ago | (#15897695)

Where have you seen VMware discrediting XenSource? I haven't seen that. Can you back this up with some links? Searching for "VMware vs Xensource" was fruitless for me. And searching for "VMware discredits XenSource" was also fruitless.

But Vmware's agitation is understandable. They're about to lose it all to an open source project. Where have I seen this before?

I'll let you in on a secret: if you consider all costs, and return on investment, using VMware is a competitive advantage over using Xen. But I don't care whether you believe me, because if you don't, you'll be at a competitive disadvantage, which is to my benefit.

Re:Sponsored by VMWare.. what do you expect? (0)

Anonymous Coward | more than 7 years ago | (#15897827)

I'll let you in on a secret: if you consider all costs, and return on investment, using VMware is a competitive advantage over using Xen. But I don't care whether you believe me, because if you don't, you'll be at a competitive disadvantage, which is to my benefit.

Okay, maybe you two have a history I don't know about, but you'd need to know a hell of a lot about his business and the systems he's using in order to be able to make even a guess at the relevant costs and potential return. It seems likely that you're spewing crap.

Oh please ... (0)

Anonymous Coward | more than 7 years ago | (#15898104)

I'll let you in on a secret: if you consider all costs, and return on investment


The instant that somebody starts to mention ROI or TCO, you can be certain that they have no actual facts on their side.

Re:Sponsored by VMWare.. what do you expect? (0)

Anonymous Coward | more than 7 years ago | (#15898156)

The original Cambridge paper was a crock.

They compared Xen "Domain-0" vs. VMware Workstation. That's effectively comparing VMware's host against VMware's workstation.

i.e. Dom-0 has relatively direct access to the hardware (hardware interrupts are just replaced with Xen's event mechanism). On the other hand, Dom-Us and VMware virtual machines don't have direct access and are much, much slower because of it.

But that's not why the paper was a crock. The paper was a crock because they described all the things they did to make the Dom-Us "fast", but never tested them or showed us those numbers.

To most people in the systems research community, Xen isn't special at all from a research perspective. The only thing that's special about it is that it's OSS, which a) gives it mindshare and b) lets people do other interesting hypervisor research. (And I'm not discounting the value of "b"; that's a pretty good contribution in and of itself.)

Re:Sponsored by VMWare.. what do you expect? (3, Insightful)

zerogeewhiz (73483) | more than 7 years ago | (#15897453)

Haven't read it, but I wonder if they were using VT/Pacifica chipsets or no...

It's like Apple's claim that their Intel jobbies are 5x faster - a bit silly and very, very specific...

And yes, VMWare are hardly likely to mention that Xen-style virtualisation is going to be better now, are they?

Re:Sponsored by VMWare.. what do you expect? (5, Insightful)

XMLsucks (993781) | more than 7 years ago | (#15897496)

VMware sells both hardware-accelerated and software virtualization products. They implemented full support for VT (how else would they benchmark it? Plus they were the first to support VT). If you run VMware on 64-bit Windows, then you use VMware's VT product. But because VMware's original software method is faster than the VT method on 32-bit, they continue to use the software approach.

VMware's paper is a typical research paper, published at a peer-reviewed conference. This means that they have used the scientific method. The chances are 99.9999% that you will easily reproduce their results, even if changing the benchmarks.

I, on the other hand, am smart enough to see that they are stating the obvious. If you read the Intel VT spec, you'll see that Intel does nothing for page table virtualization, nor anything for device virtualization. Both are extremely expensive, and besides sti/cli, are the prime candidates for hardware assists. Intel will likely solve this performance issue in future revs, but right now, VT isn't fast enough.

Hmmm, virtualisation? Do you happen to work on Xen?

 

Re:Sponsored by VMWare.. what do you expect? (1)

geniusj (140174) | more than 7 years ago | (#15897627)

I see you mentioned most of what I was 'clarifying' in my post. Sorry, I didn't read the whole thing ;-). So about the only new info is that VMware runs both 32 and 64-bit AMD VMs in software mode.

Re:Sponsored by VMWare.. what do you expect? (2, Insightful)

andreyw (798182) | more than 7 years ago | (#15897769)

If VMWare's solution still needs a host OS (I remember them using a stripped-down Linux for their server offering), then no... they might use a subset of VT, but it's not a true hypervisor.

And by the way... yes... device virtualization is still not there, but your page tables claim is bullshit. If you read the VT (and the SVM) docs, you would realize that you can implement shadow page tables RIGHT NOW. The hardware assists are there.

Re:Sponsored by VMWare.. what do you expect? (1, Redundant)

XMLsucks (993781) | more than 7 years ago | (#15897849)

What are you talking about in regards to a true hypervisor? You don't need a true hypervisor to use VT. The Linux kernel could use VT to run VMs (Xen is then completely unnecessary for open source virtualization).

And by the way... yes... device virtualization is still not there, but your page tables claim is bullshit. If you read the VT (and the SVM) docs, you would realize that you can implement shadow page tables RIGHT NOW. The hardware assists are there.

Of course VT supports shadow page tables; how else could it virtualize? The problem is that it isn't accelerated. The guest OS needs to translate its virtual addresses to physical, like so: v --> p. The hypervisor needs to translate the guest's physical pages to machine pages, like so: p --> h. The TLB needs the final translation of: v --> h. VT offers no acceleration to promote v --> p --> h into the TLB. Currently, the hypervisor must maintain a shadow page table with v --> h, which the hardware automatically adds to the TLB. But the hypervisor must manually perform the translation of v --> p --> h, to add to the shadow page table. That is slow. Future revs of VT will automatically do the v --> p --> h. If you believe that happens now, then show me what I misunderstand, or implement support for it and disprove VMware's performance paper.
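
To make the v --> p --> h composition concrete, here is a minimal C sketch of the bookkeeping the VMM does in software (toy page-table arrays and names of my own invention, not VMware's or Intel's actual data structures): on a shadow-table miss it walks the guest's v --> p mapping, then its own p --> h mapping, and caches the composed v --> h entry where the real MMU can find it.

    #include <stdio.h>

    #define NPAGES 16   /* toy address space: 16 pages at each level */

    /* Guest page table: guest-virtual page -> guest-"physical" page (v -> p). */
    static int guest_pt[NPAGES];
    /* Hypervisor map: guest-"physical" page -> host machine page (p -> h). */
    static int machine_map[NPAGES];
    /* Shadow page table the real MMU walks: v -> h. */
    static int shadow_pt[NPAGES];

    /* On a shadow-table miss the VMM composes the two mappings by hand.
       This manual v -> p -> h walk is the software work that current VT
       hardware does not take off the VMM's plate. */
    static int shadow_fill(int vpage)
    {
        int ppage = guest_pt[vpage];     /* v -> p, read from the guest's tables */
        int hpage = machine_map[ppage];  /* p -> h, the hypervisor's own mapping */
        shadow_pt[vpage] = hpage;        /* cache v -> h where the MMU can use it */
        return hpage;
    }

    int main(void)
    {
        guest_pt[3] = 7;      /* guest maps virtual page 3 to its "physical" page 7 */
        machine_map[7] = 12;  /* hypervisor backs guest-physical 7 with machine page 12 */
        printf("vpage 3 -> hpage %d\n", shadow_fill(3));  /* prints 12 */
        return 0;
    }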

Re:Sponsored by VMWare.. what do you expect? (0)

Anonymous Coward | more than 7 years ago | (#15897498)

They're in the x86 space. At the moment, there is basically no option for hardware virtualisation.

Also, you forget where VMWare is actually targeted. They're giving the basic virtualisation away for free, for goodness' sake! Where they're making the money is the management of virtualisation - things like VMotion that can move a VM from one physical machine to another, live, without missing a beat. For this sort of thing it doesn't matter whether it's hardware or software virtualisation underneath.

Re:Sponsored by VMWare.. what do you expect? (5, Informative)

Anonymous Coward | more than 7 years ago | (#15897513)

See title... VMWare make software virtualisation products. Of course they're going to try and find that software methods are better.

Disclaimer: I work for VMware.

  1. VMware already supports VT, but it's not enabled by default because for normal workloads it's slower. If VT really were faster, do you really think we'd be choosing to use a slower approach and making customers unhappy?
  2. Even Intel admits the first generation of VT hardware wasn't so great and now claims that they were aiming for correctness instead of performance:

Re:Sponsored by VMWare.. what do you expect? (1)

dmiller (581) | more than 7 years ago | (#15897885)

Is AMD's Pacifica virtualisation system any better?

Yes, AMD Pacifica seems to be far better (3, Interesting)

Morgaine (4316) | more than 7 years ago | (#15898072)

Is AMD's Pacifica virtualisation system any better?

Apparently, yes, and by a good margin.

There are several documents and articles out there which point out VT's problems and how Pacifica is quite dramatically better. Here's an excerpt from "AMD Pacifica turns the nested tables" [theinq.net], part 3 of an informative series of articles:

  • The basic architecture of the K8 gives AMD more toys to play with, the memory controller and directly connected devices. AMD can virtualise both of these items directly while Intel has to do so indirectly if it can do so at all.

    This should allow an otherwise identical VMM to do more things in hardware and have lower overhead than VT. AMD appears to have used the added capability wisely, giving them a faster and as far as memory goes, more secure virtualisation platform."

So, it looks like AMD are ahead on hardware virtualization at the moment.

If I read it correctly, this is because Intel's VT actually requires a lot of software intervention, so it's not actually a very strong hardware solution at all.

Re:Sponsored by VMWare.. what do you expect? (1)

sco08y (615665) | more than 7 years ago | (#15898113)

now claims that they were aiming for correctness instead of performance

But hey, let's hear it for correctness!

Re:Sponsored by VMWare.. what do you expect? (4, Informative)

arivanov (12034) | more than 7 years ago | (#15897531)

While they offer software virtualisation products, they are also interested in these products having hardware assistance. The AMD and Intel specs were designed with input from them (among other vendors).

As far as the results go, there is nothing surprising here. This has happened before. Fault-driven emulation of the 80287 was nearly 50% slower than compiled-in emulation. There were quite a few other x86 examples, all of which revolve around the fact that x86 fault handling in protected mode is hideously slow. The last time I had a look at it in asm was in the 386 days, and the numbers were in the 300-clock-cycle range for most faults (assuming no wait on memory accesses). While the 486 and Pentium improved things a bit in a few places, the overall order remains the same (or even worse, due to memory waits). Anything that relies on faults in x86 is bound to be hideously slow.

Not that this matters, as none of the VM technologies particularly cares about resources. They are deployed because there is an excess of resources in the first place.

Re:Sponsored by VMWare.. what do you expect? (1)

Alcoholic Synonymous (990318) | more than 7 years ago | (#15897587)

Additionally (addressing TFA's hybrid comment), since most of these systems using virtualization are x86-based or compatible, a significant portion of the processing for both hard and soft emulation runs straight through the CPU. The only difference is these "traps" for protected operations. Once the kinks are worked out of the hardware version, the software will be history. VMware is doing a bit of preemptive FUDing before they are forced under. "Inexpensive software cycles" in this case cost actual CPU cycles, thus degrading performance overall. In the case of hardware, it's a matter of bringing the chip's trap+process mechanism up to par with current CPU speeds, at which point it will be relatively lossless and potentially faster.

Read the pdf again. (1)

sharper56 (142142) | more than 7 years ago | (#15897809)

The biggest performance win VMware is getting is by stripping out expensive TRAPS/FAULTS and replacing them with appropriate non-faulting instructions, thanks to the software VMM's JIT-compiling nature. It's the same feature that allows some Java code to whip pure C, because the VM is by its nature dynamic and optimizes live for certain cases better than static analysis can.

This type of win will not go away with better HW virtualization, and it offers VMware a better claim to building a more secure virtual environment, as they can logically peer into the code and strip dubious stuff right out.

Re:Read the pdf again. (1)

Alcoholic Synonymous (990318) | more than 7 years ago | (#15897872)

What VMWare is doing is still trapping/faulting, only at the expense of CPU cycles before processing the instruction. Technically, this is called overhead, and hardware can (and will) make this negligible in time. The software will always have to check/replace the instruction before it can process it, and this will always add overhead to the process. Hardware will catch up by making the replacement more or less "in line" and thus negligible: no pre-processing overhead before sending the modified instruction to the hardware. It will either process the dangerous instruction itself (sandboxing) or rewrite the process on the fly.

As far as more secure goes, you can't get any more secure than hardware simply refusing to process the instruction. Sorry. House of Cards vs House of Brick. However, *both* have the potential to be exploitable and insecure, so VMWare is simply FUDing there as well.

Your comparison of C to Java is off base by miles as well. We're not talking about the optimization of compiled code; we are talking about processes running the same code through dedicated hardware or software emulation. In which case, whichever has the faster processor behind it will probably win, depending on the overheads of each method.

Let me state this in much simpler terms. Software wins today, hardware will win tomorrow.

This is the cycle of new tech threatening old tech. Nothing more. And VMware is in the dangerous position of seeing its product's functionality packaged with every major OS's default install. It's time for them to make their money while they can, and attacking the newcomers while they are still in their infancy and playing catch-up is their only recourse.

Re:Read the pdf again. (1)

sharper56 (142142) | more than 7 years ago | (#15898175)

As stated/shown in the PDF, VMware implements binary translation (BT) that reads sections of binary code, creates "translated" code with the problematic op-codes/accesses removed, and alters reads/writes to hit VMM-emulated structures instead of directly accessing HW. While this step absolutely slows down the execution of code the first time through a path, subsequent runs funnel through previously BT'd code. Since the translated code lacks faulting instructions and other items that need emulation, those runs are much faster in the software VMM. The PDF states that faulting code inside the trap-and-emulate mechanism takes 2300+ cycles, while paths through the BT take about 200 cycles. The x86's slow fault handling is the prime culprit here and has been a known problem for many years; I doubt it will be magically fixed because of VMM necessities. So, while not a one-to-one match-up with Java JIT, it follows the same idea: a more dynamic analysis of the code (in this case, the need to take faults vs. emulating those faults while running in a VMM) can allow great speed improvements.
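
A rough C sketch of that reuse effect, with a made-up translation cache and cycle numbers borrowed from the figures quoted above (nothing here is VMware's actual translator): the first pass through a block pays the translation cost, but every later pass runs the cached translated code and skips the trap entirely.

    #include <stdio.h>

    #define NBLOCKS 8

    /* Toy translated-code cache: one slot per guest code block. */
    static char tc_cache[NBLOCKS][64];
    static int  tc_valid[NBLOCKS];

    /* Made-up per-execution costs, loosely echoing the 2300-cycle trap vs.
       ~200-cycle translated-path figures quoted above. */
    enum { TRAP_COST = 2300, BT_PATH_COST = 200, TRANSLATE_COST = 1500 };

    /* "Translate" a guest block: a real BT engine rewrites privileged
       instructions into safe calls into the VMM; here we just record a label. */
    static void translate_block(int block)
    {
        snprintf(tc_cache[block], sizeof tc_cache[block], "translated block %d", block);
        tc_valid[block] = 1;
    }

    static int run_block_bt(int block)
    {
        int cycles = 0;
        if (!tc_valid[block]) {        /* first pass: pay the translation cost once */
            translate_block(block);
            cycles += TRANSLATE_COST;
        }
        return cycles + BT_PATH_COST;  /* every later pass reuses the cached code */
    }

    int main(void)
    {
        int bt = 0, trap = 0;
        for (int i = 0; i < 100; i++) {   /* the same hot block executed 100 times */
            bt   += run_block_bt(3);
            trap += TRAP_COST;            /* trap-and-emulate pays full price every time */
        }
        printf("BT: %d cycles, trap-and-emulate: %d cycles\n", bt, trap);
        return 0;
    }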

Similarly, the software VMM can add code to enhance the security of the emulated systems by analysing access patterns, verifying function returns, memory, and code, and replicating all that information to services off the current machine and outside the normal purview of the OS.

Don't fall for the "hardware wins tomorrow" crap of the hardware engineers. Moore's law is a set of guidelines developed in the early phase of the computing revolution, not an absolute law of nature. I've often seen hardware running simple O^n algorithms replaced by CPU-based processes running O(n) ones, faster. It's fun to watch HW engineers walk away angry and muttering.
   

Re:Sponsored by VMWare.. what do you expect? (0)

Anonymous Coward | more than 7 years ago | (#15897623)

TFA does admit that it is not making an apples-to-apples comparison, but all their graphs show performance comparisons between physical, software VM, and hardware VM. They avoid mentioning Xen by name (apart from one reference), but it would have provided further insight if they'd added paravirtual performance numbers too...

Re:Sponsored by VMWare.. what do you expect? (1)

tgd (2822) | more than 7 years ago | (#15897709)

More importantly they sell virtualization products that do not support VT, and their primary competitor does.

Re:Sponsored by VMWare.. what do you expect? (1, Informative)

Anonymous Coward | more than 7 years ago | (#15897766)

VMware fully support VT, but they don't enable it by default.

Measuring the wrong stuff (1)

CarpetShark (865376) | more than 7 years ago | (#15897935)

Their measurements may be accurate. The question for me is.. what are they measuring? The slowest things about virtualisation for me are: a) swapping and memory use, because I tend to want LOTS of virtualisation, or none; b) peripheral hardware sharing issues, such as 3D video card acceleration; c) handling many users or workloads, so that each doesn't slow the other to a crawl.

If hardware solutions can do a better job of compressing the memory that's not in use (unlikely) or virtualising 3D video, so that many OSes can run in a window with mixed open source and proprietary drivers and perform well, then I'm interested. If it can stop users on a shared hosting machine from bothering each other or getting a terrible responsiveness experience when they ssh in and run some graphical app remotely, then I'm interested in hardware solutions. Otherwise, it's same-old same-old, I guess.

Don't talk if you don't actually know (1)

suso (153703) | more than 7 years ago | (#15898057)

I just found out the hard way that Xen isn't quite ready to do hardware virtualization either. It does support the VT instruction set, but it doesn't handle disk I/O well at all, to the point where you can see up to a 50% performance loss. They say this will eventually be fixed, but that doesn't change the fact that I spent time looking for the right hardware virtualization solution and it still doesn't perform. Software paravirtualization under Xen is probably still better than VMware, though.

So don't be so quick to judge VMware's claims just because they are a for-profit company with an insane EULA.

Justifying their model? (-1, Redundant)

bobintetley (643462) | more than 7 years ago | (#15897446)

Refresh my memory, what model do VMWare use? Oh, that's right! Software assisted virtualisation.

And it's better they say? I'm shocked.

Bias? (0, Redundant)

ikejam (821818) | more than 7 years ago | (#15897450)

I'd like to think of VMware in a different mould than MS, but i'd still hate to take this info in w/o some third party verification.

Re:Bias? (4, Insightful)

RegularFry (137639) | more than 7 years ago | (#15897502)

Insisting on third-party verification of results is hardly damning either of them... It's just scientific. You (and everyone else) are absolutely right to be sceptical, and not just because VMware have a vested interest in this case. They might just be wrong. Or not.

Nevermind who sponsored the study... (1)

Cherita Chen (936355) | more than 7 years ago | (#15897454)

The real question is what type of test was performed... It would make sense that different applications would function differently in a variety of contexts. How about some variance? I dig VMWare, but come on...

Re:Nevermind who sponsored the study... (1)

julesh (229690) | more than 7 years ago | (#15897697)

So why don't you actually read the paper? It has quite a good explanation of what they did. FWIW, it wasn't a clear win for software; there were things the hardware implementation did better, but they're things that don't seem to be quite so important for real-world applications.

Hybrid? Good + Bad = Better? (2, Insightful)

MrFlannel (762587) | more than 7 years ago | (#15897461)

Software-assisted virtualization: 393 seconds. Hardware-assisted virtualization: 484 seconds. Ouch. It sounds to me like a hybrid approach may be the best answer to the virtualization problem.
So, um, a hybrid approach is better because it will take 439* seconds? Why?

* - I imagine in real life it's not a 1:1 ratio, but for the sake of argument, work with me.

Re:Hybrid? Good + Bad = Better? (2, Insightful)

cp.tar (871488) | more than 7 years ago | (#15897478)

I suppose there are certain things hardware virtualisation does better.

The trick is, I'd guess, to find out which works better in which circumstances.

You see that people suspect this white paper because of its origin; they are right in doing so at least because only one type of test has been performed; surely not all computing tasks perform the same way as a kernel compile.
This suggests that VMWare have found the example which supports their claims the best; the question is, of course, whether this is the only such example.

So if we suppose that there are certain types of problems where hardware virtualisation outperforms software virtualisation, hybrid solutions seem to be the right way to go.

P.S. I don't really know what I'm talking about...

Re:Hybrid? Good + Bad = Better? (1)

42forty-two42 (532340) | more than 7 years ago | (#15897893)

They did a lot more than a kernel compile. But I suppose I shouldn't expect people on slashdot to read the article anyway.

CAPTCHA: pitying, how appropriate.

Re:Hybrid? Good + Bad = Better? (1)

Solder Fumes (797270) | more than 7 years ago | (#15897499)

It probably won't work that way at all. This could be more of an additive thing.

For example, say you have a boat powered by a 393-horsepower engine and a 484-horsepower engine. If you run them both at the same time, the net power is not going to be 439 hp.

Software+hardware won't add in nearly the same way, but I wouldn't be surprised if a hybrid approach was 50% faster than either method alone.

Re:Hybrid? Good + Bad = Better? (1)

The Mysterious X (903554) | more than 7 years ago | (#15897537)

That's true, but if you had a boat that was powered by two engines with specialised propellers, one for choppy water and one for smooth water, then the maths changes completely.

If they can pick the *best* features of software VT and the *best* features of hardware VT, then it is possible to create something that is faster than either solution on its own.

Re:Hybrid? Good + Bad = Better? (2, Insightful)

julesh (229690) | more than 7 years ago | (#15897687)

Because if you actually RTFA it shows that the hardware virtualization is faster for some benchmarks (e.g. processing system calls) and slower for others (e.g. performing I/O requests or page-table modifications); if you combine the best features of each you should be able to get a virtual machine that is faster than both.

The correct conclusion is more limited (5, Insightful)

njdj (458173) | more than 7 years ago | (#15897469)

The correct conclusion is not that virtualization is better done entirely in software, but that current hardware assists to virtualization are badly designed. As the complete article points out, the hardware features need to be designed to support the software - not in isolation.

It reminds me of an influential paper in the RISC/CISC debate, about 20 years ago. Somebody wrote a C compiler for the VAX that output only a RISC-like subset of the VAX instruction set. The generated code ran faster than the output of the standard VAX compiler, which used the whole (CISC) VAX instruction set. The naive conclusion was that complex instructions are useless. The correct conclusion was that the original VAX compiler was a pile of manure.

The similarity of the two situations is that it's a mistake to draw a general conclusion about the relative merits of two technologies, based on just one example of each. You have to consider the quality of the implementations - how the technology has been used.

Re:The correct conclusion is more limited (1)

badfish99 (826052) | more than 7 years ago | (#15897578)

The Intel processor design has been a pile of manure ever since the first 8086. On the other hand, the IBM zSeries range of computers has been doing virtualization since the 1960s, and presumably the hardware has been designed to get it right. Can anyone give comparable performance figures for programs running in a virtual machine or the bare metal for a zSeries machine?

Re:The correct conclusion is more limited (1)

keesh (202812) | more than 7 years ago | (#15897638)

You can't run on bare metal on zSeries. The whole architecture just isn't designed to work that way. It has to have the virtualisation layer, or at least something that provides nearly all of the functionality of what would traditionally be considered a virtualisation layer.

Re:The correct conclusion is more limited (1)

Covener (32114) | more than 7 years ago | (#15897838)

You can't run on bare metal on zSeries. The whole architecture just isn't designed to work that way. It has to have the virtualisation layer, or at least something that provides nearly all of the functionality of what would traditionally be considered a virtualisation layer.


Linux can run on the bare metal (the only OS on the entire system), as a first-level image in an LPAR (the LPAR is actually managed by a lightweight hypervisor), and on top of z/VM (itself on top of the bare metal or an LPAR), which is the more heavyweight hypervisor with the sexier resource mgmt / guest OS mgmt.

Re:The correct conclusion is more limited (1, Interesting)

Anonymous Coward | more than 7 years ago | (#15897879)

Disclaimer: I work for IBM (and am hence posting AC since I don't dare be seen as speaking for the company, which I do not).

Grandparent post is correct. As of the z990 series/model (I think), LPAR (lowest level of virtualization) is required. There is no further option for 'bare metal' operation. IOW, what used to be 'bare metal' is now a single LPAR; the difference being that the hypervisor is always engaged. But this is a fairly recent development, and I can't dis the parent post for not knowing.

As pertains to TFA and this subject overall, it took IBM many years to get hardware virtualization 'right', and it's under constant refinement, even now. There were incremental hardware (and OS, if we want to talk z/VM) improvements all the way across the zSeries (and predecessors) line to support it, dating all the way back to the 1970s.

I don't expect that Intel or AMD would have gotten it right on the first shot -- efficient h/w virtualization is not as easy as it sounds it might be. If the benchmarks are correct, I'm not too surprised. Getting there may take Intel/AMD many more years, depending on commercial (read: paying customers with $$$ on the table) demand for efficiency.

If I were on the VMWare or Xen staff, I wouldn't be losing any sleep -- at least for a while, yet.

Re:The correct conclusion is more limited (4, Interesting)

TheRaven64 (641858) | more than 7 years ago | (#15897712)

The easiest architecture to virtualise is the Alpha. It had a single privileged instruction, and all that did was shift to a higher privilege mode (which had a few shadow registers available) and then jump to an address in firmware. The firmware could be replaced by using one of these calls. If you wanted to virtualise it, you could do so trivially by replacing the firmware with something that would check or permute the arguments and then vector off into the original firmware.

It also had a few other advantages. Since you were adding virtual instructions, they all completed atomically (you can't pre-empt a process in the middle of an instruction). This meant you could put things like thread locking instructions in the PALCode and not require any intervention from the OS to run them. The VMS PALCode, for example, had a series of instructions for appending numbers to queues. These could be used to implement very fast message passing between threads (process some data, store it somewhere, then atomically write the address to the end of a queue) with no need to perform a system call (which meant no saving and loading of the CPU state, just jumping cheaply into a mode that could access a few more registers).

Re:The correct conclusion is more limited (2, Funny)

lukas84 (912874) | more than 7 years ago | (#15897724)

The IBM iSeries (identical to the pSeries hardware) also have a hardware HyperVisor.

Their entry models (10k US$) are slow as shit though. Can't say anything about the more expensive machines, but anything that requires around 12 hours to upgrade its operating system can't be trusted.

Re:The correct conclusion is more limited (2, Interesting)

renoX (11677) | more than 7 years ago | (#15897771)

>The naive conclusion was that complex instructions are useless. The correct conclusion was that the original VAX compiler was a pile of manure.

Note that the 'naive conclusion' and the 'correct conclusion' are not contradictory: I remember an article recently where it was shown that the Alpha had three times the power of a corresponding VAX, which nicely made the point that CISC is shit.

Now, as Intel has shown, given enough effort and money even x86, the poorest CISC ISA ever (the VAX ISA was much nicer than the x86 ISA: more registers, orthogonal design), can be competitive, and software compatibility does the rest.

Re:The correct conclusion is more limited (1)

gnasher719 (869701) | more than 7 years ago | (#15897833)

'' Now, as Intel has shown, given enough effort and money even x86, the poorest CISC ISA ever (the VAX ISA was much nicer than the x86 ISA: more registers, orthogonal design), can be competitive, and software compatibility does the rest. ''

This was heavily discussed a while ago on comp.arch. Conclusion: VAX instruction set was an absolute nightmare for hardware designers; while today the problem of making x86 fast in spite of the instruction set is basically solved, making a VAX fast would have taken superhuman efforts.

Re:The correct conclusion is more limited (1)

renoX (11677) | more than 7 years ago | (#15898132)

I'm curious why making a VAX fast is such a problem.

Sure, some VAX instructions such as 'list management' cannot really be made fast, but the x86 also has those kinds of instructions. They are irrelevant, though: they can be trapped and handled by microcode, and compiler writers avoid them because they know they are slower than doing it 'by hand'.

I would have thought the 16 (if memory serves) orthogonal registers would have made a nice target for compilers, contrary to the ridiculous number of (non-orthogonal) registers on x86..

I smell a straw man... (2, Interesting)

itsdapead (734413) | more than 7 years ago | (#15897795)

The naive conclusion was that complex instructions are useless. The correct conclusion was that the original VAX compiler was a pile of manure.

Perhaps the intended conclusion was that it was feasible to write an efficient compiler using only a small subset of the instruction set, intelligently chosen with compiler optimization in mind. Perhaps the fact that the original compiler was (as you assert) "a pile of manure" was not unconnected to the fact that it tried to achieve speed by exploiting the entire, eclectic VAX instruction set (wonder how they worked the famous polynomial instruction in?) instead of sticking to a subset and applying generalised optimization techniques.

PS: If you think RISC lost the war, then remember that modern x86 processors consist of a RISC core with a translator stage to handle all those pesky, legacy CISC instructions.

hardware v/s software (2, Insightful)

toolz (2119) | more than 7 years ago | (#15897479)

When are people going to figure out that "hardware solutions" are really software running on hardware, just like any other solution?

Sure, the instructions may be hardcoded, coming out of ROM, or whatever, but in the end it's instructions that tell the hardware what to do. And those instructions are called "software", no matter how the vendor tries to spin it. And if the solution performs badly, it is because the software is designed badly. Period.

Re:hardware v/s software (1)

Eideewt (603267) | more than 7 years ago | (#15897588)

I don't see how this is a significant distinction. The question, in terms you might prefer, is how virtualization using specialized hardware compares to doing the same thing in general purpose hardware. There doesn't seem to be any semantic difference. Are you just pointing out that a hardware implementation's performance is predicated on its design?

No not really (2, Informative)

Sycraft-fu (314770) | more than 7 years ago | (#15897629)

In the end, software instructions are actually executed on hardware, and that hardware imposes limits on what they do. In the case of virtualization, the problem comes with privilege levels. Intel processors have 4 levels of privilege called Rings 0-3, of which two are used by nearly all OSes: 0 and 3. The kernel and associated code run in Ring 0, everything else in Ring 3. Now, the ring you are in controls what instructions the processor will allow you to execute and what memory you can access. So if software in Ring 3 tries to execute a certain instruction, the processor will just not do it; it'll generate a fault.

Virtualization software has to deal with this: when the computer it's virtualizing wants to execute such an instruction, it can't just hand it off to the processor, it has to deal with it itself. It has to translate it to instructions that can be executed and virtualize what happens, hence the name virtualization.

The idea with hardware support like VT is that the processor itself will take a more active hand. Virtual machines will actually be able to execute Ring 0 instructions on the processor, because they won't really be running in the main Ring 0; it'll create a separate, isolated privilege space for them.

A simpler analogy would be basic math. Suppose you want to multiply two numbers, and suppose you have a processor that only has an add instruction. Well, you'd have to do the multiplication in software, as in you'd have to do an add loop. Now suppose that a new version of that processor adds a multiplication instruction that actually commands a multiplication unit. Now you are doing it in hardware. It is not only less code, but faster, because there's a dedicated unit for it.
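
The analogy in code, purely as illustration (a toy C program, no relation to any real CPU's internals): the "software" path loops over additions, while the "hardware" path is the single multiply the silicon does for you.

    #include <stdio.h>

    /* "Software" multiplication on a CPU that only had an add instruction:
       repeated addition in a loop. */
    static unsigned mul_by_adds(unsigned a, unsigned b)
    {
        unsigned result = 0;
        for (unsigned i = 0; i < b; i++)
            result += a;                 /* one add per iteration */
        return result;
    }

    int main(void)
    {
        unsigned a = 123, b = 456;
        printf("add loop: %u\n", mul_by_adds(a, b));
        printf("hardware: %u\n", a * b); /* a single multiply instruction */
        return 0;
    }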

It's not like companies just whack instructions onto their CPUs for the fun of it; they command different parts of the hardware to do different things. SSE, 3DNow, etc. don't just have the processor run little add or multiply loops; they actually kick on separate sections of hardware designed for SIMD. Hence why they get the results they do.

Re:No not really (1)

Ibag (101144) | more than 7 years ago | (#15897825)

Yes, it is all well and good that hardware virtualization gives you tools that allow you to do virtualization more efficiently. The problem is, why in these tests did software virtualization come out ahead of hardware virtualization? You can dispute the methodology as giving misleading or inappropriate results, but unless they are lying (which is not impossible), you still have the issue that software virtualization performed better.

Imagine a man with a computer and a man with a pen and paper, both tasked with performing a complex calculation. The man with the computer can do everything that the man with just pen and paper can, and more. The man with the computer should be able to perform the calculation faster. If he does, you are happy. However, if he is slower, saying "no, not really, because he has better tools, he wasn't actually slower" isn't going to change the results.

So, argue about the way the test was performed, or argue about why the results are the way they are, but don't try to explain why hardware virtualization is unequivocally better when experiment disagrees.

Re:No not really (3, Interesting)

Sycraft-fu (314770) | more than 7 years ago | (#15897876)

I haven't read the results, and I doubt I have the technical knowledge to analyze them properly. However, if I were to guess why this might be the case, I'd say it's because they didn't do it right. This is a new and fairly complex technology; I somehow doubt it's easy to get right on the first try.

I am not willing, based on a single data point, to make any conclusions. That's tangential to my point anyhow; my point was that doing something in hardware and doing it in software are quite different things.

Do it again with UBUNTU !! It's faster and BETTER (-1, Flamebait)

Anonymous Coward | more than 7 years ago | (#15897489)

Do it again with UBUNTU !! It's faster, it's BETTER, than other Linux distro, and softiers Xp is not contest. Do it NOW !!

"It sounds to me like a hybrid approach may be th" (1)

l3v1 (787564) | more than 7 years ago | (#15897497)

It sounds to me like a hybrid approach may be the best answer

As so many times and in so many cases before, it has proven to be the optimal solution. What gives? The good thing is that we have all these alternatives, and every VM company will try to evaluate, then optimize, which will lead to better-performing software VMs; and because hardware is slower to catch up, software VMs will probably be better for a while.

wrong (3, Insightful)

m874t232 (973431) | more than 7 years ago | (#15897519)

Hardware virtualization may be slower right now, but both the hardware and the software supporting it are new. Give it a few iterations and it will be equal to software virtualization.

It may or may not be faster eventually, but that doesn't matter. What matters is that small changes in the hardware make it possible to stop having to depend on costly, proprietary, and complex software--like that sold by VMware.

Re:wrong (1)

Eideewt (603267) | more than 7 years ago | (#15897593)

Yes, let's move it to our costly, proprietary, and complex hardware instead!

Not to say that you're wrong or that hardware should be free-as-in-freedom, but the irony was too great to resist.

Not just the CPU (4, Interesting)

kripkenstein (913150) | more than 7 years ago | (#15897610)

What matters is that small changes in the hardware make it possible to stop having to depend on costly, proprietary, and complex software--like that sold by VMware.

I am 100% in favor of cheap and open solutions. But I don't agree that this will soon be the case for virtualization. VMware and the few other major vendors do a lot more than software virtualization of a CPU (which is all TFA was talking about). To have a complete virtualization solution, you also need to virtualize the rest of the hardware: storage, graphics, input/output, etc. In particular, graphics is a serious issue (attaining hardware acceleration in a virtual environment safely), which, last I heard, VMware was working hard on.

Furthermore, virtualization pairs well with software that can migrate VMs (based on load or failure), and so forth. So even if hardware CPU virtualization is to be desired - and I agree with you on that - it won't suddenly make virtualization as a whole a simple task.

Re:Not just the CPU (1)

TheRaven64 (641858) | more than 7 years ago | (#15897742)

In particular graphics is a serious issue (attaining hardware acceleration in a virtual environment safely), which from last I heard VMWare were working hard on.

Actually, the people who have made the most headway are Microsoft. The Vista driver model is designed with support for virtualisation in mind. This means that the OS has access to video driver commands for things like saving and restoring GPU state. As far as I know, other operating systems currently lack this; Linux has a problem even switching between virtual consoles: if you switch to one running X, it typically drops to a simple VESA mode, then reinitialises the driver and redraws the screen. This kind of 'solution' (read: hack) would not work for transparent virtualisation. The problem is that many existing GPUs (and most older graphics cards) were not designed with virtualisation in mind, and so don't even have functionality for saving and restoring their state; you need to do it at a higher level, such as by storing OpenGL state. At the moment, the best solution for 3D support in Xen is to give the GPU exclusively to the domain 0 machine and use GLX with XDMP; since X11 and OpenGL were designed with network transparency in mind, you can make use of this by having the VMs issue high-level OpenGL commands and having the host machine execute them on the hardware.

Of course, this only works for platforms that use OpenGL and X11 (i.e. not Windows or OS X). It also requires your domain 0 host to provide accelerated indirect OpenGL rendering. This is present in the nVidia blob drivers, and support for a few other cards is in the CVS -HEAD branch of x.org, but it's not stable yet.

Re:wrong (1)

ocbwilg (259828) | more than 7 years ago | (#15897979)

Hardware virtualization may be slower right now, but both the hardware and the software supporting it are new. Give it a few iterations and it will be equal to software virtualization.

It may or may not be faster eventually, but that doesn't matter. What matters is that small changes in the hardware make it possible to stop having to depend on costly, proprietary, and complex software--like that sold by VMware.


Maybe I'm crazy, but I just don't see that happening anytime soon in the mainstream. When they talk about "hardware-based" virtualization, they are really talking about "hardware assisted" virtualization, in that the CPU has some features built in to assist with accelerating virtualization. There still needs to be some sort of host OS or software (call it a hypervisor, mini-kernel, whatever) that provides access to the rest of the hardware (storage, memory, etc) and manages accesses by the guest OSes. What would it take to do all of that in hardware? My guess is new kinds of memory, storage, etc that also support virtualization, or a BIOS that actually manages it for you, but a BIOS is just software anyway.

I seriously doubt that we'll ever get to purely hardware virtualization (for heterogeneous operating systems), if for nothing other than the fact that there are so many potential issues with guest operating systems that the hypervisor/host OS needs to handle.

This is why hypervisors rule (1)

swbrown (584798) | more than 7 years ago | (#15897528)

Compare the roughly 2% impact of running an OS built to be virtualization-friendly, like Linux + Xen, to that of software/hardware solutions that virtualize unfriendly OSes. Massive difference. So it makes sense to migrate whatever services you're running on Windows to Linux before moving to a virtualized deployment, as you'll save a bundle.

Re:This is why hypervisors rule (2, Insightful)

rwhiffen (141401) | more than 7 years ago | (#15897555)

I don't see how that tracks. How is the 2% impact going to save me a bundle? Moving to Linux supposedly will save me money whether I virtualize or not; I don't see how its being virtualization-friendly improves things. Are you saying I'll spend less on hardware by switching to Linux? Migrating to Linux isn't free (man-hours wise), so the hardware savings had better be pretty damn substantial to offset it.

I should be sleeping.

Rich

Re:This is why hypervisors rule (0)

Anonymous Coward | more than 7 years ago | (#15897577)

The point is that if you want to host virtual servers, a virtualization-aware system like a patched copy of Linux will have 2% overhead, but an unpatched Linux or Windows will have 20% overhead. You're better off running Linux in this case.

Re:This is why hypervisors rule (1)

ArbitraryConstant (763964) | more than 7 years ago | (#15898055)

VMWare's style of virtualization, whether it works with hardware support or not, involves trapping privileged operations performed by the guest OS. Xen's style of virtualization tweaks the guest OS to pass on requests for privileged operations by more conventional means, and that is significantly faster.
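
A tiny C sketch of that difference, with invented function names rather than Xen's or VMware's real interfaces: under trap-and-emulate the VMM has to catch a fault and decode the privileged instruction, while a paravirtualized guest simply calls the hypervisor for the same service.

    #include <stdio.h>

    /* One piece of "privileged state" for the toy guest. */
    static int guest_interrupts_enabled = 1;

    /* Full virtualization: the guest executes the privileged instruction as-is,
       the CPU faults, and the VMM must save state, decode the instruction,
       emulate it, and resume the guest (the expensive path). */
    static void vmm_handle_fault(const char *insn)
    {
        /* ...save guest state, then decode... */
        if (insn[0] == 'c')              /* "cli" in our one-instruction decoder */
            guest_interrupts_enabled = 0;
        /* ...restore guest state and resume... */
    }

    /* Paravirtualization: the guest kernel is modified to ask the hypervisor
       directly, so there is no fault to take and no instruction to decode. */
    static void hypercall_disable_interrupts(void)
    {
        guest_interrupts_enabled = 0;
    }

    int main(void)
    {
        vmm_handle_fault("cli");               /* trap-and-emulate path */
        printf("after fault path: %d\n", guest_interrupts_enabled);

        guest_interrupts_enabled = 1;
        hypercall_disable_interrupts();        /* paravirtual path */
        printf("after hypercall path: %d\n", guest_interrupts_enabled);
        return 0;
    }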

Re:This is why hypervisors rule (1)

rwhiffen (141401) | more than 7 years ago | (#15898189)

Great, but per the parent, how does that save me bundles? I can see savings, being able to pile more VM's on the hardware, but just don't see how we get from a few extra VM's to bundles. Perhaps the OP was talking about the costs of VMWare ESX vs Xen.

VMWare's OS awareness of Windows has advantages. Being able to load one copy of a static DLL shared across VM's can save you bundles of physical ram...

Speed is only one factor (0)

Anonymous Coward | more than 7 years ago | (#15897553)

There's more to it than just speed: what the virtualization requires from the OS being virtualized, the security features achieved, the simplicity of the virtualization engine (simple code == statistically fewer bugs), and so on. You also have to keep in mind that this is just the FIRST generation of hardware virtualization (on present x86 platforms). It will develop, and a lot. VMware's virtualization software has been developed and polished for years, so the results are not surprising at all.

Use Paravirtualization (3, Insightful)

graf0z (464763) | more than 7 years ago | (#15897560)

Paravirtualization (running hypervisor-aware guest kernels, e.g. patched Linux on Xen) is faster than both binary translation and "full" virtualization. And you don't need CPUs with the VT extension.

g

Re:Use Paravirtualization (1)

interiot (50685) | more than 7 years ago | (#15897636)

The whole point of virtualization is so you can run your favorite OS most of the time, and only switch over to Windows when you want to run games, isn't it?

Well, I suppose if you choose to stay in the Windows world most of the time, the whole point of a VM is to try to keep malware off your computer... But either way, you're not getting a FLOSS paravirtualized Windows kernel any time soon.

Re: Use Paravirtualization (1)

graf0z (464763) | more than 7 years ago | (#15897661)

Eerrr ... tautologically true, yes: if there is no paravirtualized version of the OS you want to use, paravirtualization is not an option. But there are many scenarios where you are only interested in running lots of paravirtualizable unixish OSes, eg server farming.

Your windows desktop is not the whole point.

g

Re: Use Paravirtualization (2, Insightful)

interiot (50685) | more than 7 years ago | (#15897702)

It just seems like many people who try to move away from Windows want to at least have the option to use Windows once in a while... The Mac-moving-to-Intel thing was met with a lot of excitement because of this, a lot of Linux people seem to say this, and in a lot of companies employees must stay productive with specific document formats. Certainly Windows isn't the only point of virtualization, but it seems like a really big one, especially for desktop users.

Complain to Microsoft Loud and Clear. (0)

Anonymous Coward | more than 7 years ago | (#15897871)

They have the drivers for Xen for Windows XP and 2003. They just don't release them.

Chicken and the egg (0)

Anonymous Coward | more than 7 years ago | (#15897580)

Ultimately it's software's fault. Hardware is the bedrock without which the software is irrelevant. The biggest potential gains will always come from hardware, since software obtains its potential from hardware. The software gains may seem the most impressive, but they will always be based on hardware. For the end user, the biggest gains in the end will come from improving hardware, since this will determine what is even possible in software.

dual boot (1)

Bizzeh (851225) | more than 7 years ago | (#15897618)

If VMware/QEMU/Bochs/Virtual PC isn't good enough for the speed you need, just dual-boot the OSes; or if you want many servers on one machine, both IIS and Apache offer virtual hosts...

Re:dual boot (1)

buraianto (841292) | more than 7 years ago | (#15897960)

Dual booting is not the same as virtualization. Virtualization allows multiple operating systems, with their own sets of programs, to run concurrently, allowing for a better utilization of the hardware. These may be the same operating systems or different. Dual booting only allows one operating system to run at any one time. You don't get the benefits of physical resource sharing.

Re:dual boot (1)

Bizzeh (851225) | more than 7 years ago | (#15898098)

I know what dual booting and virtualisation are. What I'm saying is, in most cases, if VMware isn't good enough, just run the two or more operating systems separately.

Look to IBM (2, Informative)

dpilot (134227) | more than 7 years ago | (#15897665)

IBM has been shipping virtualization since before many of these newcomers were even born. What do you think the 'V' in MVS or VM stands for? I wonder how well IBM's expired patents compare to modern virtualization. Of course in this case it helps to own the hardware, instruction set, and operating system.

Re:Look to IBM (3, Interesting)

pe1chl (90186) | more than 7 years ago | (#15897699)

IBM's VM also started as a software product that had to cope with virtualisation problems in the hardware.
Just like what is happening now, they added specific support to the hardware to make VM perform better.
This all happened before the development of today's architectures, but in the early days of microcomputing, IBM had the position that Microsoft has today: they were the big company that had 90% of the market, and in the eyes of the newcomers all they did was by definition the wrong thing. So nobody would bother to look at 360 mainframes, VM and how it was done before designing their own processor.
(this would be similar to telling a Linux geek to look at how certain problems are solved in Windows... it is Windows, it is Microsoft, so it has to be the wrong solution)

Re:Look to IBM (1)

Bozdune (68800) | more than 7 years ago | (#15897824)

True. But actually IBM's experience is a pretty accurate analog to this thread.

VM370 was a dog. Why? Because they relied on hardware traps and software simulation of CCW's (channel command words), to run the host operating system "perfectly."

A hack around this, used by National CSS and other timesharing vendors (remember that CP/CMS was open source software and VM/370 was just one implementation of it), was to replace CCWs inside CMS with specific traps for OS services. The result was that National CSS could run 250+ users on VP/CSS (its version of CP/CMS) whereas IBM could manage only 70 on the same hardware.

Smart software beats smart hardware every time.

CMS/370 (1, Interesting)

Anonymous Coward | more than 7 years ago | (#15897887)

IIRC, CMS used DIAGNOSE (VM's version of a system call) to perform synchronous I/O. The whole point of CMS was to lean on the hypervisor as much as possible and avoid having to build a full-blown OS. That's also why CMS didn't use virtual memory: it avoided shadow-table maintenance by the hypervisor, since CMS knew its "real" memory was actually virtual memory being managed by the hypervisor.

You really want to look at the other guest OSes, like MVS, and at what VM did to manage their performance: things like the various microcode VM assists, and dedicating hardware to guests so that no virtual-to-real translation had to be performed.

I think that's a little innacurate (3, Insightful)

Sycraft-fu (314770) | more than 7 years ago | (#15897902)

It's not that people don't look to old mainframe solutions; they do. It's that what was feasible on those machines often wasn't feasible on commodity hardware until recently. There was no reason for chip makers to waste silicon on virtualization hardware for desktops until fairly recently; there just wasn't a big desktop virtualization market. Computers are finally powerful enough that it's worth doing.

It's no surprise that large, extremely expensive computers get technology before home computers do. Give me $20 million to build something with and I can make it do a lot. Give me $2000 and it's going to have to be scaled way back, even with economies of scale.

You see the same thing with 3D graphics. Most, perhaps even all, of the features that come to 3D cards were done on high-end visualization systems first. It's not that the 3D companies didn't think of them; it's that they couldn't do them. The original Voodoo card wasn't amazing because it did 3D (it was much more limited than other things on the market); it was amazing because it did it at a price you could afford for a home system. 3dfx would have loved to have a hardware T&L engine, AA features, procedural textures, and so on; there just wasn't the silicon budget for it. It's only with further development that this kind of thing has become feasible.

So I really doubt Intel skipped something like VT because they thought IBM was wrong on the 360; rather, I think they didn't do it because it wasn't feasible or marketable on desktop chips.

Re:Look to IBM (2, Interesting)

TheRaven64 (641858) | more than 7 years ago | (#15897749)

IBM contribute to Xen. I was at a talk last year by one of the IBM Xen guys. He made the point that IBM has a real advantage in virtualisation because, when they get stuck, they can pop along the hall to the grey-bearded mainframe guys and say 'hey, you remember this problem you had twenty years ago? How did you solve it?'

Missing the point (1, Interesting)

Anonymous Coward | more than 7 years ago | (#15897681)

Software virtualization means modifying the guest OS so it can run in a virtual machine, either ahead of time with OS- and version-specific patches, or with run-time translation. Both of these techniques can be problematic. It also requires companies like VMware to hire experts in the kernels of all the OSes they plan to support. Contrast that with IBM's mainframe VM, which didn't require much knowledge of its guest OS internals, just the hardware architecture.
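To make the "run-time translation" idea concrete, here is a minimal sketch in C (emphatically not VMware's actual engine) of what a binary translator does: copy harmless guest instructions into a translation cache unchanged, and replace privileged ones with calls into the monitor, so no trap is ever taken at run time. The two opcode values are real one-byte x86 encodings, but the function names and the unfixed call displacement are purely illustrative.

<ecode>
#include <stdint.h>
#include <string.h>

#define OPC_CLI 0xFA   /* privileged: clear interrupt flag */
#define OPC_STI 0xFB   /* privileged: set interrupt flag */

/* Monitor-side routines that emulate the privileged operations in
 * software (hypothetical names). */
void vmm_emulate_cli(void);
void vmm_emulate_sti(void);

/* Translate one block of guest code into a cache the host can run
 * directly.  A real translator decodes the full x86 instruction set;
 * this only recognises two one-byte opcodes, to show the idea of
 * "inexpensive software substitution" versus an expensive trap. */
size_t translate_block(const uint8_t *guest, size_t len, uint8_t *cache)
{
    size_t out = 0;
    for (size_t i = 0; i < len; i++) {
        if (guest[i] == OPC_CLI || guest[i] == OPC_STI) {
            /* Emit "CALL rel32" to the emulation routine instead of
             * letting the privileged instruction fault at run time.
             * The displacement would be patched once the handler's
             * address relative to the cache is known. */
            int32_t rel = 0;
            cache[out++] = 0xE8;
            memcpy(&cache[out], &rel, sizeof rel);
            out += sizeof rel;
        } else {
            cache[out++] = guest[i];   /* safe instruction: copy through */
        }
    }
    return out;
}
</ecode>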

seems like such a waste (0, Troll)

FudRucker (866063) | more than 7 years ago | (#15897763)

Seems like such a waste of resources. Why not just port the software over to the other OS/platform so it can run natively...

And then there's paravirtualization (2, Interesting)

Anonymous Coward | more than 7 years ago | (#15897882)

I don't doubt their numbers; they've been building virtualized systems very effectively for years.
I think any kind of "full virtualization" is going to be subject to these issues. If you want to see performance improvements, you should modify the guest OS.

VMware's BT approach is very effective, and their emulated hardware and BIOS are efficient, but that won't match the performance of a modified OS that KNOWS it's virtualized and cooperates with the hypervisor rather than getting 'faked out' by some emulation.
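As a rough illustration of the difference (a sketch only; the structure and field names below are invented and are not Xen's or VMware's real interfaces): a fully virtualized guest simply executes the privileged instruction and relies on the hypervisor to catch it, while a paravirtualized guest knows it is a guest and takes a cheap, cooperative path instead.

<ecode>
/* Fully virtualized guest: CLI is privileged, so executing it forces
 * an expensive exit into the hypervisor (or a hit in the binary
 * translator). */
static inline void disable_interrupts_fullvirt(void)
{
    __asm__ volatile ("cli");
}

/* Paravirtualized guest (illustrative only): the kernel has been
 * modified to know about the hypervisor, so "disabling interrupts"
 * is just setting a flag in a page shared with the hypervisor;
 * no trap, no exit. */
struct shared_info {
    volatile unsigned char event_delivery_masked;
};

extern struct shared_info *hyp_shared_page;   /* mapped at boot (hypothetical) */

static inline void disable_interrupts_paravirt(void)
{
    hyp_shared_page->event_delivery_masked = 1;
}
</ecode>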

AMD Engineer told me the same thing (1)

DrDitto (962751) | more than 7 years ago | (#15897889)

A friend of mine works at Intel, and he flat-out told me (several months ago) that Vanderpool/Pacifica would be slower than VMware's software-only approach for the first generation. However, this will change in a few years.

Re:AMD Engineer told me the same thing (1)

DrDitto (962751) | more than 7 years ago | (#15897894)

Yes, my subject line contradicts my post (oops!). I know both AMD and Intel guys; I believe it was the Intel guy who mentioned this.

VMWare get rid of all their sales guys? (1)

SIPVoIP (810852) | more than 7 years ago | (#15897988)

I am planning the network for a large VoIP provider and have been looking at Xen and VMware. I have tested our VoIP applications, such as BroadSoft, on Xen and ran into some big problems, such as a double page fault that kills the whole box. I have talked with VirtualIron; they have found the same issues with Xen and say they have patches in their implementation, but their beta is full and I can't test for 30 days. I have tried to get a VMware sales guy to call me, left 2 messages and emailed twice, with no response. Funny, since they want about 4X the cost of VirtualIron.

Re:VMWare get rid of all their sales guys? (0)

Anonymous Coward | more than 7 years ago | (#15898177)


It is almost like you ran a regular land line call through a VoIP provider

best platform anyway? (1)

Triode (127874) | more than 7 years ago | (#15898011)

Ok, I see all of this virtualization going on, but I keep thinking about a burning question... The x86 instruction set and architecture was invented a long time ago (Intel's 8086 dates to 1978, the 286 to 1982), and although it has been extended and improved by both AMD and Intel, one has to wonder whether it is really the right platform for virtualization at all.

I mean, OK, it is a slightly big deal to design a new CPU (and I know, I took all the Ph.D. courses on CPU design), but think about it. We are trying to make a nice old (vintage? classic?) CPU and instruction set good at virtualization. I think now is the time to step back and say "hey, we can do better. Let's get a bunch of good CPU designers and _thinkers_ (call Google?) to design an architecture that works well for virtualization, then port Linux to it." OK, we can invite Tanenbaum too and port Minix. Maybe call the Plan 9 folks as well.

At any rate, the point is that if we are really going to use virtualization, let's do it 100% and not half-assed like we always tend to.

Oh, wait, we should just call IBM. They have been virtualizing for years. Let's get them to design us a good high-speed CPU for that.

Re:best platform anyway? (0)

Anonymous Coward | more than 7 years ago | (#15898097)

During your PhD studies, did you take any economics classes that might give you grounds to ask whether the investment is worth the cost in this case?

Sorry to be snide, but I don't get whether you're saying we should scrap the entire investment in x86 to get... how much % improvement?

I mean, we already have more elegant solutions... but people are trying to virtualize x86.

Parallels on Mac OS? (2, Interesting)

akac (571059) | more than 7 years ago | (#15898080)

Well, OK. But it could also mean that VMware doesn't yet know how to properly build a hardware-virtualized VM.

Parallels on OS X switches between software and hardware virtualization, and with hardware virtualization it runs at about 97% of native speed all around (bear in mind that a virtual machine on current Yonah CPUs gets only one core). Software virtualization on Parallels is much slower: on par with running Windows Virtual PC under Windows XP on the same box (not Mac Virtual PC).
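For what it's worth, whether the hardware-assisted path is even available comes down to a CPUID check. A small sketch (GCC/Clang on an Intel CPU; not Parallels' or VMware's actual code) of how a monitor might test for VT-x before choosing a mode:

<ecode>
#include <stdio.h>
#include <cpuid.h>          /* GCC/Clang helper for the CPUID instruction */

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
        puts("CPUID leaf 1 not available");
        return 1;
    }

    /* CPUID leaf 1, ECX bit 5 reports Intel VT-x (VMX) support.  Note
     * that firmware can still disable the feature even when this bit
     * is set. */
    if (ecx & (1u << 5))
        puts("VT-x present: hardware-assisted virtualization is an option");
    else
        puts("No VT-x: fall back to software techniques (binary translation)");

    return 0;
}
</ecode>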