
Virtualizing Cuts Web App Performance 43%

kdawson posted more than 7 years ago | from the price-to-pay dept.

Operating Systems 223

czei writes "This just-released research report, Load Testing a Virtual Web Application, looks at the effects of virtualization on a typical ASP Web application, using VMWare on Linux to host a Windows OS and IIS web server. While virtualizing the server made it easier to manage, the number of users the virtualized Web app could handle dropped by 43%. The article also shows interesting graphs of how hyper-threading affected the performance of IIS." The report urges readers to take this research as a data point. No optimization was done on host or guest OS parameters.


223 comments


more like GAYSP (-1, Offtopic)

Anonymous Coward | more than 7 years ago | (#18526781)

amiright?

Re:more like GAYSP (-1, Offtopic)

Anonymous Coward | more than 7 years ago | (#18526805)

no, you aren't.

-AC

Virtualize this (3, Insightful)

Anonymous Coward | more than 7 years ago | (#18526789)

That is all very well, but we all KNOW apps slow down when we run them in a VM. What difference does it make to the average n00b who wants to watch funny videos [digg.com] and check their email? Anyone using computers for serious numbercrunching obviously won't virtualize anyway. No big deal.

Re:Virtualize this (4, Interesting)

Fordiman (689627) | more than 7 years ago | (#18526835)

I do like the idea of a variably sized beowulf cluster running a floating number of package (LAMP) servers. Get more clients? Add more VLAMPs. Things slowing down? Add more hardware.

You still take performance hits, but if you can scale your system by just adding cheap commodity systems, that works. Plug it in, boot it off a CD, and let the Cluster take control.

holy cow am I a nerd (5, Funny)

thegnu (557446) | more than 7 years ago | (#18527071)

I do like the idea of a variably sized beowulf cluster running a floating number of package (LAMP) servers. Get more clients? Add more VLAMPs. Things slowing down? Add more hardware.

I started getting aroused as I read your post. This is highly disturbing.

Re:holy cow am I a nerd (1)

Fordiman (689627) | more than 7 years ago | (#18527133)

A sister of my department here actually does this for its academic hosting. Though, it's a grid, not a cluster.

Re:holy cow am I a nerd (0, Funny)

Anonymous Coward | more than 7 years ago | (#18527241)

Is your department's sister hot?

Re:Virtualize this (1)

eno2001 (527078) | more than 7 years ago | (#18527325)

Not to mention the fact that paravirtualization as well as hardware assisted virtualization like Xen offers (and later Longhorn) really cut the performance issues WAY the hell down. With a system like Xen you get very close to bare metal speeds since there is no such thing as a "host OS" to get in the way.

IIS can't be paravirtualized (2, Informative)

tepples (727027) | more than 7 years ago | (#18527515)

Not to mention the fact that paravirtualization as well as hardware assisted virtualization like Xen offers (and later Longhorn) really cut the performance issues WAY the hell down.
Paravirtualization also requires a free software OS kernel, and IIS-only web applications do not yet run on any free software kernel. (ReactOS is nowhere near mature enough.) Any virtualization also requires more OS licenses, and higher-class, more expensive OS licenses at that. Or do you claim that all web app developers should drop IIS-only frameworks immediately, and all enterprises that rely on IIS-only web applications should drop their mission-critical IIS-only web applications immediately?

Re:IIS can't be paravirtualized (0, Troll)

BLQWME (791611) | more than 7 years ago | (#18527613)

For the love of humanity, yes...

Re:IIS can't be paravirtualized (1)

rolfc (842110) | more than 7 years ago | (#18527877)

Nobody says they should drop them immediately; we are just saying that free software rules and closed source sucks. Then everyone has to draw their own conclusions from that.
I do not understand people who make mission-critical IIS-only web applications; isn't that just stupid? Smart people make sure there is some emergency exit.

Re:Virtualize this (1)

neerolyte (878983) | more than 7 years ago | (#18527117)

The DET (Department of Education and Training in the ACT, Australia) struck a deal with MS to let them run a number of 2003 licences on one box for the cost of one (I have no link; I was working in a school at the time and was actually talking to the people managing this, not just reading it off the web). Their plan was to virtualize in just this way... you want DNS? OK, that's one VM. What about AD? OK, another... etc. I think 7 was about the magic number. I don't know how far they got with this (I've since quit). It may be obvious to the average n00b, but unfortunately the average n00b is smarter than many people working for government organisations.

Re:Virtualize this (1)

certain death (947081) | more than 7 years ago | (#18527447)

Yeah, that would explain why IBM does it on Big Iron... no one with a mainframe is doing any kind of serious work.

Well, (4, Insightful)

Fordiman (689627) | more than 7 years ago | (#18526807)

Duh.

Seriously. I don't know who gave anyone the impression that virtualization was a performance booster. Management improver? Sure. Stability insurance? Why not? But if you don't get that virtualizing your servers imposes a bit of overhead, then you're probably not paying attention.

I especially love the idea that running different types of server virtualized on the same machine is a good idea; the idea of virtualization of multiple servers is to distribute the load. If you have, say, ftpd, httpd and mysqld running as their own virtualized systems, they will all get hit *simultaneously*.

Again. Duh.

Re:Well, (-1, Offtopic)

Fordiman (689627) | more than 7 years ago | (#18527067)

Wow. A "Well, DUH" post I wrote got modded insightful. How's that for irony.

Guys, the button you're looking for is 'Redundant'.

Re:Well, (5, Insightful)

Mr. Underbridge (666784) | more than 7 years ago | (#18527157)

Seriously. I don't know who gave anyone the impression that virtualization was a performance booster. Management improver? Sure. Stability insurance? Why not? But if you don't get that virtualizing your servers imposes a bit of overhead, then you're probably not paying attention.

Well, I think the point was that he attached an actual number to the size of the performance hit, which is relevant. That's called research: quantifying and proving that which seems 'obvious'.

Re:Well, (2, Interesting)

Fordiman (689627) | more than 7 years ago | (#18527221)

Well put. But I do know a number of people in the industry who will be shocked by this, and they were who I was referencing.

But really. If you've got the money for the extra hardware to maintain performance, I say go for the virtualization, if only to make your IT guys' lives easier (happy IT is useful IT).

Re:Well, (5, Insightful)

hey! (33014) | more than 7 years ago | (#18527311)

Well, it's not a surprise, but it's probably worth quantifying.

Here's a question: what is more available: hardware or skilled system administrators? Obviously hardware.

Here's a common scenario: you've set up a system to provide some useful package of services. How do you let other people duplicate your success? (1) Tell them what hardware they need and (2) have them install and configure the software on their hardware. Guess which item involves the most cost in the long run?

The hardware is easy; the greatest barrier and cost is the process of installing and configuring the software. That's one place a virtual machine is worth considering in production systems. You aren't going to use something like VMWare in one-of-a-kind production systems; you're going to use it when you need to clone the same setup over and over again. This is very attractive for application vendors, who spend huge amounts of support effort on installation and on tracking down compatibility conflicts.

Another application would be an IT department that has to support dozens of more or less identical servers, especially if they are frequently called upon to set up new ones. If I had a choice, I'd use Linux virtualization on a midrange or mainframe, but if those servers must be Windows servers, then I'd be looking at some kind of cluster with a SAN. This is not really my area of expertise, but we're talking high-end boxen for x86; if the typical server doesn't need 100% of a box, then I have three choices: waste server capacity (expensive), force groups to share servers (awkward and inflexible; what if I have to separate two groups joined at the hip?), or virtualize.

Naturally if you are virtualizing production servers, you need to scale your hardware recommendation up to account for VM overhead.

What would be very interesting is a study of the bottlenecks. If you are considering a system with certain characteristics (processor/processors, memory, storage/raid etc) and you have X dollars, where is that best spent?

Re:Well, (0)

Anonymous Coward | more than 7 years ago | (#18527361)

I completely agree with the Duh. But I just wanted to add that I have (even very recently) had to talk several of my customers out of doing this on PRODUCTION-level high-volume database, application, and web servers! I even had one customer insisting that we install all of these machines (a total of 6 VMs, I think) onto a single host system using VMWare. They had VMWare reps come in and give them a presentation on how great this would work for them and how performance would not be degraded in any way.

Fortunately, Google Search came to the rescue and I was able to produce benchmarks and reviews such as the one that triggered this discussion. These studies seem like a complete waste of time to those of us that know better, but they can also come in handy...

I already knew that (0)

Anonymous Coward | more than 7 years ago | (#18526815)

Isn't it obvious? I already knew that performance would drop when a Windows OS is hosted on Linux. It isn't VMware's fault.

Only on Windows (-1, Offtopic)

Anonymous Coward | more than 7 years ago | (#18526821)

First post at last :)

Bogus Test (5, Informative)

Anonymous Coward | more than 7 years ago | (#18526831)

Who uses VMWare Server in a production environment anyway? We run all of our Web services, Exchange servers and SQL databases in VMWare's Virtual Infrastructure 3. VMWare Player and Server are only meant for lab environments and low-load applications. VMWare even says as much on their website. Either this is just FUD or the author is an idiot. In other news, water is wet.

Re:Bogus Test (1)

EraserMouseMan (847479) | more than 7 years ago | (#18527061)

So does VMWare's Virtual Infrastructure 3 perform much better? Or is it just more manageable, setup- and config-wise? Sorry, I'm fooling around with VMWare Server and am a noob on the issue.

Re:Bogus Test (1)

T-Ranger (10520) | more than 7 years ago | (#18527285)

"It must"... VMWare ESX runs on the bare hardware, i.e. no "host" OS. GSX ("Server" now) runs on top of a host OS. Even if it is a very thin, low-overhead layer, the host OS is in the way...

Re:Bogus Test (1)

tji (74570) | more than 7 years ago | (#18527317)

It uses a thin "hypervisor" layer as the host OS, rather than a general-purpose OS. The hypervisor is built to host virtual clients, so I would expect it to provide better performance than doing it on Linux. But in any virtualization environment there will surely be some overhead, and performance will be lower than the client OS on raw hardware.

Re:Bogus Test (1)

nharmon (97591) | more than 7 years ago | (#18527349)

Yes it does. With ESX (aka Virtual Infrastructure), the hypervisor (the thing the virtual machines run on) is the base OS, rather than an application running on a host OS as with VMware Server or Workstation/Player. As a result the overhead is extremely low. But I'm not sure that is really relevant. Virtualization is appropriate when your application is not using your iron's full potential. If you virtualize an application that is already maxing out processor/memory, you are not following the published best practices.

Re:Bogus Test (5, Insightful)

sammy baby (14909) | more than 7 years ago | (#18527451)

Yes: it performs much, much better.

VI3 is actually a suite of products. At the heart is VMware ESX Server [vmware.com] , which is actually an operating system in its own right: it runs "on the metal," without having Windows or Linux installed already on the system. It also has a service console operating system which looks suspiciously like a *NIX style operating system, so you can SSH directly to the system, cd into your /vmfs directory and, say, scp disk files over the network. If you wanted to.

However, as a pretty damn safe rule of thumb, no system is going to run faster on equivalent hardware after being virtualized. In a prior job where I was often asked to provide development/test systems, I got phone calls from a lot of people who were bitten hard by the virtualization bug. Whenever someone brought up any issue having to do with infrastructure, no matter how odd or off the wall, they wanted to push virtualization as a solution. I had to explain to them that if your problem is that a web server is slow, the answer isn't to install VMWare server on it, set up two host operating systems, and say, "There! Now I have two web servers." You'd be surprised how pervasive that sort of thinking is, even among people who should patently know better.

Another useful guideline: various types of services are impacted differently by being virtualized. Generally, the best candidates for virtualization are ones that spend a lot of time idle. This is actually more common than you might think - people need a server set up for something, can't put it on a pre-existing system for security/compatibility reasons, so they go out and buy a new system which is ten times more powerful than they need. You can put a lot of these kinds of systems on a single, reasonably powerful ESX server. On the other hand, systems that heavily tax available hardware, especially I/O, are usually much harder to deal with.

Re:Bogus Test (1)

yankeessuck (644423) | more than 7 years ago | (#18527065)

The funny thing is that the company I'm working for is in the process of migrating our production internal web servers to VMWare. Our infrastructure director spoke at a meeting of our development group and swore that there's no performance impact. He actually said that with a straight face, but we all knew he was full of it. I thought about it a bit and decided to let him virtualize my servers, because it'll be his job on the line when it tanks, not mine.

Re:Bogus Test (5, Informative)

Sobrique (543255) | more than 7 years ago | (#18527161)

Actually, at the company I worked for 6 months back, one of the projects I was involved in was 'VMWare': production stuff running on the ESX servers (which became 'Virtual Infrastructure') in our datacentre, as a cost-effective, scalable environment. Yes, we weren't getting 'uber performance', but then again, we were running 150 or so VMs on a 6-server VMWare farm.

One of the other things we prototyped and deployed was 'site services packages': get GSX (now VMWare Server), stick it on a pair of 2U servers, and attach a storage array to both of them. Then create your 'template' fileserver, DHCP server, print server, proxy, that kind of thing, and deploy them to this package. It worked very well indeed. You get a whole new order of magnitude of stability (although to be fair that's in part because we threw away the crappy workstations that were doing the 'low intensity' stuff), and it was extremely manageable and trivially replaceable in the event of a hardware failure.

Performance? No, VMWare isn't that great on performance. Whilst it's not bad in an ideal situation, fundamentally what you are doing is introducing an overhead on your system, and probably contention too. But it's really good at efficient resource utilisation, easy manageability and maintainability.

As an experienced sysadmin, my reaction is screw performance. Let's start with reliable and scalable, and then performance just naturally follows, as does a really high grade service.

Proactive laziness is a fundamental of systems admin. Your job is essentially to put yourself out of a job, or more specifically, to free up your time to play with toys. The best way to do this is to build something stable, well documented and easily maintainable. Then your day consists of interesting stuff, punctuated by the odd RTFM when something doesn't work quite right.

Re:Bogus Test (1)

morgan_greywolf (835522) | more than 7 years ago | (#18527199)

Actually, I've worked for a place that uses VMWare on some of its production servers. They spend really ridiculous amounts of money on a really big server with a bunch of CPUs (one has at least 96), and then use VMWare ESX Server to run multiple virtual servers on the same box. It's actually a good approach, since ESX server uses hypervisor virtualization, which gives you much lower overhead than traditional virtualization while giving finer-grained control over the resources each virtual server gets.

It's actually not a bad approach, and even works well for servers with a lot of load. I'd like to see *that* technology tested rather than plain-ol' VMWare Server.

Re:Bogus Test (2, Insightful)

afidel (530433) | more than 7 years ago | (#18527407)

Here's [google.com] a little spreadsheet I created to do a cost/benefit analysis for Vmware ESX. There are some assumptions built in, and it's not yet a full ROI calculator, but it gets most of the big costs. Cell A1 is the number of our "standard" systems to be compared (4GB dual cpu 2003 machines). The DL580 is 4xXeon 7120 with 32GB of ram, local RAID1 on 15k disks, dual HBA's and a dual port addon NIC. The DL585 is 2xOpteron 8220HE with 32 or 64GB of ram (the 580 with 64GB was more expensive than buying two with 32GB!) and the same equipment. The 360 is our standard build currently, dual 5110's with 4GB ram and local RAID1 and an HBA.

The interesting thing is the breakeven point for VMWare is only 12 servers, way, way below what you can put on two of those boxes. VMotion is the killer feature for me, so fewer than two host servers would be pointless in my situation.
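The spreadsheet itself isn't reproduced here, but the breakeven arithmetic it describes is easy to sketch. Every price in this sketch is a hypothetical placeholder, not a figure from the actual spreadsheet:

```python
# Hypothetical cost model -- all prices below are made-up placeholders.
def breakeven_servers(standalone_cost, host_cost, esx_license, guests_per_host):
    """Smallest number of standard servers at which one consolidated
    host (hardware + ESX license) becomes cheaper than buying them
    individually, or None if it never does within one host's capacity."""
    consolidated = host_cost + esx_license
    n = 1
    while n * standalone_cost < consolidated:
        n += 1
        if n > guests_per_host:
            return None  # never breaks even before the host is full
    return n

# With these placeholder numbers the breakeven lands at 12 guests.
breakeven_servers(4000, 30000, 15000, 40)
```

A real ROI calculator would also fold in power, rack space, SAN ports and support contracts, which this sketch ignores.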

Xen (1)

Verte (1053342) | more than 7 years ago | (#18527251)

Or better yet, if you need better performance under load, why not Xen? I expected to see something in there on paravirtualisation, but there was nothing.

Not a trusted source (0)

Zebra_X (13249) | more than 7 years ago | (#18526837)

"Virtualized machine (hyperthreading disabled)
Disabling hyperthreading had negligible effect on the virtualized machine. In our test, the capacity increased a tiny amount to 403 simultaneous users. The difference between this result and the virtualized machine with hyperthreading enabled, however, is smaller than the margin of error for these tests -- more testing would be required before concluding the performance was better in the virtualized machine with hyperthreading disabled. "

The hyperthreaded capacity was 350. So my question is: how is a 15% gain in clients served a "tiny amount"?

Re:Not a trusted source (3, Informative)

dagenum (580191) | more than 7 years ago | (#18526947)

The hyperthreaded capacity was actually 390, so a 3% gain.

Re:Not a trusted source (1)

maxwell demon (590494) | more than 7 years ago | (#18526993)

Since it's smaller than their error margin, not calling it a tiny amount would mean admitting that there were large measurement errors. But in any case, being smaller than the error margin means that comparing the numbers is completely meaningless; in reality it might not have had any effect, or it might even have been a loss instead of a gain.
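A quick sketch of that reasoning. The 390 and 403 figures are from the report; the margin-of-error value here is an assumed placeholder, since the report doesn't quote one:

```python
def significant_change(before, after, margin_of_error):
    """A measured difference smaller than the margin of error is noise:
    it could just as easily be no effect at all, or even a loss."""
    return abs(after - before) > margin_of_error

# ~3.3% apparent "gain" from disabling hyper-threading (403 vs 390 users)
delta_pct = (403 - 390) / 390 * 100

# With an assumed +/-20-user margin, the difference means nothing.
significant_change(390, 403, 20)  # False
```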

Re:Not a trusted source (1)

basneder (591273) | more than 7 years ago | (#18526995)

The hyperthreaded capacity was 350

Only it wasn't; it was 390.

Re:Not a trusted source (1)

neongrau (1032968) | more than 7 years ago | (#18527021)

it was the font size that was so tiny you couldn't even read the numbers correctly.

not your fault! ;)

Re:Not a trusted source (1)

Zebra_X (13249) | more than 7 years ago | (#18527079)

Mod me down!

It's a little early here and my vision was a bit blurry; it is 390, not 350!

Maybe a neutral negative mod is needed (0)

Anonymous Coward | more than 7 years ago | (#18527579)

In the same way as +1 Funny doesn't change your karma, maybe a -1 Retracted or -1 Incorrect that doesn't knacker your karma is needed too...

Re:Maybe a neutral negative mod is needed (0)

Anonymous Coward | more than 7 years ago | (#18527635)

I'd like the retracted, if only you could mod your own posts down at any time. The -1 incorrect should hit karma.

This is VMware Server and not ESX Server (5, Informative)

Fuyu (107589) | more than 7 years ago | (#18526845)

They performed the test on VMware Server not VMware ESX Server which is what most enterprises will use. VMware ESX Server runs on "bare metal", so it does not have the overhead of the host operating system.

Re:This is VMware Server and not ESX Server (1, Informative)

dc29A (636871) | more than 7 years ago | (#18526927)

They performed the test on VMware Server not VMware ESX Server which is what most enterprises will use. VMware ESX Server runs on "bare metal", so it does not have the overhead of the host operating system.

Doesn't VMWare ESX run on some modified Red Hat version?

Also, we run ESX in our production environment. When we stress-tested a web application running on IIS with ASP/VB, the ESX machine couldn't give us more than 10 transactions per second (there was a single VM running on ESX). ESX was crawling.

The same hardware running Windows 2003 natively gave us an easy 100+ without any problems. It seems that the overhead of ESX combined with a huge number of context switches is what kills its performance. For non-web applications like file servers, administrator consoles and whatnot, ESX is a beauty and a great money saver.

For web applications, I would avoid ESX like the black plague.

Re:This is VMware Server and not ESX Server (2, Informative)

Fuyu (107589) | more than 7 years ago | (#18526997)

Yes VMware ESX Server runs a modified version of Red Hat Linux.

According to Wikipedia [wikipedia.org] , "VMware ESX Server uses a stripped-down proprietary kernel (derived from work done on Stanford University's SimOS [stanford.edu] ) that replaces the Linux kernel after hardware initialization. The Service Console (also known as "COS" or as "vmnix") for ESX Server 2.x derives from a modified version of Red Hat Linux 7.2. (The Service Console for ESX Server 3.x derived from a modified version of Red Hat Enterprise Linux 3.) In general, this Service Console acts as a boot-loader for the vmkernel and provides management interfaces (CLI, webpage MUI, Remote Console). This VMware ESX hypervisor virtualization approach provides lower overhead and better control and granularity for allocating resources (CPU-time, disk-bandwidth, network-bandwidth, memory-utilization) to virtual machines. It also increases security, thus positioning VMware ESX as an enterprise-grade product."

Re:This is VMware Server and not ESX Server (1)

Sobrique (543255) | more than 7 years ago | (#18527193)

I've seen the same thing: it just doesn't like context switches. IO-intensive stuff tends to start hurting too. But then it's not like I only have one tool in my toolkit :). ESX is a tool for a job, and IMO very good at it. It's not a magic bullet that lets you do anything, though.

Re:This is VMware Server and not ESX Server (1)

Anonymous Coward | more than 7 years ago | (#18526987)

VMWare ESX runs on top of a modified Linux kernel with some VMWare-specific modules and drivers for things like VMFS. This stripped-down Linux does have much lower overhead than running something like VMWare Server on top of a bog-standard RedHat ES installation, but it isn't "bare metal".

Re:This is VMware Server and not ESX Server (3, Informative)

Anonymous Coward | more than 7 years ago | (#18527525)

ESX Server still gives you a base 40% performance hit. I run a ~600 VM farm under VI3, and our performance on Apache fell from 15000 requests/s (mostly static content) to 5000. That was during a load test with one single virtual machine running on the blade. The same load test using IIS went from 13000 to 9000. Also a huge performance hit, although not quite as bad as on Linux. And before anyone says anything, I'm a Linux tech and I was somewhat depressed about the results, to our Windows techs' great joy.
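For what it's worth, the parent's own numbers imply quite different hits on the two platforms, bracketing the claimed "base 40%"; a quick sanity check:

```python
def pct_drop(native, virtualized):
    """Percentage of native throughput lost after virtualization."""
    return (native - virtualized) / native * 100

apache_drop = pct_drop(15000, 5000)  # ~66.7%, well above a "base 40%"
iis_drop = pct_drop(13000, 9000)     # ~30.8%, below it
```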

VMware Server 1.0.1??? (0, Redundant)

slapyslapslap (995769) | more than 7 years ago | (#18526869)

Come on. VMware has come a LONG way. If they are not using at least VMware ESX 3, then this is not a valid test.

Re:VMware Server 1.0.1??? (2, Informative)

Fuyu (107589) | more than 7 years ago | (#18526921)

VMware Server 1.0.1 is their free virtualization product that runs on a host OS (linux or Windows). Most enterprises will use VMware ESX Server 3 with the VMware Virtual Infrastructure 3 series of products as it runs on "bare metal" and does not have the overhead of the host OS.

This has been my experience too (3, Interesting)

SCHecklerX (229973) | more than 7 years ago | (#18526883)

Network performance for Linux under VMWare is pretty bad. An interesting visual confirmation is to use an ssh shell and watch the lag. That may just be the Broadcom chips in the servers the company I was working for used, though. Guest OSes are fine for some low-traffic stuff that only a few people will be using, and are definitely the way to go in the test lab; but I wouldn't use this configuration as a company's primary reverse proxy or mail solution.

That said, I use a Windows VMware session under Linux for those times I have no choice, and it works just fine network-wise as a workstation.

Re:This has been my experience too (1, Informative)

Anonymous Coward | more than 7 years ago | (#18526951)

If eth0 is shared between the host and guest OS and the host OS is Linux, try disabling TCP segmentation offload:

# ethtool -K eth0 tso off

Re:This has been my experience too (1)

div_2n (525075) | more than 7 years ago | (#18527059)

I don't see this at all running 13 VMs. But then again, I've got 6 Gigabit NICs load-balanced on a Gigabit backplane, with the VMs all running on an independent SAS array on a quad-processor hyperthreaded box with 32 GB of RAM. But perhaps your box has equally good specs, I don't know.

Re:This has been my experience too (2, Funny)

Thundersnatch (671481) | more than 7 years ago | (#18527291)

But then again, I've got 6 Gigabit NICS load balanced on a Gigabit backplane with the VMs all running on an independent SAS array on a quad processor hyperthreaded box with 32 GB of RAM. But perhaps your box has equally as good specs, I don't know.

Oh yeah? Well my Johnson is longer than yours, and my son can beat up your son.

Re:This has been my experience too (1)

leuk_he (194174) | more than 7 years ago | (#18527197)

Linux under VMWare's network performance is pretty bad.

Do you want them to test Vista performance?

Re:This has been my experience too (1)

phlegmgem (633566) | more than 7 years ago | (#18527667)

...and is definitely the way to go in the test lab... If you are trying to tweak performance, having shared resources can really confuse matters. While trying to optimize a SQL statement on a shared host, I've shot myself in the foot thinking that I was making things better. In reality, performance suffered more from unrelated system load than from my query strategy. If you have control over all the apps that are loading up your physical server, then it shouldn't be a problem. If you're on a publicly shared VPS, then be wary.

Is this news to anyone? (0)

Anonymous Coward | more than 7 years ago | (#18526887)

And with quad-core processors already here, do we care? There are major advantages in virtualizing a web application server, like being able to copy your disk image from staging to $n production servers for load balancing.

Sounds about right (2, Informative)

Anonymous Coward | more than 7 years ago | (#18526889)

My first attempt at virtualization was last September with VMWare Server. During testing everything seemed fine. When everything was using it, performance was awful. Everything crawled. I ended up doing an all-nighter to move everything back to a regular server. Note, I wasn't overloading things. There was only one VM on the host. The memory was fixed, not paged to a disk like it is by default. The hard drive was preallocated. My intention for virtualization was to make things easier to manage.

That's when I started experimenting with Xen. This time I put the test under a very high load, and it seemed to handle everything well. I deployed it in October and so far there hasn't been a single performance issue.

I'm now totally addicted to Xen. I create VMs all the time, and have split up services into different VMs (e.g., when cups crashes it no longer takes out the copy of samba that handles logins; damn, I hate cups). So far, no performance issues at all.
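A minimal sketch of what one of those single-service guest definitions might look like in the classic Xen 3.x config-file format. Every name, path and size here is hypothetical:

```
# /etc/xen/cups.cfg -- hypothetical paravirtualized domU running only CUPS,
# so a printing crash can't take down the Samba login server next door.
name   = "cups"
memory = 128
kernel = "/boot/vmlinuz-2.6-xen"
disk   = ['phy:/dev/vg0/cups-root,xvda1,w']
vif    = ['bridge=xenbr0']
root   = "/dev/xvda1 ro"
```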

single data point is correct (2, Insightful)

Visaris (553352) | more than 7 years ago | (#18526923)

Dell Poweredge SC1420 with dual Xeon 2.8GHz processors

While I can't seem to find all the information on the SC1420, it appears that this product uses processors from the Prescott generation of Intel CPUs. Some chips from this group support "Vanderpool", Intel's hardware virtualization solution, but not all do. The presence or absence of this feature could greatly impact the performance penalty of a virtualized computing environment. Further, Intel's new Core 2 based CPUs feature a hardware virtualization implementation which may have vastly different performance characteristics. AMD's K8 family supports hardware virtualization as well. I'm excited about their new line of CPUs based on the K10 (Barcelona) core, which features "Nested Page Tables", supposed to greatly reduce overhead by doing memory translations in hardware instead of in software in the hypervisor.

All I'm really trying to say is that this article really is only a single data point. I wouldn't let their results influence your overall view of virtualization in any way...

Re:single data point is correct (1)

crunchy_one (1047426) | more than 7 years ago | (#18527029)

Absolutely correct. The presence of virtualization hardware, specifically how well it handles address translation, is key to virtual machine performance. Intel and AMD have just begun on this path, so I would not expect to see near-native performance out of their virtualization hardware for at least the next two or three iterations. IBM followed this path with interpretive execution in the 80's and 90's. It took several iterations of the hardware/software combination before interpretive execution and VM/XA delivered near-native performance.

Re:single data point is correct (1)

crunchy_one (1047426) | more than 7 years ago | (#18527183)

While I'm blithering away on this topic, I'd like to point out that the x86 architecture makes no attempt to specify how I/O devices access memory. This is a huge open problem for virtualization performance and security. To be secure a virtual machine monitor must simulate all I/O; otherwise, an attacker in one virtual machine can access any memory belonging to another virtual machine through I/O direct memory access. Ignoring the security issue, the lack of a uniform architecture for I/O memory access makes virtualization difficult, leading to sub-optimal solutions such as having all I/O performed by one authorized virtual machine.

Re:single data point is correct (4, Informative)

TheRaven64 (641858) | more than 7 years ago | (#18527247)

The biggest overhead from most forms of virtualisation is from emulated devices. If you have loads of money, you can give it to IBM and get some hardware with virtualisation-aware network and block device controllers. Then you get good performance. Alternatively, you can use paravirtualised device drivers. Xen supports this by default, and I think KVM does now for networks. Not sure about VMWare.

With paravirtualised devices, or devices that are virtualisation-aware, a VM can be within 10% of the performance of a real machine quite easily. Without them, I'm surprised they even got to 57% of native performance for web applications.

Re:single data point is correct (2, Informative)

Natales (182136) | more than 7 years ago | (#18527601)

VMware's vmxnet driver is paravirtualized and it does provide better performance than the traditional pcnet32 virtual device driver, which operates entirely in software to maintain compatibility with other OSs.

Regarding paravirtualization, it's already known that the new VMware Workstation 6 (currently in beta), and presumably the next version of VMware Server, will support VMware's version of paravirtualization, called VMI, which was officially accepted into the stock Linux kernel starting with 2.6.21. This may help boost the performance of Linux-based VMs significantly, and unlike the Xen version, it will boot a single kernel image regardless of the underlying physical or virtual hardware platform.

Pointless test? (3, Insightful)

geoff lane (93738) | more than 7 years ago | (#18526953)

Come on! You run virtualised web servers because 99.9% of all web servers are idle at any given time. So you put 100 on a server. The customer doesn't see any worse performance with their 3 hits a week page and the ISP makes more money/server.

Re:Pointless test? (1)

Software (179033) | more than 7 years ago | (#18527163)

Did you know that one installation of Apache can serve multiple web sites [apache.org] ? IIS can do the same. Using 100 guest OSes running on a server to support 100 web sites is insane.
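For what it's worth, the Apache side of that takes only a couple of stanzas per site. A minimal sketch (Apache 2.2-era name-based vhost syntax; the hostnames and paths are made-up placeholders):

```apache
# Minimal name-based virtual hosting: one Apache instance, many sites.
# ServerName and DocumentRoot values below are illustrative only.
NameVirtualHost *:80

<VirtualHost *:80>
    ServerName www.site-one.example
    DocumentRoot /var/www/site-one
</VirtualHost>

<VirtualHost *:80>
    ServerName www.site-two.example
    DocumentRoot /var/www/site-two
</VirtualHost>
```

IIS does the equivalent with host headers on a shared IP binding.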

Re:Pointless test? (1)

TheRaven64 (641858) | more than 7 years ago | (#18527299)

Virtualisation gives you one major advantage over this: isolation. I have a single instance of Lighttpd serving a few web sites, but if I chose to be malicious I could easily put something in one site that would seriously degrade the others. If each site had a separate virtual server, this would not be possible. Each site could be owned by a different person or organisation, and completely isolated from the others.

Re:Pointless test? (0)

Anonymous Coward | more than 7 years ago | (#18527329)

Did you know that one installation of Apache can serve multiple web sites? IIS can do the same. Using 100 guest OSes running on a server to support 100 web sites is insane.

Unless the customer really, really wants to have their own OS image with complete control of everything including root access.

Re:Pointless test? (1)

Bert64 (520050) | more than 7 years ago | (#18527347)

Some customers don't like the idea of doing that...
Also, Apache normally runs all the sites as the same user, which is terrible from a security perspective. There are alternatives here, but all have their downsides.

Re:Pointless test? (1)

fatphil (181876) | more than 7 years ago | (#18527637)

In Apache, just do:
        AssignUserID user-id group-id
in each virtual host definition.

It really couldn't be simpler.

Re:Pointless test? (3, Informative)

GiMP (10923) | more than 7 years ago | (#18527697)

AssignUserId only works with the perchild MPM, which has the following caveat: "This module is not functional. Development of this module is not complete and is not currently active. Do not use perchild unless you are a programmer willing to help fix it."

Thus, AssignUserId should NOT be used. SuExec can be used, of course, but that has its own limitations.

Personally, I give users their own Apache processes on their own port (>1024) and use a reverse proxy. I make a living on it.
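A sketch of what that front-end might look like (port and hostname invented for illustration; requires mod_proxy and mod_proxy_http to be loaded):

```apache
# Public-facing vhost forwarding to one customer's private Apache
# listening on a high port on loopback. Values are examples only.
<VirtualHost *:80>
    ServerName customer1.example.com
    ProxyPass        / http://127.0.0.1:8081/
    ProxyPassReverse / http://127.0.0.1:8081/
</VirtualHost>
```

The per-customer instance binds only to 127.0.0.1 and runs under that customer's own uid, so a runaway site can be restarted or killed without touching anyone else's.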

Re:Pointless test? (3, Informative)

LurkerXXX (667952) | more than 7 years ago | (#18527395)

No it's not insane. Lots of customers want full root access on their systems so they can install whatever they want (different database or other servers, or even alternate OS's). Virtualization is the only way to go for that.

Re:Pointless test? (1)

jafiwam (310805) | more than 7 years ago | (#18527403)

True for "mom and pop" type and "I need a web site for my dog or clan" sites.

Once you start in the market of "32 year old hotshot ASP/Perl programmer guy who never ran his own server" territory (medium to small businesses have these all over) who has no oversight from his management team... you tend to get people who accidentally kill the entire OS by doing stupid shit in their web sites. "Let's just build an app that sends email every time a file gets viewed! Then I can build my log in Excel and see all my hits real time!" (Never mind that there are logs available to download in real time.) Of course, this tool doesn't realize that that crappy DreamWeaver menu he built contains 125 images, which also count as hits. Down goes the server, and down goes the mail server (until the web server is stopped).

People who host on other people's servers have no respect for their integrity, even if they know how. Like a public bathroom: who cares if the toilet is plugged, just shit on top and walk away.

If you want to host with those people and not go out of business, you gotta virtualize to protect the rest of your customers from the overblown wannabe leet geeks: the "knows enough to be dangerous" crowd. Either that, or you need a TOS so strict they end up going elsewhere (or self-hosting).

So, yeah, IIS and Apache and whatever can host many web sites with one instance. Sometimes though, you can't realistically do that.

Re:Pointless test? (1)

petermgreen (876956) | more than 7 years ago | (#18527423)

Did you know that one installation of Apache can serve multiple web sites? IIS can do the same.
indeed it can

now add on the fact that you need mail for each domain stored in a different place (generally people want e-mail on their website's domain).

now add on the fact that you have to be very careful to stop active content doing nasty things to other sites.

then you have the issue that not everything about Apache can be controlled through .htaccess files

it gets even more complex if you have multiple users per site.

now there are frontends that try to deal with these admin problems for you (cPanel, webappliance etc) but they have issues of their own (they tend to get in the way if you want services other than those their designers thought of).

Using 100 guest OSes running on a server to support 100 web sites is insane.
depends on the situation. If all sites are run by one admin team then sure, just configure the stuff you need to handle multiple domains. If every domain just needs simple webhosting with basic script requirements then sure, go ahead and use cPanel or webappliance or similar. If each domain has scripts with unusual requirements, needs other low-load services in addition to web and mail, or you want complete independence in management, then virtual machines start to shine.

Re:Pointless test? (3, Insightful)

Albanach (527650) | more than 7 years ago | (#18527433)

It's not insane if people want different solutions or even want their own server. With virtualisation, a host can offer multiple php versions. You can avoid all the security problems where one script running as the webserver can read any other file accessible to the web server.

You can also get better management control of resources, preventing one site from eating up all available resources on the box.

That's not to say there aren't a million good reasons to use virtual servers in apache, just to point out that virtualising web hosts is not, by definition, a daft idea.

Re:Pointless test? (1)

afidel (530433) | more than 7 years ago | (#18527481)

Except you can offer better security and more flexibility with virtual servers than traditional shared hosting. For a site with no DB or custom native code then shared instances is fine, for anything requiring code running on the server virtual servers are better. The cool thing about ESX is if your servers are all running the same OS and version of their apps the static memory contents all get laid onto the same memory pages, meaning that increased ram usage for 20 servers vs 10 is very low.

Re:Pointless test? (1)

jimicus (737525) | more than 7 years ago | (#18527293)

True in some businesses.

But if you're speccing up a web application that you can be fairly certain will be used by hundreds of people simultaneously, then it's useful to know.

Of course, if you're speccing up the system that this web app is going to run under and you don't test performance before you go live, you'll come unstuck sooner or later anyhow.

Hidden advertisement (1)

quigonn (80360) | more than 7 years ago | (#18526975)

This smells like a hidden advertisement for "Web Performance Inc.". Now somebody please tell me why I should trust results produced by a relatively unknown product and company, rather than sticking to proven tools like Borland SilkPerformer [borland.com] or Mercury LoadRunner [mercury.com].

Bad data, bad setup (5, Insightful)

duncanFrance (140184) | more than 7 years ago | (#18526989)

There's quite a lot wrong with their setup.

1) As others have pointed out, they should be running on ESX to get best performance.
2) Physical machine was a dual-proc. How many processors did they assign to the VM?
3) Physical machine had 2GB memory. They assigned 2GB to the VM!! VMware will take 256MB of this
for itself, so part of the 2GB visible to Windows will end up swapped.
4) How many disks did the physical machine have, and what was on them?
If e.g. the physical machine had two disks, the VM should have been given two disk files, with each file being placed on a different physical spindle.

You get the picture.

Re:Bad data, bad setup (1)

suv4x4 (956391) | more than 7 years ago | (#18527677)

There's quite a lot wrong with their setup.

1) As others have pointed out, they should be running on ESX to get best performance.
2) Physical machine was a dual-proc. How many processors did they assign to the VM?
3) Physical machine had 2GB memory. They assigned 2GB to the VM!! Vmware will take 256MB of this
for itself, so that 2GB visible to Windows will be being swapped.
4) How many disks did the physical machine have, and what was on them?
If e.g. the physical machine had two disks, the VM should have been given two disk files, with each file being placed on a different physical spindle.

You get the picture.


And still, a 43% performance drop-off is quite acceptable for the kind of benefits virtualization gives. People pay far heftier performance penalties by using slow languages (PHP), heavy frameworks (Ruby on Rails), coding poorly, and not caching what is cacheable.

Re:Bad data, bad setup (1)

linuxgurugamer (917289) | more than 7 years ago | (#18527743)

Ummm, you didn't read the article. They specifically said that, with 2 GB assigned to the VM, the Linux-based host system actually had more memory than that.

Re:Bad data, bad setup (1)

duncanFrance (140184) | more than 7 years ago | (#18527871)

I did read the article:

"The virtualized server has the same memory available to it (2G) as the native server (which implies that the physical machine running VMware has more memory)."

It might "imply" it, but they fail to tell us how much memory the physical machine actually did have. Or whether VMware was set up to assign the memory in one block. Coz if not, yes, you guessed it, it's swap time again.

well duh (1)

Mantaman (948891) | more than 7 years ago | (#18527043)

Do I need to say more? We all know running any VM app is always going to be slower than a real OS on real hardware. OK, maybe I didn't realise how much slowdown there was, but there are easier and cheaper ways to run web servers. Nice to have given it a go and gathered stats, but nothing new.

Re:well duh (1)

Sobrique (543255) | more than 7 years ago | (#18527271)

Depends on your objective. If you want something that runs the fastest it possibly can, yes you're probably right. Don't use VMware. Regardless of whether it's web, database or ... well whatever, VMWare is an overhead.

In a perfect world, as a sysadmin I create a nice stable system, and it works really well. Maybe I run lots of apache instances on a single machine, and they co-exist happily.

But this isn't an ideal world. What VMware fundamentally does is work around the bogosity of Windows. It lets you create an environment that's trivial to replicate, hardware-abstracted, and that doesn't trash the rest of the machine around it when 'someone' inevitably screws up. Someone does something stupid with your single-instance Apache server, and it goes splat as they fill up /, or get a broken CGI on there, or something. The same things happen to a VM, but then it's dead easy to fix.

There's a reason that almost all the hardcore vendors are 'virtualising'. It's because, despite the performance overhead, the bonus you gain in manageability, scalability and reliability far outweighs the cost of having to buy a few more boxes. When you're talking about large sums of money riding on an outage, this is a no-brainer.

I'd still run websites on vmware, because then I can give users the power to do what they like, without screwing up my overall service. That's a good trade in my book.


data, datum, data (1)

frequnkn (632597) | more than 7 years ago | (#18527119)

I know that the term 'data' is generally used as a singular in informal speech, but it still drives me nuts. I bet there are a lot of other current or former Latin club members that howl at this literary fingernails-on-chalkboard usage of the term. Not that I stayed after school for Latin club meetings, or to play Civ, or D&D...

Silently weeps into his tattered copy of Remedia Amoris

-Foo

"Duh!" moment (4, Insightful)

Thumper_SVX (239525) | more than 7 years ago | (#18527215)

I agree with many of the commentators here that this is pretty obvious. We use virtualization a lot, but also realize its limitations. For example, we don't run SQL or anything heavily transaction or I/O bound. CPU utilization is usually not a problem; virtual machines perform as well as their physical counterparts in most instances unless you have a lot of CPU intensive virtual machines running.

Web servers are mostly memory and CPU bound which would give one the impression that they would be great candidates for virtualization. However, VMWare Server is not the solution; network I/O is not good on Server. Typically your results would be maybe 75% of the actual physical speed on a "passthrough", less on a NAT. It depends a lot on how your network is set up, not to mention the abilities of the physical machine.

The best solution is Virtual Infrastructure (used to be ESX). That product tackles most of the failings of VMWare server and fixes them. The only exception is that I still wouldn't run anything I/O heavy on VI. SQL's a no-no. Also, if you're not getting the performance from a single web server that you expect, you can easily throw up more web servers. Now, obviously you might get into M$ licensing issues, but that's why you run your web services on Apache :D

Use Virtuozzo (1)

pyite69 (463042) | more than 7 years ago | (#18527239)

Of course VMware and Xen are going to be slow - that is the tradeoff you get when you want the ability to run both Windows and Linux at the same time.

http://openvz.org/ [openvz.org] - it does a much better job of virtualizing IMO. The only minus is that all VM's have to use the same kernel version.

Re:Use Virtuozzo (1)

DaemonTW (733739) | more than 7 years ago | (#18527453)

This is what most hosting companies who offer VPSes do, as Virtuozzo is one of the only virtualisation packages that doesn't suffer a massive performance hit. Granted, this is more about the level at which things are virtualised than about better coding, but the performance difference is very significant.

I've seen servers running over a hundred virtual servers (mostly all low usage) without any problems, something that is completely out of the question with VMWare (even ESX).

I'm sure there will be many who will point out that it has less isolation and is therefore less secure; however, so far I haven't heard of any compromises.

Virtuozzo Blows (0)

Anonymous Coward | more than 7 years ago | (#18527587)

Granted, I've stayed as far away from it as I can, but can customers finally use yum without sick workarounds, or is it still effectively restricted to using vzyum on the hardware node itself, rather than the VE?

I'm sorry to say, but XEN kicks Virtuozzo's ass in terms of usability and stability.

First I thought this wasn't news... (-1, Flamebait)

Anonymous Coward | more than 7 years ago | (#18527253)

But then I'd read "ASP" and "IIS" and, finally, deleted my Slashdot bookmark.

How does the webapp scale? (1)

l0b0 (803611) | more than 7 years ago | (#18527261)

Unless it scales linearly with the number of users, that is a pretty useless metric for the performance of the virtualization system (No, I didn't RTFA).

Obviated Comment (1)

ursuspacificus (769889) | more than 7 years ago | (#18527263)

I have decided to refrain from commenting on this article, as I do not wish to be tarred with the epithet "me-too-er".

Having said that, "Me, Too!"

Fast Virtualization: Xen, KVM, Virtuozzo, GSX, ESX (3, Insightful)

dvdan (1081487) | more than 7 years ago | (#18527353)

For speed, the newer virtualization tools KVM, Xen, and Virtuozzo are substantially ahead of the present incarnation of VMware. KVM requires the new hardware-virtualization CPUs from Intel and AMD, which must be mentioned here, since they represent a major industry recognition of the value of virtualization. This article seems to give people the impression that the performance of VMware Server is indicative of virtualization tools in general, and that all virtualization tools slow down hosted virtual machines dramatically. This is simply false. I know hosting providers running 50 virtual servers on a single dual-CPU box with thousands and thousands of users, which would simply not work if all virtualization tools took a 43% hit per instance. Another key matter here is that the author fails to mention (or realize?) that VMware Server is crippleware. VMware states explicitly not to use VMware Server for anything other than testing, because it does not have the performance or feature set of their full-blown ESX and GSX servers. Also, while VMware may be the oldest and arguably most mature virtualization suite, it is certainly not the fastest.

People still use IIS? (1)

DragonTHC (208439) | more than 7 years ago | (#18527387)

who are these people? why are they not being publicly flogged?

Quantifying (1)

Vexorian (959249) | more than 7 years ago | (#18527393)

I think it was pretty obvious that it would add overhead and therefore drop performance. This study is good for quantifying how much of a performance hit it is. And 43% is not an incredibly bad value in my opinion; it's not even 50%...
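To put a number on it, here's the back-of-the-envelope arithmetic, assuming a hypothetical native capacity of 100 simultaneous users (the study's absolute numbers depend on their setup):

```python
import math

native_users = 100   # hypothetical native capacity
drop = 0.43          # the reported performance drop

per_vm = round(native_users * (1 - drop))         # users one VM still handles
vms_for_parity = math.ceil(native_users / per_vm) # VMs needed to match native

print(per_vm)          # 57
print(vms_for_parity)  # 2
```

So roughly one extra box gets you back to parity, which is the trade-off (manageability for hardware) the whole thread is arguing about.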

These results are pretty much as expected (2, Informative)

Sangui5 (12317) | more than 7 years ago | (#18527413)

It isn't surprising that VMWare would be bad at a web-app workload. See the original paper on Xen:

http://www.cl.cam.ac.uk/research/srg/netos/papers/2003-xensosp.pdf [cam.ac.uk]

Top of page 9 has a chart comparing native Linux, Xen, VMWare, and UML for different workloads. They show VMWare degrading performance by over 70% for SPECWEB 99.

Web applications are OS intensive; while VMWare is quite good at pure CPU-bound tasks, it has to perform a lot of emulation whenever you are running inside the OS. So it will stink at anything with lots of small IO, lots of metadata operations, or lots of process creation/switching. For example, VMWare shows a whopping 90% slowdown for OLTP database workloads, according to the Xen paper, and it really isn't surprising. The OS microbenchmarks in the above paper (page 10) show that VMWare has abysmal performance for things like fork(), exec(), mmap(), page faults, and context switches.

Basically, Xen doesn't have to emulate the OS, because they make modifications to the OS. VMWare does dynamic binary rewriting (think fancy emulation) to run an unmodified OS; they therefore pay through the nose in performance overhead for OS-intensive workloads.
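The fork() overhead is easy to see for yourself. A rough microbenchmark in the spirit of those OS microbenchmarks (POSIX-only; the absolute numbers depend entirely on the machine and on whether you're inside a VM):

```python
import os
import time

def time_forks(n=200):
    """Average wall-clock cost of fork() + child exit + wait, in seconds."""
    start = time.perf_counter()
    for _ in range(n):
        pid = os.fork()
        if pid == 0:
            os._exit(0)        # child: exit immediately, skipping cleanup
        os.waitpid(pid, 0)     # parent: reap the child
    return (time.perf_counter() - start) / n

if __name__ == "__main__":
    print("avg fork/exit/wait: %.1f microseconds" % (time_forks() * 1e6))
```

Run it natively and then inside the guest; the ratio between the two results is the interesting number, not either figure on its own.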

"Typical Application" they for sure ment 42% (1)

drolli (522659) | more than 7 years ago | (#18527435)

What what the typical application. Was it well writen or not. what was the reason for the slowdown (Memory, Network, ?). Without that 43% is just a munber as good as 42%, the answer to all performance loss questions

Xen Scales (1)

ndverdo (799508) | more than 7 years ago | (#18527485)

The CPU overhead of the Xen hypervisor is much, much lower: between 2% and 4%. HTTP I/O has been less explored. There have been workshops on its use in HPC: http://xhpc.wu-wien.ac.at/ [wu-wien.ac.at] http://xhpc.ai.wu-wien.ac.at/ [wu-wien.ac.at]

That's easy to fix! (1)

JoeD (12073) | more than 7 years ago | (#18527505)

Since a virtual server only gets 57% of the performance of a physical server, just run TWO virtual servers!

That way, you'll get 114% throughput!

Next week, I solve world hunger, global warming, and bring peace to the Middle East.

Would be interesting if done right (1)

ceeam (39911) | more than 7 years ago | (#18527543)

1. They fucked up their setup. They assigned 2 gigs to the VM when all the host has is 2 gigs too? Brilliant.
2. Since when are you allowed to post benchmarks of MS software?

VM performance comparison (1)

chargrilled (468628) | more than 7 years ago | (#18527733)

Has anyone done a recent, true apples-to-apples comparison of VMware, Virtual Server, and maybe Xen on the exact same hardware running the same guest VMs? Obviously it would have to have Windows as the host OS, due to Virtual Server being Windows-only, but I would like to see how the various solutions stack up in different scenarios, such as I/O heavy, CPU heavy, sheer number of VMs that can be hosted, etc.

really bad report... (1, Informative)

Anonymous Coward | more than 7 years ago | (#18527815)

But it highlights one thing: if you hand virtualization to clueless people, you'll get bad perfs.

It also shows, both in the article and in the comments here, the severe misunderstanding surrounding the concept of "virtualization".

I see lots of clueless people saying "uh, of course, virtualization perfs sucks". I think those people don't realize today's virtualization technology ain't grandpa's past-century emulators.

There are today virtualization technologies that offer basically native speeds. Xen can now run in two modes (para-virt or hardware-virt, the latter if the MOBO/BIOS/CPU supports Intel-VT / AMD-V)... In paravirt mode Xen offers native speeds (the overhead is so small you'll have a hard-time measuring it). Better: network I/O ain't good enough for you? Simply "passthrough" a PCI device (say a PCI network card) to your paravirtualized guest. The guest (and only the guest) is directly accessing the PCI card (no more network I/O problems). But you can't run Windows on Linux using paravirt under Xen...

In hardware-virtualized mode, under Xen (or KVM, which only does hardware-virt), you can run Windows. Network and disk I/O, for hardware virt, at this point sucks. However you can install special drivers in your guest to make it speedier (drivers for Windows under Xen are $$$ and under development for KVM).

But, wait, there's more to come... Next gen IOMMU is around the corner. And as soon as it gets implemented in Xen, the already super-fast virtualized system gets an additional boost and you'll have something even closer to native, even when running Windows under Xen.

If you think "virtualization will always be slower" you need a reality check: the CPU makers are working hard so that the virtualization overhead becomes irrelevant. And suddenly the ones not using virtualization will find themselves with a less capable, less secure, less maintanable box being, in some particular, anecdotical, cases only 0.05% faster.

Virtualization is here to stay and the overhead, already very small today, will keep shrinking.
