Virtualizing Cuts Web App Performance 43%

czei writes "This just-released research report, Load Testing a Virtual Web Application, looks at the effects of virtualization on a typical ASP Web application, using VMware on Linux to host a Windows OS and IIS web server. While virtualizing the server made it easier to manage, the number of users the virtualized Web app could handle dropped by 43%. The article also shows interesting graphs of how hyper-threading affected the performance of IIS." The report urges readers to take this research as a single data point: no optimization was done on host or guest OS parameters.
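For readers unfamiliar with this kind of measurement, "capacity" here means sustained requests per second under load. A minimal sketch of the idea (not the report's actual methodology, which used a commercial load-testing tool and many concurrent virtual users) might look like:

```python
# Minimal load-test sketch: spin up a trivial local HTTP server,
# fire sequential requests, and report throughput. This only
# illustrates the measurement; real load tests simulate many
# concurrent users ramping up until response times degrade.
import http.server
import threading
import time
import urllib.request

class QuietHandler(http.server.SimpleHTTPRequestHandler):
    def log_message(self, *args):  # silence per-request logging
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), QuietHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/"

n = 50
start = time.time()
for _ in range(n):
    urllib.request.urlopen(url).read()
throughput = n / (time.time() - start)
print(f"{throughput:.0f} requests/second")
server.shutdown()
```

Comparing that throughput figure for the same application running natively and inside a VM is, in essence, what the report did.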
  • Virtualize this (Score:3, Insightful)

    by Anonymous Coward on Thursday March 29, 2007 @08:35AM (#18526789)
    That is all very well, but we all KNOW apps slow down when we run them in a VM. What difference does it make to the average n00b who wants to watch funny videos [digg.com] and check their email? Anyone using computers for serious number crunching obviously won't virtualize anyway. No big deal.
    • Re:Virtualize this (Score:5, Interesting)

      by Fordiman ( 689627 ) <fordiman@g[ ]l.com ['mai' in gap]> on Thursday March 29, 2007 @08:40AM (#18526835) Homepage Journal
      I do like the idea of a variably sized beowulf cluster running a floating number of package (LAMP) servers. Get more clients? Add more VLAMPs. Things slowing down? Add more hardware.

      You still take performance hits, but if you can scale your system by just adding cheap commodity systems, that works. Plug it in, boot it off a CD, and let the Cluster take control.

      • by thegnu ( 557446 ) <thegnu@noSpam.gmail.com> on Thursday March 29, 2007 @09:08AM (#18527071) Journal
        I do like the idea of a variably sized beowulf cluster running a floating number of package (LAMP) servers. Get more clients? Add more VLAMPs. Things slowing down? Add more hardware.

        I started getting aroused as I read your post. This is highly disturbing.
        • A sister department here actually does this for its academic hosting. Though, it's a grid, not a cluster.
      • by eno2001 ( 527078 )
        Not to mention the fact that paravirtualization as well as hardware assisted virtualization like Xen offers (and later Longhorn) really cut the performance issues WAY the hell down. With a system like Xen you get very close to bare metal speeds since there is no such thing as a "host OS" to get in the way.
        • Not to mention the fact that paravirtualization as well as hardware assisted virtualization like Xen offers (and later Longhorn) really cut the performance issues WAY the hell down.

          Paravirtualization also requires a free software OS kernel, and IIS-only web applications do not yet run on any free software kernel. (ReactOS is nowhere near mature enough.) Any virtualization also requires more OS licenses, and higher-class, more expensive OS licenses at that. Or do you claim that all web app developers should drop IIS-only frameworks immediately, and all enterprises that rely on IIS-only web applications should drop their mission-critical IIS-only web applications immediately?

          • by joto ( 134244 )

            Paravirtualization also requires a free software OS kernel

            Uhm, why? It's not like the virtualization program reads the license to the source code, or anything. I think you are confusing the issues here.

            Or do you claim that all web app developers should drop IIS-only frameworks immediately

            Yes, that would be a good idea

            and all enterprises that rely on IIS-only web applications should drop their mission-critical IIS-only web applications immediately?

            Yes, at least as soon as possible.

    • by Alioth ( 221270 )
      Actually, you've got it the wrong way around. The Xen authors did a study, and found that virtualization (whether User Mode Linux, VMware or Xen) had very little impact on performance when number crunching. VMware and UML had a very high performance penalty for I/O-heavy workloads - i.e. funny videos and checking email. So yes, it will affect the noob the most if the server they are going to is using virtualization, because virtualization has a high performance impact on the sorts of things they do.

      Parav
    • Do you recognize that there are millions of people using computers for something in between "watching funny videos" and "serious numbercrunching"?

      Check out Amazon EC2 and tell me again who is and isn't going to be using virtualization?
  • Well, (Score:5, Insightful)

    by Fordiman ( 689627 ) <fordiman@g[ ]l.com ['mai' in gap]> on Thursday March 29, 2007 @08:37AM (#18526807) Homepage Journal
    Duh.

    Seriously. I don't know who gave anyone the impression that virtualization was a performance booster. Management improver? Sure. Stability insurance? Why not? But if you don't get that virtualizing your servers imposes a bit of overhead, then you're probably not paying attention.

    I especially love the idea that running different types of server virtualized on the same machine is a good idea; the idea of virtualization of multiple servers is to distribute the load. If you have, say, ftpd, httpd and mysqld running as their own virtualized systems, they will all get hit *simultaneously*.

    Again. Duh.
    • Re:Well, (Score:5, Insightful)

      by Mr. Underbridge ( 666784 ) on Thursday March 29, 2007 @09:17AM (#18527157)

      Seriously. I don't know who gave anyone the impression that virtualization was a performance booster. Management improver? Sure. Stability insurance? Why not? But if you don't get that virtualizing your servers imposes a bit of overhead, then you're probably not paying attention.

      Well, I think the point was that he attached an actual number to the amount of the performance hit, which is relevant. That's called research; quantifying and proving that which seems 'obvious'.

      • Re: (Score:3, Interesting)

        by Fordiman ( 689627 )
        Well put. But I do know a number of people in the industry that will be shocked by this, which was who I was referencing.

        But really. If you've got the money for the extra hardware to maintain performance, I say go for the virtualization, if only to make your IT guys' lives easier (happy IT is useful IT).
      • The key thing to note is that their tests don't substantiate their conclusion:
        > These results indicate that a virtualized server running a typical web application may experience
        > a 43% loss of total capacity when compared to a native server running on equivalent hardware.

        This may lead to people believing that virtualization just isn't worth the advantages. The key problem is that there are several virtualization schemes. Off the top of my head, I can list:
        * Xen
    • Re:Well, (Score:5, Insightful)

      by hey! ( 33014 ) on Thursday March 29, 2007 @09:32AM (#18527311) Homepage Journal
      Well, it's not a surprise, but it's probably worth quantifying.

      Here's a question: what is more available: hardware or skilled system administrators? Obviously hardware.

      Here's a common scenario: you've set up a system to provide some useful package of services. How do you let other people duplicate your success? (1) tell them what hardware they need and (2) have them install and configure the software on their hardware. Guess which item involves the most cost in the long run?

      The hardware is easy; the greatest barrier and cost is the process of installing and configuring the software. That's one place a virtual machine is worth considering in production systems. You aren't going to use something like VMWare in one-of-a-kind production systems. You're going to use it when you need to clone the same setup over and over again. This is very attractive for application vendors, who spend huge amounts on installation support and tracking down compatibility conflicts.

      Another application would be an IT department that has to support dozens of more or less identical servers, especially if they are frequently called upon to set up new servers. If I had a choice, I'd use Linux virtualization on a midrange or mainframe, but if those servers must be Windows servers, then I'd be looking at some kind of cluster with SAN. This is not really my area of expertise, but we're talking high end boxen for x86; if the typical server didn't need 100% of a box, then I have three choices: waste server bandwidth (expensive), force groups to share servers (awkward and inflexible; what if I have to separate two groups joined at the hip?), or virtualization.

      Naturally if you are virtualizing production servers, you need to scale your hardware recommendation up to account for VM overhead.

      What would be very interesting is a study of the bottlenecks. If you are considering a system with certain characteristics (processor/processors, memory, storage/raid etc) and you have X dollars, where is that best spent?
      • Re:Well, (Score:5, Insightful)

        by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Thursday March 29, 2007 @11:02AM (#18528469) Homepage Journal
        To us, the whole point of virtualization is that we have several servers which are mostly idle at all times and completely idle at most times, and for support reasons we're not supposed to be running anything else on the same copy of windows. So we can replace five or six systems with a four-core 1U box with just a few gigabytes of memory, which will recover rack space and reduce power consumption. For anything that's actually heavily loaded, running on the hardware is probably a very good idea.
        • Re: (Score:3, Informative)

          by inKubus ( 199753 )
          It's not just support reasons. A lot of MSFT products require a dedicated server because they use the Default Web Site in IIS ;) Multi-Tenancy is not an option for many even modern server products. So, virtualize the server.
        • That is the primary stated purpose for VMWare. Also, once you are in a VM environment, you can load balance these single-app, low-utilization workloads according to their needs. Have a report server that only gets hit heavily once a quarter? Leave it on a slow core for most of the time, and send it to a beefy quad core at end of quarter without having to reboot the machine.

          The other factor is the cost of staying current on hardware- if you have 300 single app servers running at 10% CPU utilization and you w
    • Re: (Score:3, Interesting)

      by jalefkowit ( 101585 )

      if you don't get that virtualizing your servers imposes a bit of overhead, then you're probably not paying attention...

      In fairness, a 43% performance hit is a bit more than "a bit". It's cutting performance nearly in half.

      I agree with your overall sentiment (a virtualized system is going to be slower than the same system running on real metal, by definition) but 43% is certainly a higher figure than I would have expected.
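To put the article's figure in hardware terms: if virtualization leaves you with 57% of native capacity, serving the same peak load takes roughly 1.75x the iron. A back-of-the-envelope sketch (using the 43% number from this one report, which may not generalize to other hypervisors or workloads):

```python
# How much extra hardware does a given virtualization overhead imply
# if you must serve the same peak load? Uses the article's 43% figure,
# which is a single data point, not a universal constant.
def hardware_multiplier(overhead):
    """Factor by which raw capacity must grow to offset `overhead` (0..1)."""
    if not 0 <= overhead < 1:
        raise ValueError("overhead must be in [0, 1)")
    return 1.0 / (1.0 - overhead)

print(round(hardware_multiplier(0.43), 2))  # roughly 1.75
```

So "nearly half" the performance translates into needing nearly twice the hardware at peak, which is why the headroom question matters so much.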

  • Bogus Test (Score:5, Informative)

    by Anonymous Coward on Thursday March 29, 2007 @08:40AM (#18526831)
    Who uses VMWare Server in a production environment anyway? We run all of our Web services, Exchange servers and SQL databases in VMWare's Virtual Infrastructure 3. VMWare Player and Server are only meant for lab environments and low-load applications. VMWare even says as much on their website. Either this is just FUD or the author is an idiot. In other news, water is wet.
    • So does VMWare's Virtual Infrastructure 3 perform much better? Or is it just more manageable setup and config wise? Sorry, I'm fooling around with VMWare Server and am a noob on the issue.
      • by tji ( 74570 )
        It uses a thin "hypervisor" layer as the Host OS, rather than a general purpose OS. The hypervisor is built to host virtual clients, so I would expect it to provide better performance than doing it on Linux. But, in any virtualization environment, there will surely be some overhead, and the performance will be lower than the client OS on raw hardware.
      • by nharmon ( 97591 )
        Yes it does. ESX (aka Virtual Infrastructure) the hypervisor (thing that the virtual machines run on) is the base OS rather than an application running on a host OS like with Vmware Server or Workstation/Player. As a result the overhead is extremely low. But I'm not sure that is really relevant. Virtualization is appropriate when your application is not using your iron's full potential. If you virtualize an application that is already maxing out processor/memory, you are not following the published best pra
      • Re:Bogus Test (Score:5, Insightful)

        by sammy baby ( 14909 ) on Thursday March 29, 2007 @09:44AM (#18527451) Journal
        Yes: it performs much, much better.

        VI3 is actually a suite of products. At the heart is VMware ESX Server [vmware.com], which is actually an operating system in its own right: it runs "on the metal," without having Windows or Linux installed already on the system. It also has a service console operating system which looks suspiciously like a *NIX style operating system, so you can SSH directly to the system, cd into your /vmfs directory and, say, scp disk files over the network. If you wanted to.

        However, as a pretty damn safe rule of thumb, no system is going to run faster on equivalent hardware after being virtualized. In a prior job where I was often asked to provide development/test systems, I got phone calls from a lot of people who were bitten hard by the virtualization bug. Whenever someone brought up any issue having to do with infrastructure, no matter how odd or off the wall, they wanted to push virtualization as a solution. I had to explain to them that if your problem is that a web server is slow, the answer isn't to install VMWare server on it, set up two guest operating systems, and say, "There! Now I have two web servers." You'd be surprised how pervasive that sort of thinking is, even among people who should patently know better.

        Another useful guideline: various types of services are impacted differently by being virtualized. Generally, the best candidates for virtualization are ones that spend a lot of time idle. This is actually more common than you might think - people need a server set up for something, can't put it on a pre-existing system for security/compatibility reasons, so they go out and buy a new system which is ten times more powerful than they need. You can put a lot of these kinds of systems on a single, reasonably powerful ESX server. On the other hand, systems that heavily tax available hardware, especially I/O, are usually much harder to deal with.
        • At the heart is VMware ESX Server, which is actually an operating system in its own right: it runs "on the metal," without having Windows or Linux installed already on the system. It also has a service console operating system which looks suspiciously like a *NIX style operating system, so you can SSH directly to the system, cd into your /vmfs directory and, say, scp disk files over the network. If you wanted to.

          ESX Server is actually based on Linux - it's a very heavily modified version of RedHat. That's
          • Ha! You know, I suspected that was the case, but when I did a "mount" command it listed all the partitions like I'm accustomed to seeing them in Solaris, so I wasn't sure. Thanks for clearing that up. :)
          • Re:Bogus Test (Score:5, Informative)

            by Bohiti ( 315707 ) on Thursday March 29, 2007 @11:59AM (#18529331) Homepage
            Actually, as it's been explained to me, the ESX hypervisor itself is pure proprietary code (and small, too). The Service Console is very readily admitted to be a tweaked out RHEL (3, I believe..). Linux is used to boot, and then (magically?) transfers control of the bare metal to the hypervisor. Linux then jumps into a virtual machine, although it's not presented like a virtual machine, which creates all this confusion.

            In the end, the tweaked RHEL that you interact with (ssh, scp) is not the hypervisor, but a VM with special tools that can manipulate the hypervisor.
        • However, as a pretty damn safe rule of thumb, no system is going to run faster on equivalent hardware after being virtualized.

          Oddly enough, that's not as true as you'd think. I know that HP and IBM both had projects to virtualize hardware (not exactly VMWare-style virtualization, more like a JIT optimization) where software ran faster after being virtualized. By about 20% if I remember correctly. HP's project was Dynamo [arstechnica.com] and IBM's was DAISY [ibm.com].

          Virtualizing a CPU at runtime in a JIT-optimization fashion c

          • Re: (Score:3, Interesting)

            by kscguru ( 551278 )
            Not really - HP and IBM's projects get 20% improvements by optimizing slow code - that is, untuned userspace applications. Take a whole system, including a kernel that multitudes of people have spent years tuning (Linux, Windows), server apps that already squeeze in as many tricks as possible (Apache), and the net gains of re-translating instructions diminish as the underlying apps already pull in more of these optimizations. Dynamo and DAISY also gloss over one crucial detail: you need a good-sized cache
        • by Richy_T ( 111409 )
          Don't try to scp disk files directly from the server; it won't work due to the special filesystem VMware ESX Server uses (and unfortunately, you can't export to stdout or to a network share, so be careful when setting up initial partitions). You have to export them first. You may need to import them if you scp them to it too (I haven't had the need to do that so far).
          • No you don't. I host templates or golden copies of my VMs on a NFS filesystem all the time. If I'm too lazy to set up the proper vmkernel to mount NFS shares directly to ESX, I mount the NFS filesystem to a solaris box and scp them up.
        • I had to explain to them that if your problem is that a web server is slow, the answer isn't to install VMWare server on it, set up two host operating systems, and say, "There! Now I have two web servers."

          Better yet, I'll setup one $50k 4x Quad-Core Xeon system with 8GB RAM, 4 NIC's, etc and load up $20k of VMWare ESX and I can easily run 8 Web server instances and get ALMOST the same performance and availability as 8 separate $5k servers. Maybe this makes sense somewhere out 3-4 years based on mangemen

            Better yet, I'll setup one $50k 4x Quad-Core Xeon system with 8GB RAM, 4 NIC's, etc and load up $20k of VMWare ESX and I can easily run 8 Web server instances and get ALMOST the same performance and availability as 8 separate $5k servers. Maybe this makes sense somewhere out 3-4 years based on management, power and space cost savings, but I can't see it.

            In this example, while the single big box might have a lower total peak performance if all 8 VMs were maxed at the same time, this may rarely happen all at

        • by inKubus ( 199753 )
          DB Server, no
          Dedicated Time Server, yes
          Web Front End, maybe
    • Re:Bogus Test (Score:5, Informative)

      by Sobrique ( 543255 ) on Thursday March 29, 2007 @09:17AM (#18527161) Homepage
      Actually, at the company I worked for 6 months back, one of the projects I was involved in was 'VMWare'. Production stuff running on the ESX servers (which became 'virtual infrastructure') in our datacentre, as a cost effective scalable environment. Yes, we weren't getting 'uber performance', but then again, we were running 150 or so VMs on a 6-server VMWare farm.

      One of the other things we prototyped and deployed was 'site services packages' - get GSX (now VMWare Server), stick it on a pair of 2U servers, and attach a storage array to both of them. Then create your 'template' fileserver, DHCP server, print server, proxy, that kind of thing and deploy them to this package. It worked very well indeed - you get a whole new order of magnitude on stability (although to be fair that's in part because we threw away the crappy workstations that were doing the 'low intensity' stuff) and it was extremely manageable, and trivially replaceable in the event of a hardware failure.

      Performance? No, VMWare isn't that great on performance. Whilst it's not bad in an ideal situation, fundamentally what you are doing is introducing an overhead on your system, and probably contention too. But it's really good at efficient resource utilisation, easy manageability and maintainability.

      As an experienced sysadmin, my reaction is screw performance. Let's start with reliable and scalable, and then performance just naturally follows, as does a really high grade service.

      Proactive laziness is a fundamental of systems admin. Your job is essentially to put yourself out of a job - or more specifically, free up your time to play with toys. The best way to do this is build something stable, well documented and easily maintainable. Then your day consists of interesting stuff, punctuated by the odd RTFM when something doesn't work quite right.

    • Actually, I've worked for a place that uses VMWare on some of its production servers. They spend really ridiculous amounts of money on a really big server with a bunch of CPUs (one has at least 96), and then use VMWare ESX Server to run multiple virtual servers on the same box. It's actually a good approach, since ESX server uses hypervisor virtualization, which gives you much lower overhead than traditional virtualization while giving finer-grained control over the resources each virtual server gets.

      It's
      • Re: (Score:3, Insightful)

        by afidel ( 530433 )
        Here's [google.com] a little spreadsheet I created to do a cost/benefit analysis for Vmware ESX. There are some assumptions built in, and it's not yet a full ROI calculator, but it gets most of the big costs. Cell A1 is the number of our "standard" systems to be compared (4GB dual cpu 2003 machines). The DL580 is 4xXeon 7120 with 32GB of ram, local RAID1 on 15k disks, dual HBA's and a dual port addon NIC. The DL585 is 2xOpteron 8220HE with 32 or 64GB of ram (the 580 with 64GB was more expensive than buying two with 32GB
        • That's similar to what we found. Note that at 20 servers (10 apiece for 2 of the larger servers), your savings are about 36%. That's huge. One thing I noticed with your spreadsheet, though, is that you don't take into account the extra manpower needed to maintain the hypervisor. We found that we needed one more admin per physical server, and that he/she had to be pretty skilled above and beyond our Windows and UNIX admins (= more $$$) but YMMV.
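The arithmetic behind such a spreadsheet can be sketched as follows. Every price below is a made-up placeholder (chosen so the example lands near the roughly 36% savings figure mentioned above), not an actual HP or VMware quote, and manpower costs are deliberately left out, which can change the outcome:

```python
# Hypothetical consolidation cost model. All numbers are placeholders
# for illustration; admin/manpower costs are omitted.
import math

def consolidation_savings(n_vms, price_per_standard_server,
                          price_per_big_host, vms_per_host,
                          license_per_host):
    """Fraction saved by consolidating n_vms standard servers onto
    big virtualization hosts. Negative means consolidation costs more."""
    physical_cost = n_vms * price_per_standard_server
    hosts_needed = math.ceil(n_vms / vms_per_host)
    virtual_cost = hosts_needed * (price_per_big_host + license_per_host)
    return 1 - virtual_cost / physical_cost

# 20 standard $5k servers vs. 2 big hosts at $25k each plus $7k licensing
print(round(consolidation_savings(20, 5_000, 25_000, 10, 7_000), 2))  # 0.36
```

The model also makes the break-even visible: with fewer VMs per host, or pricier hosts, the savings fraction drops and can go negative.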

          • by afidel ( 530433 )
            One admin per physical server?????
            Dude, I admin 130+ physical servers and ~700 users myself, having another admin would be a luxury. I'm planning on going this way to reduce MY overhead, it's a lot easier to roll out a couple new VM's than it is to rack and stack the equivalent number of pizza boxes =) I just have to make the business case. Having an admin per 20 servers would be purely insane, that would make labor one of the most expensive pieces of the IT budget, which just is not the case around here e
    • We are.

      We have hundreds of servers, most of which aren't even coming close to suffering performance problems due to over-utilization, so it turns out to be more cost effective for us to use virtualization on systems running applications that aren't necessarily processor-intensive (which, in reality, makes almost no difference) than to take the cost hit of hardware and maintenance on a bunch of servers that are being underused.

      We don't use it for everything, just where it makes sense.

  • by Fuyu ( 107589 ) on Thursday March 29, 2007 @08:41AM (#18526845)
    They performed the test on VMware Server not VMware ESX Server which is what most enterprises will use. VMware ESX Server runs on "bare metal", so it does not have the overhead of the host operating system.
    • Re: (Score:2, Informative)

      by dc29A ( 636871 )
      They performed the test on VMware Server not VMware ESX Server which is what most enterprises will use. VMware ESX Server runs on "bare metal", so it does not have the overhead of the host operating system.

      Doesn't VMWare ESX run on some modified Red Hat version?

      Also, we run ESX in our production environment, when we stress tested a web application running on IIS and with ASP/VB, the ESX machine couldn't give us more than 10 transactions per second (there was one single VM running on ESX). ESX was crawling.

      T
      • Re: (Score:2, Informative)

        by Fuyu ( 107589 )
        Yes VMware ESX Server runs a modified version of Red Hat Linux.

        According to Wikipedia [wikipedia.org], "VMware ESX Server uses a stripped-down proprietary kernel (derived from work done on Stanford University's SimOS [stanford.edu]) that replaces the Linux kernel after hardware initialization. The Service Console (also known as "COS" or as "vmnix") for ESX Server 2.x derives from a modified version of Red Hat Linux 7.2. (The Service Console for ESX Server 3.x derived from a modified version of Red Hat Enterprise Linux 3.) In general, thi
    • Re: (Score:3, Informative)

      by Anonymous Coward
      ESX Server still gives you a base 40% performance hit. I run a ~600 VM farm under VI3 and our performance on Apache fell from 15000 requests/s (mostly static content) to 5000. That was during a load test with one single virtual machine running on the blade. The same load test using IIS went from 13000 to 9000. Also a huge performance hit, although not quite as bad as on Linux. And before anyone says anything, I'm a Linux tech and I was somewhat depressed about the results, to our Windows techs' great joy.
      • by Quikah ( 14419 ) on Thursday March 29, 2007 @02:03PM (#18531213)
        I think everyone is kind of looking at this the wrong way. Sure you get a performance hit, but you are testing maximum performance. That is a situation where you wouldn't want to virtualize anyway. If the system is running at 100% utilization, then leave it alone. It is more interesting to take your servers running at 20-30% util (if that; how many idle servers do YOU have?), and cram them all into a couple of boxes. You most likely WON'T see a performance drop because there was so much headroom on the system already. Virtualization struggles at the max utilization case, but then that is not the case that it is really meant for.
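That headroom argument is easy to make concrete. A rough sizing sketch, measuring everything in "standard server" units and assuming the article's 43% overhead (which will vary by hypervisor and workload):

```python
# How many lightly-loaded guests fit on one virtualization host?
# Capacities are in "standard server" units. The default overhead is
# the article's 43% figure, assumed here for illustration only.
def max_guests(avg_utilization, host_capacity, overhead=0.43):
    if avg_utilization <= 0:
        raise ValueError("utilization must be positive")
    effective = host_capacity * (1.0 - overhead)
    return int(effective / avg_utilization)

print(max_guests(0.25, 4.0))  # a 4-way host holds 9 guests idling at 25%
print(max_guests(1.0, 1.0))   # a fully-loaded server fits 0: don't virtualize it
```

Even with a large overhead, mostly-idle servers consolidate well; a server already at 100% doesn't fit at all, which is exactly the parent's point.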
  • by SCHecklerX ( 229973 ) <greg@gksnetworks.com> on Thursday March 29, 2007 @08:45AM (#18526883) Homepage
    Network performance for Linux under VMWare is pretty bad. An interesting visual confirmation is to use an ssh shell and watch the lag. That may just be the Broadcom chips in the servers the company I was working for used, though. Guest OSes are fine for some low traffic stuff that only a few people will be using, and are definitely the way to go in the test lab; but I wouldn't use this configuration as a company's primary reverse proxy or mail solution.

    That said,
    I use a windows vmware session under linux for those times I have no choice, and it works just fine network-wise as a workstation.
    • by div_2n ( 525075 )
      I don't see this at all running 13 VMs. But then again, I've got 6 Gigabit NICs load balanced on a Gigabit backplane with the VMs all running on an independent SAS array on a quad processor hyperthreaded box with 32 GB of RAM. But perhaps your box has equally as good specs, I don't know.
      • Re: (Score:3, Funny)

        But then again, I've got 6 Gigabit NICS load balanced on a Gigabit backplane with the VMs all running on an independent SAS array on a quad processor hyperthreaded box with 32 GB of RAM. But perhaps your box has equally as good specs, I don't know.

        Oh yeah? Well my Johnson is longer than yours, and my son can beat up your son.

  • Sounds about right (Score:2, Informative)

    by Anonymous Coward
    My first attempt at virtualization was last September with VMWare Server. During testing everything seemed fine. When everything was using it, performance was awful. Everything crawled. I ended up doing an all-nighter to move everything back to a regular server. Note, I wasn't overloading things. There was only one VM on the host. The memory was fixed, not paged to a disk like it is by default. The hard drive was preallocated. My intention for virtualization was to make things easier to manage.

    That
    • by Alioth ( 221270 )
      Yes, Xen rocks. The difference is Xen is a lightweight hypervisor, and VMware Server is heavyweight, more like an emulator. I/O intensive loads in particular suffer extremely badly under VMware Server (and User Mode Linux), but run almost as fast (i.e. indistinguishable) as on the bare metal with Xen.
  • by Visaris ( 553352 ) on Thursday March 29, 2007 @08:48AM (#18526923) Journal
    Dell Poweredge SC1420 with dual Xeon 2.8GHz processors

    While I can't seem to find all the information on the SC1420, it appears as though this product uses processors from the Prescott generation of Intel CPUs. Some chips from this group support "Vanderpool", Intel's hardware virtualization solution, but not all do. The presence or absence of this feature could greatly impact the performance penalty faced by operating a virtualized computing environment. Further, Intel's new Core 2 based CPUs feature a hardware virtualization implementation which may have vastly different performance characteristics. AMD's K8 family supports hardware virtualization as well. I'm excited about their new line of CPUs based on the K10 (Barcelona) core, which feature "Nested Page Tables," which are supposed to greatly reduce overhead by doing memory translations in hardware instead of in software by the hypervisor.

    All I'm really trying to say is that this article really is only a single data point. I wouldn't let their results influence your overall view of virtualization in any way...
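Whether a given Intel or AMD chip actually exposes these extensions can be checked from a running Linux system via the CPU flags reported in /proc/cpuinfo (`vmx` for Intel VT-x/"Vanderpool", `svm` for AMD-V). A small sketch:

```python
# Detect hardware virtualization extensions on Linux by scanning the
# flags advertised in /proc/cpuinfo. Reports "unknown" on systems
# without that file (non-Linux, or a restricted environment).
def hw_virt_support(cpuinfo_path="/proc/cpuinfo"):
    try:
        with open(cpuinfo_path) as f:
            flags = f.read().split()
    except OSError:
        return "unknown"
    if "vmx" in flags:
        return "vmx"   # Intel VT-x ("Vanderpool")
    if "svm" in flags:
        return "svm"   # AMD-V
    return "none"

print(hw_virt_support())
```

Note that a flag being absent can also mean the feature was disabled in the BIOS, not only that the silicon lacks it.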
  • Pointless test? (Score:4, Insightful)

    by geoff lane ( 93738 ) on Thursday March 29, 2007 @08:52AM (#18526953)
    Come on! You run virtualised web servers because 99.9% of all web servers are idle at any given time. So you put 100 on a server. The customer doesn't see any worse performance with their 3 hits a week page and the ISP makes more money/server.

    • Did you know that one installation of Apache can serve multiple web sites [apache.org]? IIS can do the same. Using 100 guest OSes running on a server to support 100 web sites is insane.
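For reference, serving several sites from one Apache instance takes only a name-based virtual host block per site. This is a minimal Apache 2.x-era sketch; the hostnames and paths are examples only:

```apache
# Name-based virtual hosting: one Apache installation, many sites.
# Hostnames and DocumentRoot paths below are hypothetical examples.
NameVirtualHost *:80

<VirtualHost *:80>
    ServerName www.example.com
    DocumentRoot /var/www/example.com
</VirtualHost>

<VirtualHost *:80>
    ServerName www.example.org
    DocumentRoot /var/www/example.org
</VirtualHost>
```

The replies below spell out why some hosts still prefer one guest OS per customer despite this: isolation, per-site root access, and resource control.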
      • Virtualisation gives you one major advantage over this: isolation. I have a single instance of Lighttpd serving a few web sites, but if I chose to be malicious then I could easily put something in one that would seriously degrade the others. If each site had a separate server, then this would not be possible. Each site could be owned by a different person / organisation, and completely isolated from the others.
      • by Bert64 ( 520050 )
        Some customers don't like the idea of doing that...
        Also, Apache normally runs all the sites as the same user, which is terrible from a security perspective. There are alternatives here, but all have their downsides.
      • Re:Pointless test? (Score:4, Informative)

        by LurkerXXX ( 667952 ) on Thursday March 29, 2007 @09:39AM (#18527395)
        No it's not insane. Lots of customers want full root access on their systems so they can install whatever they want (different database or other servers, or even alternate OS's). Virtualization is the only way to go for that.
      • by jafiwam ( 310805 )
        True for "mom and pop" type and "I need a web site for my dog or clan" sites.

        Once you start in the market of "32 year old hotshot ASP/PERL program guy that never ran his own server" territory (medium to small businesses have these all over) who has no oversight from his management team... you tend to get people who accidentally kill the entire OS by doing stupid shit in their web sites. "Let's just build an app that sends email every time a file gets viewed! Then I can build my log in Excel and see all my
      • Did you know that one installation of Apache can serve multiple web sites? IIS can do the same.
        indeed it can

        now add on the fact that you need mail for each domain stored in a different place (generally people want e-mail on their website's domain).

        now add on the fact that you have to be very careful to stop active content doing nasty things to other sites.

        then you have the issue that not everything about Apache can be controlled through .htaccess files

        it gets even more complex if you have multiple users per s
      • Re:Pointless test? (Score:4, Insightful)

        by Albanach ( 527650 ) on Thursday March 29, 2007 @09:42AM (#18527433) Homepage
        It's not insane if people want different solutions or even want their own server. With virtualisation, a host can offer multiple php versions. You can avoid all the security problems where one script running as the webserver can read any other file accessible to the web server.

        You can also get better management control of resources, preventing one site from eating up all available resources on the box.

        That's not to say there aren't a million good reasons to use virtual servers in apache, just to point out that virtualising web hosts is not, by definition, a daft idea.
      • by afidel ( 530433 )
        Except you can offer better security and more flexibility with virtual servers than traditional shared hosting. For a site with no DB or custom native code then shared instances is fine, for anything requiring code running on the server virtual servers are better. The cool thing about ESX is if your servers are all running the same OS and version of their apps the static memory contents all get laid onto the same memory pages, meaning that increased ram usage for 20 servers vs 10 is very low.
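For reference, the shared-hosting alternative being debated here is just name-based virtual hosting, which in Apache 2.x of this era needs nothing more than a couple of VirtualHost blocks (hostnames and paths below are invented for illustration):

```apache
# Minimal name-based virtual hosting on one Apache instance (Apache 2.2 era)
NameVirtualHost *:80

<VirtualHost *:80>
    ServerName www.example-one.com
    DocumentRoot /var/www/example-one
</VirtualHost>

<VirtualHost *:80>
    ServerName www.example-two.com
    DocumentRoot /var/www/example-two
</VirtualHost>
```

Both sites run as the same Apache user, which is exactly the isolation problem the pro-virtualization posters are pointing at.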
    • by jimicus ( 737525 )
      True in some businesses.

      But if you're speccing up a web application that you can be fairly certain will be used by hundreds of people simultaneously, then it's useful to know.

      Of course, if you're speccing up the system that this web app is going to run under and you don't test performance before you go live, you'll come unstuck sooner or later anyhow.
    • Re:Pointless test? (Score:4, Informative)

      by Just Some Guy ( 3352 ) <kirk+slashdot@strauser.com> on Thursday March 29, 2007 @11:03AM (#18528473) Homepage Journal

      You run virtualised web servers because 99.9% of all web servers are idle at any given time. So you put 100 on a server.

      If you have a real need to run 100 separate Apache instances, then you'll want something much higher-level than VMWare. For us, that would be a FreeBSD jail, where each instance would get its own chrooted home directory and IP address. That way, you're not allocating resources to 100 little-used OS images; each shares from the same memory and hard drive pool. Jails are slightly limited in that I'd like a way to limit CPU and memory allocation, but in practical application this really works very well today.
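To make that concrete, a per-site jail on a FreeBSD box of this vintage can be declared entirely in rc.conf (names, paths, and the IP below are made up for the example):

```sh
# /etc/rc.conf fragment: one lightweight jail per hosted site
jail_enable="YES"
jail_list="site1"

jail_site1_rootdir="/jails/site1"
jail_site1_hostname="site1.example.net"
jail_site1_ip="192.168.0.10"
jail_site1_devfs_enable="YES"
```

Each jail gets its own filesystem tree and IP but shares the host kernel, memory, and disk pool, which is why the overhead is so much lower than a full VM image per customer.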

  • This smells like a hidden advertisement for "Web Performance Inc.". Now somebody please tell me why I should trust the results produced by a relatively unknown product and company, and not stick to proven tools like Borland SilkPerformer [borland.com] or Mercury Loadrunner [mercury.com].
    • by Monoman ( 8745 )
      That and maybe a little bit of badmouthing the free VMWare Server product to drive people to use the MS Virtual Server?

      All in all, nothing great to read unless you are new to virtualization and want some of the basics explained to you.

      Virtualization is like any other tool: it isn't always the answer, but when appropriate it is a good thing.
  • by duncanFrance ( 140184 ) on Thursday March 29, 2007 @08:56AM (#18526989)
    There's quite a lot wrong with their setup.

    1) As others have pointed out, they should be running on ESX to get best performance.
    2) Physical machine was a dual-proc. How many processors did they assign to the VM?
    3) Physical machine had 2GB memory. They assigned 2GB to the VM!! VMware will take 256MB of this
    for itself, so that 2GB visible to Windows will end up being swapped.
    4) How many disks did the physical machine have, and what was on them?
    If e.g. the physical machine had two disks, the VM should have been given two disk files, with each file being placed on a different physical spindle.

    You get the picture.
    • by suv4x4 ( 956391 )
      There's quite a lot wrong with their setup.

      1) As others have pointed out, they should be running on ESX to get best performance.
      2) Physical machine was a dual-proc. How many processors did they assign to the VM?
      3) Physical machine had 2GB memory. They assigned 2GB to the VM!! Vmware will take 256MB of this
      for itself, so that 2GB visible to Windows will be being swapped.
      4) How many disks did the physical machine have, and what was on them?
      If e.g. the physical machine had two disks, the VM should have been gi
    • by j-cloth ( 862412 )
      2) Physical machine was a dual-proc. How many processors did they assign to the VM?
      Great points, but for this one I must answer the rhetorical question for clarity. The answer should be 1. VM Guests will perform very poorly when given 2 vCPUs on a 2 CPU host because (simply) the guest will not be given access to any CPU until both are available. This is something VMWare says all over the place but it's tough to convince sysadmins who are coming from the physical world that more CPU != faster.
  • "Duh!" moment (Score:5, Insightful)

    by Thumper_SVX ( 239525 ) on Thursday March 29, 2007 @09:21AM (#18527215) Homepage
    I agree with many of the commentators here that this is pretty obvious. We use virtualization a lot, but also realize its limitations. For example, we don't run SQL or anything heavily transaction or I/O bound. CPU utilization is usually not a problem; virtual machines perform as well as their physical counterparts in most instances unless you have a lot of CPU intensive virtual machines running.

    Web servers are mostly memory and CPU bound which would give one the impression that they would be great candidates for virtualization. However, VMWare Server is not the solution; network I/O is not good on Server. Typically your results would be maybe 75% of the actual physical speed on a "passthrough", less on a NAT. It depends a lot on how your network is set up, not to mention the abilities of the physical machine.

    The best solution is Virtual Infrastructure (used to be ESX). That product tackles most of the failings of VMWare server and fixes them. The only exception is that I still wouldn't run anything I/O heavy on VI. SQL's a no-no. Also, if you're not getting the performance from a single web server that you expect, you can easily throw up more web servers. Now, obviously you might get into M$ licensing issues, but that's why you run your web services on Apache :D
  • Of course VMware and Xen are going to be slow - that is the tradeoff you get when you want the ability to run both Windows and Linux at the same time.

    http://openvz.org/ [openvz.org] - it does a much better job of virtualizing IMO. The only minus is that all VMs have to use the same kernel version.

  • by dvdan ( 1081487 ) on Thursday March 29, 2007 @09:36AM (#18527353)
    For speed, the newer virtualization tools KVM, Xen, and Virtuozzo are presently substantially ahead of the present incarnation of VMWare. KVM requires the new hardware-virtualization CPUs from Intel and AMD, which must be mentioned here, since they represent a major industry recognition of the value of virtualization. This article seems to be giving people the impression that the performance of VMWare Server is indicative of virtualization tools in general, and that all virtualization tools slow down hosted virtual machines dramatically. This is simply false. I know hosting providers running 50 virtual servers on a single dual-CPU box with thousands and thousands of users, which would simply not work if all virtualization tools had a 43% hit per instance. Another key matter here is that the author fails to mention (or realize?) that VMWare Server is crippleware. VMWare states explicitly not to use VMWare Server for anything other than testing because it does not have the performance or feature set of their full-blown ESX and GSX servers. Also, while VMWare may be the oldest and arguably most mature virtualization suite, it is certainly not the fastest.
    • I would beg to differ.
      We've done testing with many tools, and VMware ESX is the fastest true virtualization suite that we've tested.
      First off, Virtuozzo isn't a real Virtual Machine hypervisor at all, it's a way to jail applications in a Windows environment so they don't interfere with each other; it doesn't create all-out VMs like the others.
      Second, your view on "hardware virtualization" assist is flawed - Intel VT and AMD-V (which are the two virtual assist features out there) both simply make it *easier*
  • I think it was pretty obvious that it would add overhead and therefore drop performance. This study is good for quantifying how much of a performance hit it is. And 43% is not an incredibly bad value in my opinion, it is not even 50% ...
  • by Sangui5 ( 12317 ) on Thursday March 29, 2007 @09:41AM (#18527413)
    It isn't surprising that VMWare would be bad at a web-app workload. See the original paper on Xen:

    http://www.cl.cam.ac.uk/research/srg/netos/papers/2003-xensosp.pdf [cam.ac.uk]

    Top of page 9 has a chart comparing native Linux, Xen, VMWare, and UML for different workloads. They show VMWare degrading performance by over 70% for SPECWEB 99.

    Web applications are OS intensive; while VMWare is quite good at pure CPU-bound tasks, it has to perform a lot of emulation whenever you are running inside the OS. So it will stink at anything with lots of small IO, lots of metadata operations, or lots of process creation/switching. For example, VMWare shows a whopping 90% slowdown for OLTP database workloads, according to the Xen paper, and it really isn't surprising. The OS microbenchmarks in the above paper (page 10) show that VMWare has abysmal performance for things like fork(), exec(), mmap(), page faults, and context switches.

    Basically, Xen doesn't have to emulate the OS, because they make modifications to the OS. VMWare does dynamic binary rewriting (think fancy emulation) to run an unmodified OS; they therefore pay through the nose in performance overhead for OS-intensive workloads.
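The fork() overhead those microbenchmarks measure is easy to get a feel for on your own hardware. A rough sketch (my own, not from the Xen paper; absolute numbers will vary wildly between native, VMWare, and Xen, which is exactly the point):

```python
# Rough microbenchmark: fork() cost vs. a cheap syscall (getpid).
# Under a binary-rewriting VMM like VMWare, fork latency inflates far
# more than plain CPU-bound work does, per the Xen paper's Table of
# OS microbenchmarks.
import os
import time

def time_per_call(fn, n=200):
    """Average wall-clock seconds per call of fn()."""
    start = time.time()
    for _ in range(n):
        fn()
    return (time.time() - start) / n

def fork_once():
    pid = os.fork()
    if pid == 0:
        os._exit(0)       # child exits immediately
    os.waitpid(pid, 0)    # parent reaps the child

if __name__ == "__main__":
    print("getpid: %.1f us" % (time_per_call(os.getpid) * 1e6))
    print("fork:   %.1f us" % (time_per_call(fork_once) * 1e6))
```

Run it natively and again inside the guest; the ratio between the two environments for fork() will be far worse than the ratio for getpid().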
  • Since a virtual server only gets 57% of the performance of a physical server, just run TWO virtual servers!

    That way, you'll get 114% throughput!

    Next week, I solve world hunger, global warming, and bring peace to the Middle East.
  • 1. They fucked up their setup. They assigned 2Gigs to VM and all host has is 2Gigs too? Brilliant.
    2. Since when are you allowed to post benchmarks of MS software?
  • I thought the "Does hyperthreading help in the real world" issue was related to the way Windows does task switching.

    I am under the impression that hyperthreading helps Windows more than Linux because Windows fails to save certain register states, and thus incurs a higher cost, in terms of performance, when it task switches.

    His test uses Windows VMs on top of Linux. Thus I could see why it could help in his situation, but believe that the generalization about hyperthreading is misleading.

    Can anyone clar
  • This is obviously one data point, and as others have mentioned, not even the best point for deployment.

    But there's a whole raft of virtualization solutions available, and that's just in the Linux kernel, not to mention the Windows solutions. It would be fun/interesting to see an updated comparison of the various solutions.

    Then for the real benchmark point, it would be good to see what IBM does with the Big Iron virtualization. Intel and AMD are finally adding hardware support, and it sounds like Intel is im
  • VMWare uses one kind of virtualization, where it intercepts system calls (by loading a module into the kernel that traps faults to privileged instruction execution) and emulates them. When doing heavy amounts of system calls, like using TCP/IP sockets or opening files or reading files that are already open or writing to the UNIX socket for a database (which reads and writes to disk a ton) or whatever, the system slows down massively. Paravirtualizing Hypervisors like Xen rely on the guest OS having its co
  • Hyperthreading? Good lord, this is running on ancient hardware. This should be deployed on something that can use either AMD-V or Intel VT. VMWare has hyperthreading support and can show some improvement with "snappiness" but it doesn't seem to help in general throughput over the long run and may contribute to instability.

    Usually, the limiter in this type of setup would be IO. When one virtualizes such a setup, you must reconfigure your application to minimize disk IO (with web servers we cache like crazy a
  • by sheldon ( 2322 ) on Thursday March 29, 2007 @11:57AM (#18529311)
    They state in the test that the servers are dual proc servers.

    VMWare Server, the free edition, only emulates a single-processor environment for your virtualized host.

    VMWare ESX or whatever they are calling the expensive thing today, has the ability to give your virtualized host multiple processors.

    So it's not surprising that it could only handle half the load, it only had half the processors.

    We don't do virtualization for heavy use environments. We do it because different business groups don't want to share servers... that is, they can't agree on maintenance windows, etc.
  • Hmmm, virtualization 101. Don't virtualize anything that requires high I/O if you do not have the hardware to do it.
    First of all I built an ESX server farm for high I/O apps. Feel free to search for my name at vmware. I used to work
    at Welch Foods until the new CIO mentioned the two words "Outsource IT". I left before a decision was made so I could
    "stick it to the man". What the heck, the IT Director left anyway. Anyway we created an esx server farm with 8 Dell
    6650s, and we had 2 Dell 6680 prototypes to evaluat
    • Better to say "Don't virtualize stuff with high I/O unless your virtualization solution paravirtualizes the I/O".

      As others have posted, VMWare Server sucks for high-I/O situations, while other solutions (Xen using paravirtualization instead of HW virtualization, and to some degree even Xen with HWV) perform MUCH better in high-I/O situations.

      At least a few tips regarding disk I/O:
      Paravirtualize. Easy with Xen + Linux hosts/guests
      When paravirtualizing, don't use file-backed disk images (e.g. /mnt/virtimages
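For a Xen paravirtualized guest, the difference comes down to the disk line in the domU config; something along these lines (volume names and paths are invented for the example):

```sh
# domU config: physical (LVM) backends, one spindle per virtual disk
disk = ['phy:/dev/vg0/guest-root,xvda,w',
        'phy:/dev/vg1/guest-data,xvdb,w']

# versus the slower file-backed image the parent warns about:
# disk = ['file:/mnt/virtimages/guest.img,xvda,w']
```

With phy: backends the guest's block requests go almost straight to the device instead of through a loopback file on the host's filesystem.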
  • If I read this right, it's an unoptimized install on a barely server-grade system using a dev package rather than a production package. I would *expect* it to run less than stellarly.

    This translates to "brother-in-law IT is crap."

    Heck, it might be better to say "VMware Server will limit performance degradation to 43% when used in a poorly thought out implementation."
  • As long as CPU's continue to get faster and cheaper, and caches larger, this is hardly a gloom & doom end of the world. Most systems don't tax their CPU's most of the time anyway, and for the next 3 years at least CPU's will continue to get faster and cheaper.
  • Because eventually, over time, a disaster will occur causing 100% downtime (or 0% throughput) on a monolithic system. Being able to restart the virtual machine quickly or even transparently on another piece of hardware without doing restores, hardware driver installations, or any number of other time delaying obstacles will increase the overall performance and total throughput of the application. In other words: A monolithic system can calculate pi to 10,000 places faster than a Virtual Machine. But they
  • The purpose of VM is to consolidate servers. The reason you'd want to consolidate is that you find you are running 4 physical boxes and each one is running at about 1/10 of full capacity. So you say "cool, I can lose three boxes and cut management and electrical power by 3/4." On the other hand if you find you need more performance you buy more boxes and "load balance" them

    Also everyone knows you lose some performance running VM but if you need to run OS "A" guest or OS "B" host this is the only way. If
  • Web Performance Inc. later released the following reports:
    • A faster connection to the Internet may lead to better response time from Internet servers.
    • Static pages can lead to better server performance than dynamically generated content
    • Powering down servers can save electricity, but leads to up to 100% decreased performance
    • Michael Jackson may have taken plastic surgery
