
When VMware Performance Fails, Try BSD Jails

kdawson posted more than 5 years ago | from the like-a-virtual-machine-without-the-machine-part dept.

Operating Systems 361

Siker writes in to tell us about the experience of email transfer service YippieMove, which ditched VMware and switched to FreeBSD jails. "We doubled the amount of memory per server, we quadrupled SQLite's internal buffers, we turned off SQLite auto-vacuuming, we turned off synchronization, we added more database indexes. We were confused. Certainly we had expected a performance difference between running our software in a VM compared to running on the metal, but that it could be as much as 10X was a wake-up call."



This is Ironic, right? (0, Troll)

Fahrvergnuugen (700293) | more than 5 years ago | (#28176675)

Oh the irony

Safari can't open the page "http://www.playingwithwire.com/2009/06/virtual-failure-yippiemove-switches-from-vmware-to-freebsd-jails/" because the server where this page is located isn't responding.

Re:This is Ironic, right? (4, Informative)

mvip (1060000) | more than 5 years ago | (#28176753)

We're working on it. The irony is that this is the only server that is still running as a VM (because it is a hosted VPS).

excellent sales story (5, Interesting)

OrangeTide (124937) | more than 5 years ago | (#28176677)

Virtualization is an excellent story to sell. It is a process that can be applied to a wide range of problems.

When applied to a problem it seems to create more performance issues than it solves, but it can make managing lots of services easier. I think that's the primary goal of these VMware-like products.

Things like Xen take a different approach and seem to have better performance for I/O intensive applications. But a Xen hypervisor VM is in some ways more similar to a BSD jail than it is to VMware's monitor.

VMware is more like how the mainframe world has been slicing up mainframes into little bits to provide highly isolated applications for various services. VMware has not caught up to the capabilities and scalability of what IBM has been offering for decades, even though the raw CPU performance of a PC beats a mid-range mainframe at 1% of the cost (or less). But scalability and performance are two separate things, even though we would like both.


Re:excellent sales story (2, Insightful)

jgtg32a (1173373) | more than 5 years ago | (#28177149)

Your sig isn't logically sound: just because the Jews win when you lose doesn't mean you win when the Jews lose.
Just something I thought I'd point out.

On another side note, what happened to the fine art of trolling? People these days just throw a bunch of racial slurs together and think they're all that. In my day it took a certain finesse to troll properly: you had to be well informed on the issue, speak truths about it, and then interpret those truths in a way that would set people off.

oh, well
get off of my lawn

Re:excellent sales story (1, Interesting)

MichaelSmith (789609) | more than 5 years ago | (#28177237)

I really should have kept a copy of those "don't feed the trolls" ascii art pictures people used to post on usenet. It would have come in handy here.

Re:excellent sales story (5, Funny)

masshuu (1260516) | more than 5 years ago | (#28177331)

this?
+----------+
|  PLEASE  |
|  DO NOT  |
| FEED THE |
|  TROLLS  |
+----------+
    |  |
    |  |
  .\|.||/..

Re:excellent sales story (1)

MichaelSmith (789609) | more than 5 years ago | (#28177419)

That would do.

free beats fee most of the time (4, Interesting)

xzvf (924443) | more than 5 years ago | (#28176799)

This is slightly off the server virtualization topic, but I had a similar experience with LTSP and some costly competitors. Using LTSP we were able to put up 5X the number of stable Linux desktops on the same hardware. I'd tell every organization out there to do a pilot bake-off as often as possible. It won't happen every time, but I suspect that more often than not, the free open solution, properly set up, will beat the slickly marketed, closed proprietary solution.

Re:excellent sales story (4, Informative)

gfody (514448) | more than 5 years ago | (#28176905)

Most of the performance issues (and I think also the issue faced in TFA) have to do with I/O performance when using virtual hard drives, especially of the sparse-file, auto-growing variety. If they configured their VMs to have direct access to a dedicated volume they would probably get their 10x performance back in DB applications.

It would be nice to see some sort of virtual SAN integrated into the VMs

Re:excellent sales story (2, Interesting)

ckaminski (82854) | more than 5 years ago | (#28177063)

What are you talking about? ESX has supported REAL SANs since almost day one. I've been able to do GREAT things on a single VMware server; in one instance I managed 25 2GB J2EE app VMs on a quad-core Xeon (2005 era). In another I managed 168 sparsely used testing VMs (2x quad core). But I've ALWAYS had trouble with databases and Citrix, in particular, with VMware.

Storage is only part of the issue. Having to run 10-160 schedulers *IS* the issue. VMware doesn't exploit efficiencies in this arena the way Xen, jails, OpenVZ or Solaris Containers can.

Re:excellent sales story (1)

gfody (514448) | more than 5 years ago | (#28177279)

If you have a real SAN, sure, IO performance is probably not your problem. If not, you might just use a sparse-file virtual hard disk and get incredibly bad IO performance. My experience is really only with VirtualBox, where the virtual disk is the only thing available in the UI and setting up direct disk access is advanced, text-based config (I'm not sure if it's like this with ESX). But I think it would be nice if, instead of that whole virtual hard disk crap, the VM host were also a SAN server for your VMs and you just always used SAN.
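For what it's worth, here's roughly what the raw-disk setup looks like from VirtualBox's command line (a sketch; the VM name "dbserver" and device /dev/sdb are made up, and option spellings vary by version):

    # Wrap a raw device in a tiny .vmdk descriptor the VM can use
    VBoxManage internalcommands createrawvmdk \
        -filename ~/rawdisk.vmdk -rawdisk /dev/sdb
    # Attach it to the VM in place of a sparse-file virtual disk
    VBoxManage modifyvm dbserver --hda ~/rawdisk.vmdk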

Re:excellent sales story (5, Informative)

mysidia (191772) | more than 5 years ago | (#28177251)

Totally unnecessary. If you want a 'virtual SAN', you can of course create one using various techniques. The author's biggest problem is he's running VMware Server 1, probably on top of Windows, and then tried VMware Server 1 on top of Ubuntu.

Running one OS on top of another full-blown OS, with several layers of filesystem virtualization, it's no wonder it's slow; a hypervisor like ESX would be more appropriate.

VMware Server is great for small-scale implementation and testing. VMware Server is NOT suitable for mid to large-scale production-grade consolidation loads.

ESX or ESXi is VMware's solution for such loads. And by the way, a free standalone license for ESXi is available, just like a free license is available for running standalone VMware server.

And the I/O performance is near-native. With ESX 4, on platforms that support I/O virtualization (VT-d/IOMMU), the virtualization is in fact hardware-assisted.

The VMware environment should be designed and configured by someone who is familiar with the technology. A simple configuration error can totally screw your performance. In VMware Server, you really need to disable memory overcommit and shut off page trimming, or you'll be sorry -- and there are definitely other aspects of VMware server that make it not suitable at all (at least by default) for anything large scale.

It's more than "how much memory and CPU" you have. Other considerations also matter, many of them are the same considerations for all server workloads... e.g. how many drive spindles do you have at what access latency, what's your total IOPs?

In my humble opinion, someone who would want to put a production load on VMware Server (instead of ESX) is not suitably briefed on the technology, doesn't understand how piss-poor VMware Server's I/O performance is compared to ESXi, or just didn't bother to read all the documentation and other materials freely available.

Virtualization isn't a magic pill that lets you avoid properly understanding the technology you're deploying, make bad decisions, and still always get good results.

You can get FreeBSD jails up and running, but you basically need to be skilled at FreeBSD and understand how to properly deploy that OS in order to do it.

Otherwise, your jails might not work correctly, and someone else could conclude that FreeBSD jails suck and stick with OpenVZ VPSes or Solaris logical domains.

Re:excellent sales story (4, Informative)

Feyr (449684) | more than 5 years ago | (#28177317)

seconded. last time i tried, vmware server couldn't handle a single instance of a lightly loaded db server. moving to esx we're running 6 VMs on that same hardware and the initial server has near-native performance.

in short: use the right tool for the right job, or you have no right to complain

Re:excellent sales story (2, Interesting)

Thundersnatch (671481) | more than 5 years ago | (#28177283)

It would be nice to see some sort of virtual SAN integrated into the VMs

Something like this [hp.com] you mean? Turns the local storage on any VMware host into part of a full-featured, clustered, iSCSI SAN. Not cheap though (about $2500 per TB)

Re:excellent sales story (1, Interesting)

Anonymous Coward | more than 5 years ago | (#28177033)

Virtualization is an excellent story to sell. It is a process that can be applied to a wide range of problems.

Virtualization makes many things a lot easier. Testing, rollback, provisioning, portability & backup.

The success of virtualization is due to failures of the software industry to have good separation between applications & operating systems. The one-application-per-server trend is the result, which leads to a lot of idle capacity.

Re:excellent sales story (4, Informative)

Eil (82413) | more than 5 years ago | (#28177047)

But a Xen hypervisor VM is in some ways more similar to a BSD jail than it is to VMware's monitor.

Actually, Xen is not at all similar to a BSD jail, no matter how you look at it. Xen does full OS virtualization from the kernel and drivers on down to userland. A FreeBSD jail is basically chroot on steroids. The "virtualized" processes run exactly the same as "native" ones; they just have some restrictions on their system calls, that's all.

I guess the thing that bugged me the most about TFA was the fact that they were using VMWare Server and actually expecting to get decent performance out of it. Somebody should have gotten fired for that. VMWare Server is great for a number of things, but performance certainly isn't one of them. If they wanted to go with VMWare, they should have shelled out for ESX in the beginning instead of continually trying to go the cheap route.

Re:excellent sales story (1)

QuoteMstr (55051) | more than 5 years ago | (#28177123)

First of all, you have to admit that the product line names are confusing. You'd expect a product with the word "server" in its title to be useful for, well, servers. Second, even ESX is still less efficient than just using a kernel to isolate different processes. That's what it's there for, after all.

Re:excellent sales story (4, Insightful)

syousef (465911) | more than 5 years ago | (#28177225)

Virtualization is an excellent story to sell. It is a process that can be applied to a wide range of problems.

Screwdrivers are an excellent tool. However, if you're in a position to buy tools for your company, you should know enough to show me the door if I try to sell you a screwdriver to shovel dirt.

Right tool. Right job.

In any industry:

Poor management + slick marketing = Disaster

Interesting (2, Funny)

kspn78 (1116833) | more than 5 years ago | (#28176681)

I wonder if this would help me, I am running 2 VMWare servers on an older box and it is a little lethargic at the moment. If I could ever get to the story I might be able to find out :|

Re:Interesting (1)

symbolset (646467) | more than 5 years ago | (#28176857)

If by "Older box" you mean more than 3 months old, it's time to upgrade :)

Re:Interesting (1)

machine321 (458769) | more than 5 years ago | (#28177297)

Don't use an older box, get a newer box with a CPU that does virtualization. That makes all the difference.

When it fails? When did VMWare perf. NOT fail? (-1, Flamebait)

Anonymous Coward | more than 5 years ago | (#28176685)

Let me know when its fat price tag finally results in anything on the performance side that beats VBox or Xen.

Back to the Future? (5, Informative)

guruevi (827432) | more than 5 years ago | (#28176689)

So we go back to where we started from: chroot and jails. What really is the benefit of extended virtualization? I haven't "embraced" it as I am supposed to do.

I can see where it makes sense if you want to merge several servers that do absolutely nothing all day into a single machine, but a decent migration plan will run all those services on a single 'non-virtual' server. Especially when those machines are getting loaded, the benefits of virtualization quickly break down and you'll have to pay for more capacity anyway.

As far as high availability goes: again, low cost HA doesn't work that well. I guess it's beneficial to management types that count the costs of, but don't see the benefit in, leaving a few idle machines running.

Then you have virtualized your whole rack of servers into a quarter-rack single blade solution and a SAN that costs about the same as just a rack of single servers, but you can't fill the rack because the density is too high. And like something that recently happened at my place: the redundant SAN system stops communicating with the blades because of a driver issue and the whole thing comes crashing down.

Re:Back to the Future? (1)

hpavc (129350) | more than 5 years ago | (#28176877)

Comparing ESX and Zones? Seems like a horridly thought-out comparison.

Re:Back to the Future? (1, Insightful)

Anonymous Coward | more than 5 years ago | (#28176965)

Virtualization, as far as it is applied to server software, is a kludge. It is a way of making software work on one machine which would otherwise have conflicting OS and security requirements. The virtualization layer provides the abstraction and isolation which should be provided by the OS but isn't. The reason is API complexity: Virtualization deals with a relatively small and low level API. The operating system has a much broader API with much more complex dependencies, so it is much harder to secure and test for incompatibilities.

It should not surprise anyone that removing redundancies (to save money) likely also decreases fault tolerance. On the other hand, it is often beneficial to remove "accidental" redundancy and add redundancy back in with a plan.

Re:Back to the Future? (5, Insightful)

ckaminski (82854) | more than 5 years ago | (#28177079)

Consolidate several lightly used, different services onto ONE server? Have you ever managed multiple applications in a heterogeneous environment? Consolidating applications causes operational complexity that is inappropriate in a lot of instances. While service isolation is easy on Unix platforms, it's not on Windows.

Re:Back to the Future? (5, Interesting)

wrench turner (725017) | more than 5 years ago | (#28177243)

Running multiple services on one OS means that when you must reboot a server because of an OS bug or misconfiguration, all of the services are brought down... same if it crashes or hangs. As compelling as that is, I've never used a hypervisor in 30 years on tens of thousands of servers.

I do routinely use chroot jails on thousands of servers to isolate the application from the host OS. This way I do not need to re-qualify any tools when we implement an OS patch.

Check it out: http://sourceforge.net/projects/vesta/ [sourceforge.net] :-)

Re:Back to the Future? (5, Interesting)

Mr. Flibble (12943) | more than 5 years ago | (#28177277)

So we go back to where we started from: chroot and jails. What really is the benefit of extended virtualization? I haven't "embraced" it as I am supposed to do.

I can see where it makes sense if you want to merge several servers that do absolutely nothing all day into a single machine, but a decent migration plan will run all those services on a single 'non-virtual' server. Especially when those machines are getting loaded, the benefits of virtualization quickly break down and you'll have to pay for more capacity anyway.

This is exactly what VMware lists as best practice for using virtualization. If a server is maxing out, it should not be virtualized, as it is not a good candidate. However, if you have a number of servers that are under-utilized, then the advantage of turning them into VMs becomes clear. VMware has a neat feature called Transparent Page Sharing, where VMs using the same sections of memory with the same bitmaps across the same images are all condensed down into the same single pages of memory on the ESX server. This means that your 10 (or more) Windows 2003 Server images "share" the same section of RAM, which frees up the "duplicate" RAM across those images. I have seen 20% of RAM saved by this; IIRC it can go above 40%.

As far as high availability goes: again, low cost HA doesn't work that well. I guess it's beneficial to management types that count the costs of, but don't see the benefit in, leaving a few idle machines running.

If you mean VMware HA, I find it works quite well, granted the new version in vSphere (aka Virtual Center 4) is much better as it supports full redundancy.

Then you have virtualized your whole rack of servers into a quarter-rack single blade solution and a SAN that costs about the same as just a rack of single servers, but you can't fill the rack because the density is too high. And like something that recently happened at my place: the redundant SAN system stops communicating with the blades because of a driver issue and the whole thing comes crashing down.

You are assuming that people don't have this already. I have been to a number of data centers that have racks and racks of under-utilized machines that also have SAN storage. VMware consolidation is a way of consolidating the hardware you already have to run your ESX hosts. You use a program called VMware Converter to do P2V (Physical to Virtual) conversion of the real hardware machines to VMs, then you reclaim that hardware and install ESX on it, freeing up more resources. You don't always have to run out and buy new hardware!

VMs are great when the hardware is under-utilized, I do not recommend VMs that max out, and neither does VMware.

Re:Back to the Future? (5, Insightful)

gdtau (1053676) | more than 5 years ago | (#28177431)

"What really is the benefit of extended virtualization?

1) The ability to deploy a system image without deploying physical hardware. All those platforms you are meant to have, but don't: a build machine, an acceptance test machine, a pre-production test machine. And if you've done all the development and testing on a VM, then changing the machine from a VM to real hardware when it moves to production doesn't seem worth the risk.

2) IT as a territorial dispute. You are the IT Director for a large enterprise. You want everything in good facilities, especially after the last time a cleaner unplugged the server that generates customer quotes, bringing revenue to a screaming halt. The owner of the quotes server will barely come at that. They certainly won't hand over sysadmin control. Their sysadmins like whitebox machines (the sysadmin's brother assembles them), but you'll never have parts on the shelf for those if they break. So get them to hand over a VM image, which you run on hardware of your choice, and which you can back up and restore for them.

3) Single hardware image. No more getting a "revised" model server and finding that the driver your OS needs isn't available yet (or better still, won't ever be available for that OS, since the manufacturer really only supports new hardware in their forthcoming releases). And yeah, the server manufacturer has none of the previous model in stock.

And of course there's minor stuff. Like being able to pull up a shiny clean enterprise image to replicate faults.

You'll notice the lack of the phrase "silver bullet" above. Because virtualisation isn't one. But it does have a useful role, so the naysayers aren't right either.

I'm waiting for the realisation that merely combining images onto one physical machine does not do much to lower costs. For a directly-administered Windows OS the sysadmin's time was costing you more than the hardware. Now that the hardware is gone, can you really justify maybe $50k pa / 5 = $10k pa per image for sysadmin overhead? This is particularly a problem for point (2) above, as they are exactly the people likely to resist the rigorous automation needed to get sysadmin-per-image overhead to an acceptable point (the best practice point is about $100 per image -- the marginal cost of centrally-administered Linux servers. You'll notice that's some hundreds of times less than worst-practice sysadmin overhead).

I'll also be a bit controversial and note that many sysadmins aren't doing themselves any favours here. How often do you read on Slashdot of time-consuming activities undertaken just to get a 5% improvement? If that 5% less runtime costs you 5% more sysadmin time then you've already increased costs by a factor of ten.

UML FTW! (1)

morgan_greywolf (835522) | more than 5 years ago | (#28176693)

Or there's always User-Mode Linux [sourceforge.net] .

Re:UML FTW! (1)

Just Some Guy (3352) | more than 5 years ago | (#28176791)

Yeah. Now imagine that virtualized processes run exactly as fast as "native" processes. Starting to sound pretty good?

Re:UML FTW! (3, Informative)

solafide (845228) | more than 5 years ago | (#28177067)

UML is possibly the worst-maintained part of the Linux kernel. Don't try building it in any recent kernel. It won't compile.

-1, Flamebait (0)

Anonymous Coward | more than 5 years ago | (#28176709)

TFA: "Error establishing a database connection"

So much for that. Also, am I correct in assuming BSD's jail is the equivalent of Linux's chroot? Is this another case of "Didn't know I should have been limiting processes instead of virtualizing another OS for a single process" stories? I mean... isn't that, well, obvious?

Re:-1, Flamebait (4, Informative)

eosp (885380) | more than 5 years ago | (#28176747)

Well, the BSDs all have chroot as well. However, jails have their own sets of users (you can have root in one jail but not in the system at large), and the kernel keeps more separation between the data structures of jails (and the host system) than chroot does. In addition, inside a jail ps(1) shows only in-jail processes, network configuration changes are impossible, and kernel modifications (module loads and securelevel changes) are banned.
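To make that concrete, here's a minimal by-hand jail on FreeBSD 7.x (a sketch; the path, hostname and IP are made up):

    # Start a shell inside a jail rooted at a pre-populated tree
    jail /jails/www www.example.com 192.168.0.5 /bin/sh
    # From the host: list running jails, then run a command inside one by jail ID
    jls
    jexec 1 ps aux    # shows only that jail's processes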

Re:-1, Flamebait (1)

jelle (14827) | more than 5 years ago | (#28177015)

Linux has that with: http://linux-vserver.org/ [linux-vserver.org]

Both linux vservers and bsd jails have existed for plenty of years before vmware, xen, virtualbox, etc.

And, on that subject, this http://en.wikipedia.org/wiki/LinuxPMI [wikipedia.org] is based on 'mosix', which made your cluster of linux boxes appear as one single massive machine, with transparent process migration and all that.

There are lots of virtualization and clustering options out there.

Re:-1, Flamebait (0)

larry bagina (561269) | more than 5 years ago | (#28176923)

Linux's chroot is actually BSD's chroot. Bill Joy invented it.

Re:-1, Flamebait (1)

Guy Harris (3803) | more than 5 years ago | (#28177287)

Linux's chroot is actually BSD's chroot. Bill Joy invented it.

BSD's chroot is actually V7's chroot. Ken, Dennis, and company invented it.

As for the database... (1)

orngjce223 (1505655) | more than 5 years ago | (#28176949)

I think they got /.'d.

XenServer worked for us (4, Interesting)

gbr (31010) | more than 5 years ago | (#28176731)

We had performance issues with VMWare Server as well, especially in the disk I/O area. Converting to XenServer from Citrix solved the issues for us. We have great speed, can virtualize other OS's, and management is significantly better.

XenServer from Citrix -- eewww (4, Interesting)

xzvf (924443) | more than 5 years ago | (#28176891)

XenServer is a great product and has many skilled developers. The "from Citrix" really gives me a queasy feeling. I know the products are solid and innovative, but so many people I hear out in the wild scream and run from Citrix. It might be part of the reason Ubuntu and Red Hat are backing KVM for virtualization, even to the point where RH bought Qumranet (the KVM "owners").

Re:XenServer from Citrix -- eewww (1)

gbr (31010) | more than 5 years ago | (#28176941)

I have to agree, the 'from Citrix' makes me queasy for a couple of reasons.

1. I've had issues with Citrix products in the past
2. Xen is the work of many people, not just Citrix.

Issue 2 compensated for issue 1, and it was further assuaged by the performance of the VMs. Very nice. It was also nice that Citrix made XenServer free just as we were about to write a check.

Re:XenServer worked for us (0)

Anonymous Coward | more than 5 years ago | (#28176969)

Shouldn't you be comparing XenServer to ESX instead of VMware's free hosted virtualization product? I don't see how the comparison here is fair. It's like saying Mercedes' Smart Car is too slow so you went to a BMW M3.

Re:XenServer worked for us (5, Informative)

00dave99 (631424) | more than 5 years ago | (#28176997)

XenServer has some good features, but you really can't compare VMware Server with XenServer. I have many customers that were impressed to be able to run 4 or 5 VMs on VMware Server. Once we got them moved to ESX on the same hardware they couldn't believe that they were running 20 to 25 VMs on the same hardware. That being said, back-end disk configuration is the most important design consideration on any virtualization product.

Re:XenServer worked for us (5, Interesting)

ckaminski (82854) | more than 5 years ago | (#28177097)

I broke VMware ESX's upper CPU limit of 168 vCPUs with 104 running VMs, about 20 of which were under any significant load. 24 GHz of CPU and 32 GB of memory. Pretty damn impressive, if you ask me.

Re:XenServer worked for us (2, Funny)

machine321 (458769) | more than 5 years ago | (#28177311)

management is significantly better.

That usually solves a lot of performance problems.

Sounds about right (5, Informative)

Just Some Guy (3352) | more than 5 years ago | (#28176755)

We use jails a lot at my work. We have a few pretty beefy "jail servers", and use FreeBSD's ezjail [erdgeist.org] port to manage as many instances as we need. Need a new spamfilter, say? sudo ezjail-admin create spam1.example.com 192.168.0.5 and wait for 3 seconds while it creates a brand new empty system. It uses FreeBSD's "nullfs" filesystem to mount a partially populated base system read-only, so your actual jail directory only contains the files that you'd install on top of a new system. This saves drive space, makes it trivially easy to upgrade the OS image on all jails at once (sudo ezjail-admin update -i), and saves RAM because each jail shares the same copy of all the base system's shared libraries.
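For anyone who wants to try it, the basic ezjail workflow looks something like this (a sketch; the hostname and IP are made up, and the rc script path may vary by version):

    # One-time setup: fetch and install the shared basejail
    sudo ezjail-admin install
    # Create a jail, start it, and get a root shell inside it
    sudo ezjail-admin create spam1.example.com 192.168.0.5
    sudo /usr/local/etc/rc.d/ezjail.sh start spam1.example.com
    sudo ezjail-admin console spam1.example.com
    # Later, upgrade the shared base system for every jail at once
    sudo ezjail-admin update -i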

For extra fun, park each jail on its own ZFS filesystem and take a snapshot of the whole system before doing major upgrades. Want to migrate a jail onto a different server? Use zfs send and zfs receive to move the jail directory onto the other machine and start it.
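The migration itself is just a snapshot piped over ssh (a sketch; the pool, filesystem and host names are made up):

    zfs snapshot tank/jails/spam1@migrate
    zfs send tank/jails/spam1@migrate | ssh newhost zfs receive tank/jails/spam1
    # stop the jail on this box, then start it on newhost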

The regular FreeBSD 7.2 jails already support multiple IP addresses and any combination of IPv4 and IPv6, and each jail can have its own routing table. FreeBSD 8-CURRENT jails also get their own firewall if I understand correctly. You could conceivably have each jail server host its own firewall server that protects and NATs all of the other images on that host. Imagine one machine running 20 services, all totally isolated and each running on an IP not routable outside of the machine itself - with no performance penalty.

Jails might not be the solution to every problem (you can't virtualize Windows this way, although quite a few Linux distros should run perfectly), but it's astoundingly good at the problems it does address. Now that I'm thoroughly spoiled, I'd never want to virtualize Unix any other way.

Re:Sounds about right (1)

drmofe (523606) | more than 5 years ago | (#28176925)

Someone want to compare and contrast FreeBSD jails with openbsd + sysjail?

Re:Sounds about right (2, Informative)

Just Some Guy (3352) | more than 5 years ago | (#28177207)

I'm not too up on sysjail, but it looks like it's implemented on top of systrace while jails are explicitly coded into the kernel. That probably made sysjail easier to write, but the FreeBSD work has paid off now that they're starting to virtualize the whole network stack so that each jail can have its own firewall and routing.

More to the point: the sysjail project is no longer maintained [sysjail.bsd.lv] .

Re:Sounds about right (3, Insightful)

larry bagina (561269) | more than 5 years ago | (#28177271)

sysjail is vulnerable to race conditions

Re:Sounds about right (0)

Anonymous Coward | more than 5 years ago | (#28177001)

Agree 100%. We are a BSD shop, and we have been enjoying jails for quite a while. Sandboxing, virtualizing, security advantages. It's great!

Re:Sounds about right (1)

d3matt (864260) | more than 5 years ago | (#28177169)

Just as a curiosity... Have you guys ever used jails for cross-compiles similar to scratchbox [scratchbox.org] ?

Re:Sounds about right (1)

Just Some Guy (3352) | more than 5 years ago | (#28177249)

I haven't. After looking at that, I'm not sure what you have in mind. Explain a bit and maybe I can help.

What's the diff between jail and zone? (2)

Vip (11172) | more than 5 years ago | (#28176797)

FTA, "Jails are a sort of lightweight virtualization technique available on the FreeBSD platform. They are like a chroot environment on steroids where not only the file system is isolated out but individual processes are confined to a virtual environment - like a virtual machine without the machine part."

Not knowing much about FreeBSD and its complementary software, what is the difference between a FreeBSD jail and Solaris Zones?
A Solaris Zone could also be described the same way.

Vip

One runs on Solaris, one runs on BSD (4, Interesting)

_merlin (160982) | more than 5 years ago | (#28176931)

FreeBSD Jails are the same thing as Solaris Zones, just on FreeBSD. Since FreeBSD is about evil daemons, they need an evil-sounding marketing name for it. More seriously, they probably just didn't want to bring on the wrath of lawyers for trademark infringement.

Re:One runs on Solaris, one runs on BSD (5, Informative)

jbellis (142590) | more than 5 years ago | (#28176983)

> they probably just didn't want to bring on the wrath of lawyers for trademark infringement.

FreeBSD jails predate Solaris zones by five years.

Not surprising (0)

Anonymous Coward | more than 5 years ago | (#28176817)

Virtual machines tend to be fast in theory, but slow in practice. Just look at Java.

Government IT is being poisoned by virtualization (5, Interesting)

kriston (7886) | more than 5 years ago | (#28176825)

The new buzzword of Virtualization has reached all corners of the US Government IT realm. Blinded by the marketing hype of "consolidation" and "power savings", agencies of the three-letter variety are falling over themselves awarding contracts to "virtualize" the infrastructure. Cross-domain security be damned, VMWare and Microsoft SoftGrid Hyper-V Softricity Whatevers will solve all their problems and help us go green at the very same time, for every application, in every environment, for no reason.

This is the recovery from the client-server binge-and-purge of the 1990s.

Here we go again.

We have no history (0, Redundant)

QuoteMstr (55051) | more than 5 years ago | (#28176979)

I hate to link to my own comment [slashdot.org] , but it seems particularly relevant here.

"Here we go again" indeed. Hell, I wasn't around for the first go-round and I recognize it when I see it.

Virtualization != Performance (4, Insightful)

gmuslera (3436) | more than 5 years ago | (#28176841)

If you really need all the performance you can get for a service, don't virtualize it, or at least check that what you can get is enough. Virtualization has a lot of advantages, but it doesn't give you the full resources of the real machine it's running on (how much you lose depends on the kind of virtualization you use, but it will never be everything). Maybe the 10x number is VMware's fault, or just a reasonable consequence of how it does virtualization (taking disk I/O performance into account could explain a good percentage of that number).

Solaris Zones also (4, Informative)

ltmon (729486) | more than 5 years ago | (#28176853)

Zones are the same concept, with the same benefit.

An added advantage Solaris zones have is flavoured zones: Make a Solaris 9 zone on a Solaris 10 host, a Linux zone on a Solaris 10 host and soon a Solaris 10 zone on an OpenSolaris host.

This has turned out to be much more stable, easy and efficient than our VMware servers, which we now keep only for Windows and other random OSes.

Re:Solaris Zones also (3, Informative)

Anonymous Coward | more than 5 years ago | (#28177299)

Zones are just the operating system partitioned, so it doesn't make sense to run Linux in a zone. You can, however, run a Linux branded zone, which emulates a Linux environment, but it's not the same as running Linux in a zone. It's running Linux apps on Solaris.

LDOMs are hardware virtualization, so you can run Linux in them. Only some servers are supported, though.

Just thought I'd better clarify.
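For illustration, creating a native zone versus a Linux branded zone looks something like this (a sketch; zone names and paths are made up):

    # Native Solaris 10 zone
    zonecfg -z web01 'create; set zonepath=/zones/web01'
    zoneadm -z web01 install
    zoneadm -z web01 boot
    zlogin web01
    # lx branded zone: same steps, but create from the lx template
    zonecfg -z lin01 'create -t SUNWlx; set zonepath=/zones/lin01'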

Is this a surprise? (3, Insightful)

diamondsw (685967) | more than 5 years ago | (#28176889)

Amazing! Not running several additional copies of an operating system with all of the needless overhead involved is faster! Who would have guessed?

Sometimes a virtual machine is far more "solution" than you need. If you really want the same OS with lots of separated services and resource management... then run a single copy of the OS and implement some resource management. Jails are just one example - I find Solaris Containers to be much more elegant. Of course, then you have to be running Solaris...

Re:Is this a surprise? (-1, Flamebait)

Anonymous Coward | more than 5 years ago | (#28176981)

Ah yes, cue all the idiot blow-hard, know-it-alls on Slashdot....

Re:Is this a surprise? (0, Redundant)

QuoteMstr (55051) | more than 5 years ago | (#28177003)

Or Linux containers [ibm.com] for that matter.

(Or for something more mature today, but implemented as a large out-of-tree patch, OpenVZ [openvz.org] )
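The OpenVZ workflow is similarly lightweight (a sketch; the container ID and template name are made up):

    vzctl create 101 --ostemplate centos-5-x86 --config vps.basic
    vzctl set 101 --ipadd 192.168.0.101 --hostname ct101 --save
    vzctl start 101
    vzctl exec 101 ps aux    # only the container's own processes are visible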

Different tools for different jobs (5, Interesting)

ErMaC (131019) | more than 5 years ago | (#28176913)

So I would love to RTFA to make sure about this, but their high-performance web servers running on FreeBSD jails are down, so I can't...

But here's what I do know. FreeBSD wasn't a supported OS on ESX Server until vSphere came out less than two weeks ago. That means that either:
A) They were running on the hosted VMware Server product, whose performance is NOT that impressive (it is a hosted virtualization product, not a true hypervisor)
or B) They were running the unsupported OS on ESX Server, which means there were no VMware Tools available. The drivers included in the Tools package vastly improve things like storage and network performance, so no wonder their performance stunk.

But moreover, Jails (and other OS-virtualization schemes) are different tools entirely - comparing them to VMware is an apples-to-oranges comparison. Parallels Virtuozzo would be a much more apt comparison.

OS-Virtualization has some performance advantages, for sure. But do you want to run Windows and Linux on the same physical server? Sorry, no luck there, you're virtualizing the OS, not virtual machines. Do you want some of the features like live migration, high availability, and now features like Fault Tolerance? Those don't exist yet. I'm sure they will one day, but today they don't, or at least not with the same level of support that VMware has (or Citrix, Oracle or MS).

If you're a company that's trying to do web hosting, or run lots of very very similar systems that do the same, performance-centric task, then yes! OS Virtualization is for you! If you're like 95% of datacenters out there that have mixed workloads, mixed OS versions, and require deep features that are provided from a real system-level virtualization platform, use those.

Disclosure: I work for a VMware and Microsoft reseller, but I also run Parallels Virtuozzo in our lab, where it does an excellent job of OS-Virtualization on Itanium for multiple SQL servers...

Guys - We found the iTanic customer (2, Funny)

Mr Thinly Sliced (73041) | more than 5 years ago | (#28177011)

Parent poster admits to using iTanic - someone tie his hands to the tree while I call the Vet.

We will tranq him and put him in a zoo. This will mean big things for us, big things. Tours on broadway, my picture on the cover of Time....

Re:Different tools for different jobs (1, Informative)

Anonymous Coward | more than 5 years ago | (#28177105)

But here's what I do know. FreeBSD hasn't been a supported OS on ESX Server until vSphere came out less than two weeks ago.

Really? VMware tools for freebsd have been available for years. You can even run them on openbsd (with freebsd compatibility mode enabled).

There's even this slashdot [slashdot.org] story from 2004 about freebsd 4.9 being supported as an esx guest.

Re:Different tools for different jobs (1)

mevets (322601) | more than 5 years ago | (#28177281)

|There's even this slashdot story from 2004 about freebsd 4.9 being supported as an esx guest.

Yes, but that was before bsd was confirmed dead.

OpenVZ & Virtuozzo are my favorite way to go (1)

pyite69 (463042) | more than 5 years ago | (#28176919)

I would expect that the BSD product is similar in design - basically chroot on steroids.

I/O on the free "VMWare Server" sucks (2, Informative)

mrbill (4993) | more than 5 years ago | (#28176937)

The I/O performance on the free "VMWare Server" product *sucks* - because it's running on top of a host OS, and not on the bare metal.
I'm not surprised that FreeBSD Jails had better performance. VMWare Server is great for test environments and such, but I wouldn't ever use it in production.
It's not at all near the same class of product as the VMWare Infrastructure stuff (ESX, ESXi, etc.)

VMWare offers VMWare ESXi as a free download, and I/O performance under it would have been orders of magnitude better.
However, it does have the drawback of requiring a Windows machine (or a Windows VM) to run the VMWare Infrastructure management client.

Re:I/O on the free "VMWare Server" sucks (4, Informative)

zonky (1153039) | more than 5 years ago | (#28177025)

ESXi does also have many limitations around supported hardware. That said, there are some good resources around running ESXi on 'white box' hardware.

http://www.vm-help.com//esx40i/esx40_whitebox_HCL.php [vm-help.com]

Re:I/O on the free "VMWare Server" sucks (1)

snookums (48954) | more than 5 years ago | (#28177223)

There's overhead, but not 10x worse performance unless you're hitting the disk far more in the VM than you were in the native deployment.

The "gotcha" is that VMWare Server will, by default, use file-backed memory for your VMs so that you can get in a situation where the VM is "thrashing", but neither the host nor guest operating system shows any swap activity. The tell-tale sign is that a vmstat on the host OS will show massive numbers of buffered input and output blocks (i.e. disk activity) when you're doing things in the VM which should not require this amount of disk troughput.

A possible solution is:

1. Move the backing file to tmpfs*
2. Increase your mounted tmpfs to cover most of the host machine RAM (I'd say total RAM minus 1 GB).
3. Allocate RAM to your VMs in such a way that you are not over-committed (total of all VMs not more than tmpfs size set at step 2).

*Take a look at the option mainMem.useNamedFile = "FALSE"
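On a Linux host that might look something like this (a sketch; the sizes and paths are made up, and the .vmx option is set per VM):

    # Steps 1 and 2: a tmpfs big enough for all the VMs' memory backing files
    mount -t tmpfs -o size=7g tmpfs /var/vmware
    # then point VMware at it host-wide, e.g. in /etc/vmware/config:
    #   tmpDirectory = "/var/vmware"
    # or skip the named backing file entirely, per VM:
    echo 'mainMem.useNamedFile = "FALSE"' >> /vms/guest1/guest1.vmx
    # watch for the thrashing symptom described above
    vmstat 1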

Virtualization doesn't make sense (5, Interesting)

QuoteMstr (55051) | more than 5 years ago | (#28176945)

Well, in one case it does: when you're trying to run a different operating system simultaneously on the same machine. But in most "enterprise" scenarios, you just want to set up several isolated environments on the same machine, all running the same operating system. In that case, virtualization is absofuckinglutely insane.

Operating systems have been multi-user for a long, long time now. The original use case for Unix involved several users sharing a large box. Embedded in the unix design is 30 years of experience in allowing multiple users to share a machine --- so why throw that away and virtualize the whole operating system anyway?

Hypervisors have become more and more complex, and a plethora of APIs for virtualization-aware guests has appeared. We're reinventing the kernel-userland split, and for no good reason.

Technically, virtualization is insane for a number of reasons:

  • Each guest needs its own kernel, so you need to allocate memory and disk space for all these kernels that are in fact identical
  • TLB flushes kill performance. Recent x86 CPUs address the problem to some degree, but it's still a problem.
  • A guest's filesystem is on a virtual block device, so it's hard to get at it without running some kind of fileserver on the guest
  • Memory management is an absolute clusterfuck. From the point of view of the host, each guest's memory is an opaque blob, and from the point of view of the guest, it has the machine to itself. This mutual myopia renders the usual page-cache algorithms absolutely useless. Each guest blithely performs memory management and caching on its own resulting in severely suboptimal decisions being made.

    In having to set aside memory for each guest, we're returning to the OS9 memory management model. Not only are we reinventing the wheel, but we're reinventing a square one covered in jelly.

FreeBSD's jails make a whole lot of sense. They allow several users to have their own userland while running under the same kernel --- which vastly improves, well, pretty much everything. Linux's containers will eventually provide even better support.

Re:Virtualization doesn't make sense (1)

MichaelSmith (789609) | more than 5 years ago | (#28177109)

If you are going to hire cheap MCSEs to manage all your systems, including the unix ones then it makes sense to be able to put those unix systems inside a little box on your screen with nice borders around it so you can easily see what connects to what.

Saving money on hardware will just cost you kickbacks from the supplier anyway. There is no advantage in that.

Re:Virtualization doesn't make sense (0)

Anonymous Coward | more than 5 years ago | (#28177195)

Saving money on hardware will just cost you kickbacks from the supplier anyway. There is no advantage in that.

If you honestly believe that, I have some $10 off your next $1,000,000 purchase coupons I'd like to send you...

Re:Virtualization doesn't make sense (5, Insightful)

syousef (465911) | more than 5 years ago | (#28177203)

Virtualization DOES make sense, when you're trying to solve the right problem. Do not blame the tool for the incompetence of those using it. It's no good using a screwdriver to shovel dirt and then blaming the screwdriver.

Virtualization is good for many things:
- Low performance apps. Install once, run many copies
- Excellent for multiple test environments where tests are not hardware dependent
- Infrequently used environments, like dev environments, especially where the alternate solution is to provide physical access to multiple machines
- Demos and teaching where multiple operating systems are required
- Running small apps that don't run on your OS of choice infrequently

Virtualization is NOT good for:
- High performance applications
- Performance test environments
- Removing all dependence on physical hardware
- Moving your entire business to

Your specific concerns:
# Each guest needs its own kernel, so you need to allocate memory and disk space for all these kernels that are in fact identical

Actually this depends on your virtualization solution

# TLB flushes kill performance. Recent x86 CPUs address the problem to some degree, but it's still a problem.

So is hard disk access from multiple virtual operating systems contending for the same disk (unless you're going to have one disk per guest OS... and even then, are you going through one controller?). Resource contention is a trade-off. If all your systems are going to be running flat out simultaneously, virtualization is a bad solution.

# A guest's filesystem is on a virtual block device, so it's hard to get at it without running some kind of fileserver on the guest

You can often mount the virtual disks in a HOST OS. No different to needing software to access multiple partitions. As long as the software is available, it's not as big an issue.

# Memory management is an absolute clusterfuck. From the point of view of the host, each guest's memory is an opaque blob, and from the point of view of the guest, it has the machine to itself. This mutual myopia renders the usual page-cache algorithms absolutely useless. Each guest blithely performs memory management and caching on its own resulting in severely suboptimal decisions being made

A lot of operating systems are becoming virtualization aware, and can be scheduled cooperatively to some degree. That doesn't mean your concern isn't valid, but there is hope that the problems will be reduced. However once again if all your virtual environments are running flat out, you're using virtualization for the wrong thing.

Re:Virtualization doesn't make sense (1)

QuoteMstr (55051) | more than 5 years ago | (#28177221)

A lot of operating systems are becoming virtualization aware

Which ends up being as complex as the kernel-userland boundary, so why not just use a kernel-userspace boundary in the first place?

Re:Virtualization doesn't make sense (2, Interesting)

billybob_jcv (967047) | more than 5 years ago | (#28177305)

Sorry, but I think you're missing several important points. In a company with several hundred physical servers and limited human resources, no one has the time to fool around with tuning a kernel and several apps to all run together in the same OS instance. We need to build standard images and deploy them very quickly, and then we need a way to easily manage all of the applications. We also need to be able to very quickly move applications to different HW when they grow beyond their current resources, when we refresh server HW, or when there is a HW failure. High Availability is expensive, and it is just not feasible for many midrange applications that are running on physical boxes. Does all of this lead to less than optimal memory & I/O performance? Sure - but if my choice is hiring 2 more high-priced server engineers, or buying a pile of blades and ESX licenses, I will bet buying more HW & SW will end up being the better overall solution.

It's the Apps more than the OS! (0)

Anonymous Coward | more than 5 years ago | (#28177339)

Disclaimer: TFA was down (slashdotted) so I didn't bother to read it.

You're thinking too much about the OS and not enough about the apps, which are the entire reason why we have computers.

If applications were written well, and played nice with others, and had realistic sizing requirements/guidelines then I could see your point. However a lot of apps frankly are poorly written with this idea of 'I can do anything on my OS that I want to.' That leads to having to silo applications as well as oversized servers (just throw lots of hardware at my inefficient program).

Not to mention that in general there are certain workloads that are more appropriate for certain varying OSes, which can also vary depending on the IT staff supporting said applications. I don't want to have separate hardware for each different OS.

The underlying OS architecture can matter somewhat, but until developers write better apps it's not as big a deal as one might think, would be my 2 cents.

Crappy programming beats virtualization overhead cost (though I'd love it if that weren't the case!).

Re:Virtualization doesn't make sense (1, Interesting)

Anonymous Coward | more than 5 years ago | (#28177439)

"Each guest needs its own kernel, so you need to allocate memory and disk space for all these kernels that are in fact identical"

Wrong - transparent page sharing and linked cloning address both of these "problems," which BTW also exist in a physical world. Keeping the kernels separate is a good thing when dealing with the typical shit applications that get installed in the average datacenter. (Yes, I know TPS and linked clones are only available on one product.)

"TLB flushes kill performance. Recent x86 CPUs address the problem to some degree, but it's still a problem."

Wrong - Hardware virtualization (AMD-V and Intel VT) address this nicely. (And also paravirt to a lesser extent.)

"A guest's filesystem is on a virtual block device, so it's hard to get at it without running some kind of fileserver on the guest"

WTF are you even talking about there? Get at it from where?

"From the point of view of the host, each guest's memory is an opaque blob, and from the point of view of the guest, it has the machine to itself."

Wrong - tools installed in the guest give the host a window into the VM, which the hypervisor can use to make smart decisions about memory allocation.

"FreeBSD's jails make a whole lot of sense."

Maybe for FreeBSD apps, but what percentage of datacenter apps run on FreeBSD? Maybe 10 percent? (Probably far less.)

"Operating systems have been multi-user for a long, long time now. The original use case for Unix involved several users sharing a large box. Embedded in the unix design is 30 years of experience in allowing multiple users to share a machine --- so why throw that away and virtualize the whole operating system anyway?"

Virtualization is not about users sharing the box, it's about applications co-existing on the box, even if those applications require 50 different operating systems. Jails and virtualization solve very different problems. Besides, nobody says that you can't use jails where appropriate and virtualization where appropriate.

Different Operating Systems (-1, Redundant)

Gumbercules!! (1158841) | more than 5 years ago | (#28176957)

In my experience, chroots and jails are pretty crap at running different operating systems on the one box... last time I checked you couldn't run up a Windows 2008 server in a BSD jail.

I think perhaps that might be something that still goes in VMWare's favour?

Anyway I have a (semi) related VMWare performance question and I am going to try my luck asking it here. I have some consumer (i.e. desktop grade) hardware. ESXi is not supported on it (i.e. it can't recognise the SATA controller or the network card). So my choices for virtualisation (I want to run multiple OSs for development purposes) are VMWare Server 2.0 running on a Linux host or Microsoft Hyper-V 2008 (which qualifies as a true bare-metal hypervisor). I cannot find any reasonable comparisons of performance between these two options online. Does anyone know which is likely to get better performance, in terms of Disk I/O, network, CPU, memory, etc?

Re:Different Operating Systems (1, Funny)

Anonymous Coward | more than 5 years ago | (#28177107)

In your scenario, I'd recommend running DBAN [dban.org]

I don't think you did your research. (5, Informative)

BagOBones (574735) | more than 5 years ago | (#28176971)

If you are separating similar workloads like web apps and databases, you are probably better off running them within the same OS and database server and separating them via security, as the poster realized.

However, if you have a variety of services that do not do the same thing, you can really benefit from separating them into virtual machines and having them share common hardware.

Virtualization also gives you some amazing fault tolerance options that are consistent across different OS and services that are much easier to manage than individual OS and service clustering options.

Re:I don't think you did your research. (2, Informative)

BagOBones (574735) | more than 5 years ago | (#28177081)

After looking more closely at the article, it sounds like they were trying to use VMware Server instead of ESX, which explains a lot. If that was the case they were carrying the overhead of the host OS, the VM layer, and the multiple guest OSes. Not something you do with high-performance apps.

Interesting... (1)

certain death (947081) | more than 5 years ago | (#28176973)

I can see how running multiple processes would make Jail better for *BSD, but if you want to run an entirely different OS in a VM, it just isn't there. That said, I don't think VMware is as awesome as Xen, but Xen has trouble running certain OSes that VMware can run without issue (within reason), so I think they all have their strong areas of coverage.

Coral Cache (1)

Qubit (100461) | more than 5 years ago | (#28176975)

The cache doesn't help. (1)

Animats (122034) | more than 5 years ago | (#28177027)

That just gets you a cached version of a page with a link to the actual article. The actual article [playingwithwire.com] is more useful.

Re:The cache doesn't help. (1)

Qubit (100461) | more than 5 years ago | (#28177127)

Hmmm... the coral cache is snappy for me; the original link is not even loading yet.

Here's the start of the article, in any case:

Jun 01. Virtual Failure: YippieMove switches from VMware to FreeBSD Jails

Our email transfer service YippieMove is essentially software as a service. The customer pays us to run some custom software on fast machines with a lot of bandwidth. We initially picked VMware virtualization technology for our back-end deployment because we desired to isolate individual runs, to simplify maintenance and to make scaling dead easy. VMware was ultimately proven to be the wrong choice for these requirements.

Ever since the launch over a year ago we used VMware Server 1 for instantiating the YippieMove back-end software. For that year performance was not a huge concern because there were many other things we were prioritizing on for YippieMove '09. Then, towards the end of development we began doing performance work. We switched from a data storage model best described as "a huge pile of files" to a much cleaner sqlite3 design. The reason for this was technical: the email mover process opened so many files at the same time that we'd hit various limits on simultaneously open file descriptors. While running sqlite over NFS posed its own set of challenges, they were not as insurmountable as juggling hundreds of thousands of files in a single folder. ...

Virtualization is good enough (4, Informative)

Gothmolly (148874) | more than 5 years ago | (#28177075)

I work for $LARGE_US_BANK in the performance and capacity management group, and we constantly see the business side of the house buy servers that end up running at 10-15% utilization. Why? Lots of reasons - the vendor said so, they want "redundancy", they want "failover" and they want "to make sure there's enough". Given the load, if you lose 10-20% overhead due to VM, who cares?

Re:Virtualization is good enough (-1, Offtopic)

Tanman (90298) | more than 5 years ago | (#28177293)

I dunno, seems like $LARGE_US_BANK hasn't been doing too good a job keeping its business in order. Maybe they aren't the best run enterprise? I'm just saying . . . mismanagement is mismanagement.

Re:Virtualization is good enough (1)

Mr. Flibble (12943) | more than 5 years ago | (#28177373)

I have done consulting for a number of $LARGE_BANKS and seen exactly what you describe. I am dealing with one large company now that has 3 servers allocated to run a SINGLE piece of software. They don't need 3 servers to run it, and I suggested that they incorporate these 3 machines into their ESX network, but like you just mentioned, they want "failover" and "enough resources". Never mind that I have the same software running now on a single lesser server that is running ESXi 3.5 with 9 other VMs on the same machine.

Re:Virtualization is good enough (2, Insightful)

Kjella (173770) | more than 5 years ago | (#28177393)

It's CYA in practice. Here's the usual chain of events:

1. Business makes requirements to vendor: We want X capacity/response time/whatever
2. Vendor to business side: Well, what will you do with it?
3. Business makes requirements to vendor: Maybe A, maybe B with maybe N or N^2 users
4. Vendor to business side: That was a lot of maybes. But with $CONFIG you'll be sure

Particularly if the required hardware upgrades aren't part of the negotiations with the vendor, then it's almost a certainty.

Well, duh! (1, Flamebait)

www.sorehands.com (142825) | more than 5 years ago | (#28177157)

If you ask that the OS be put into a virtual machine, would you not expect a big performance hit? It is only common sense to anyone with any basic computer knowledge. You are adding another layer between the hardware and the program; what do you think would happen?

Re:Well, duh! (2, Informative)

Gothmolly (148874) | more than 5 years ago | (#28177171)

A real hypervisor, like the one used by IBM on their p-series frames, doesn't impose this penalty. You're thinking of an emulator.

Re:Well, duh! (0)

Anonymous Coward | more than 5 years ago | (#28177351)

Not necessarily. No expert here, but the means by which you are virtualizing has an effect. The hardware on which you are virtualizing makes a tremendous difference. Visit Sun's site, and pull up everything on VirtualBox. There is a downloadable PDF - "Virtualization for dummies" which I read through last night. Other documents are available, just browse around, and grab them to read. Feel free to search VMWare's site for similar documents, but read.

Yes, almost all VMs today take a performance hit. But I have two VMs running on my desktop right now at the same time. I still have 20% real physical memory available, and the CPU jumps from 60% to 80%, depending on what I'm actually doing.

The machine is working much closer to capacity than it ever does with only the host machine running, and performance is "good" on all three. Not "excellent", but "good". I won't tolerate thrashing to virtual memory, so the trick is to have enough memory.

Adding one more VM would almost certainly overload my system, causing continuous thrashing, and I would simply give up by closing one of them.

On a server, you don't go cheap on memory - you load the thing up. It makes sense to virtualize a machine that sees little traffic, rather than buying all new hardware for it.

With VMware infrastructure, scripts can keep up with memory and CPU utilization, and actually start up an additional physical machine for the purpose of offloading one or more VMs when the load gets heavy.

As CPU's continue to be developed, and as the software evolves, you can expect virtualization to make more and more sense.

I've seen this before (5, Interesting)

bertok (226922) | more than 5 years ago | (#28177375)

I've seen similar hideous slowdowns on ESX before for database workloads, and it's not VMware's fault.

This kind of slowdown is almost always because of badly written chatty applications that use the database one-row-at-a-time, instead of simply executing a query.

I once benchmarked a Microsoft reporting tool on bare metal compared to ESX, and it ran 3x slower on ESX. The fault was that it was reading a 10M row database one row at a time, and performing a table join in the client VB code instead of the server. I tried running the exact same query as a pure T-SQL join, and it was something like 1000x faster - except now the ESX box was only 5% slower instead of 3x slower.

The issue is that ESX has a small overhead for switching between VMs, and also a small overhead for establishing a TCP connection. The throughput is good, but it does add a few hundred microseconds of latency, all up. You get similar latency if your physical servers are in a datacenter environment and are separated by a couple of switches or a firewall. If you can't handle sub-millisecond latencies, it's time to revisit your application architecture!
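The same anti-pattern is easy to demonstrate with the sqlite3 shell, since TFA's software uses SQLite (a sketch; the database and table names are made up):

    # Chatty: one process start and one database open per row
    for id in $(sqlite3 app.db 'SELECT customer_id FROM orders;'); do
        sqlite3 app.db "SELECT name FROM customers WHERE id = $id;"
    done
    # Set-based: a single invocation, with the join done inside the engine
    sqlite3 app.db 'SELECT c.name FROM orders o
                    JOIN customers c ON c.id = o.customer_id;'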

Not saying anything bad about BSD Jails.... (0)

Anonymous Coward | more than 5 years ago | (#28177381)

"we added more database indexes..."

I have no experience with BSD Jails so I can't comment, but...

YES IF YOU ADD NEEDED INDEXES TO A SIGNIFICANTLY SIZED DATABASE A 10X PERFORMANCE INCREASE (OR EVEN FAR GREATER) IS NOT UNHEARD OF
