
Server Consolidation Guide via Virtualization

ScuttleMonkey posted more than 8 years ago | from the so-many-ways-to-play dept.


sunshineluv7 writes to tell us TechTarget is running a good overview of 'why, when, and how to use virtualization technologies to consolidate server workloads.' The summary provides links to several podcasts and other articles relating real world experience with how to utilize virtualization to best meet your needs. From the summary: "Advances in 64-bit computing are just one reason that IT managers are taking a hard look at virtualization technologies outside the confines of the traditional data center, says Jan Stafford, senior editor of"


I agree (0)

celery stalk (617764) | more than 8 years ago | (#15876897)

They make sense.

Re:I agree (0)

Anonymous Coward | more than 8 years ago | (#15876916)

Me too!

PS: I think I was reading your search history yesterday.

Re:I agree (0)

Anonymous Coward | more than 8 years ago | (#15878794)

Some crazy shit, ain't it?

I agree-CyberTorch. (0, Insightful)

Anonymous Coward | more than 8 years ago | (#15876926)

Except when your virtual server gets hit with a slashdotting and goes up in smoke, taking out even more domains than usual.

Re:I agree-CyberTorch. (3, Informative)

Anonymous Coward | more than 8 years ago | (#15877189)

If you are suggesting that a critically-stressed virtual machine would somehow detrimentally affect a properly-configured host machine, or other properly-configured VMs on the same host, to any significant degree, you're demonstrating a lack of knowledge about the fundamental principles and concepts of virtualization.

A VM only gets ahold of the resources you give it. If one VM with 512M RAM eats every last bit of memory in a blaze of glory, that doesn't affect dedicated resources elsewhere. Similarly, a properly-configured host will not allow any VM to grab 100% of the host CPU either.

Re:I agree-CyberTorch. (1)

sdpinpdx (66786) | more than 8 years ago | (#15884011)

But things like IO bandwidth can't (AFAIK) be dedicated to a VM. A thrashing VM will drive your IO load up, affecting everything else using that storage device directly, and everything on the host indirectly via the OS disk cache. You could isolate the storage queue delays with a separate storage device for each VM, but you're consolidating to avoid that kind of thing.

Re:I agree-CyberTorch. (1)

absinthminded64 (883630) | more than 8 years ago | (#15884508)

Agreed! The parent to which you replied has a grasp of virtualization concepts on paper, but has apparently never had VMs topple other VMs or the host, or used a host that wasn't spec'd by NASA with excessive IO. Changing the IO elevator to deadline seems to really help when VMs just absolutely stop due to IO overhead.

A "good" overview? (4, Insightful)

lucabrasi999 (585141) | more than 8 years ago | (#15876958)

That submission is what constitutes a "good overview" these days? Maybe it is, if you are the person trying to drive traffic to sites....

64bit? (1, Insightful)

rf600r (236081) | more than 8 years ago | (#15876971)

What does 64bit have to do with driving virtualization? Are people really that ignorant about 64 bit processors and what they mean and don't mean? Seriously, how are these two technologies correlated?

Re:64bit? (2, Insightful)

ThePiMan2003 (676665) | more than 8 years ago | (#15876987)

Finally enough addressable RAM that you can have more than one virtual server with a decent amount of ram.

Re:64bit? (2, Interesting)

afidel (530433) | more than 8 years ago | (#15877067)

With PAE you could already give each virtual server 4GB to play with, up to 64GB total with Windows 2003 Enterprise or 128GB with Datacenter. Linux 2.6 allows up to 64GB through the HIGHMEM_64G flag, all on standard x86 of P2 or later vintage (PPro had rudimentary PAE, but implementing it was very hackish).

Re:64bit? (2, Informative)

demon (1039) | more than 8 years ago | (#15878444)

2k3 Datacenter can't support 128 GB on i386; it's not possible, as PAE only adds an extra 4 address bits (going from 32 to 36 bits of physical address space). Also, there are still user process limitations that make it impossible for apps like, say, database servers to address more than 3 GB (not 4 GB; it's a limitation due to kernel address space mappings in a process). x86_64 wipes that out easily, so for healthy sized virtualization environments, it's definitely the preferred environment (and you can still run your i386 apps transparently).

Re:64bit? (1)

afidel (530433) | more than 8 years ago | (#15878547)

That's not what this [] table from MS says, along with several other references to PAE on Microsoft's site.

Re:64bit? (1)

tabrisnet (722816) | more than 8 years ago | (#15878721)

Feel free to do the math. The short answer, however: ((2**36)/(2**30)) = 2**6 = 64. Either that table is wrong, or they're using some other technique.

Also feel free to check that PSE/PAE is 36bits.
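The arithmetic in the parent can be spelled out; a minimal sketch of the address-space math (not tied to any particular OS's documented limits):

```python
# How much physical memory a given address width can reach.
GIB = 2 ** 30

def max_physical_gib(addr_bits: int) -> int:
    """Maximum physical memory, in GiB, addressable with addr_bits address lines."""
    return (2 ** addr_bits) // GIB

print(max_physical_gib(32))  # classic x86: 4 GiB
print(max_physical_gib(36))  # PAE (32 + 4 bits): 64 GiB -- not 128
```

So with 36 physical address bits, 128 GB on i386 is indeed out of reach, which is the poster's point.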

DR (4, Insightful)

afidel (530433) | more than 8 years ago | (#15876995)

Disaster recovery and test environments are the two biggest reasons I can see for using virtualization. Having the ability to pick up your system and plop it on any old box makes things so much easier. In theory HALs should have made this possible years ago, but they never really lived up to their promise. As to virtualization making management easier: bollocks. Some of the tools bundled with good virtualization products like ESX might make management somewhat easier, but you still need additional good tools to make management bearable for large numbers of servers/virtual servers.

Re:DR (1)

sheldon (2322) | more than 8 years ago | (#15877371)

We use VMWare heavily at my company.

Disaster recovery, ability to move virtuals to new hardware... these are great positives.

The negatives are performance, performance and performance.

Backup of VMWare Server images (1)

tmasssey (546878) | more than 8 years ago | (#15877920)

If you use VMWare heavily, I'm sure you're running ESX, but I'll ask you anyway:

Can you (or anyone else) tell me the recommended way to back up your virtual machines with VMware Server? All of the documentation I've found talks about ESX Server. They give you 2 choices: 1) Run backup software *inside* of the VM and back it up like any other machine, or 2) Back up the VM files directly. In the case of ESX, you use the Perl API to set up a redo log, but AFAIK that's not possible with Server. Without that, how would I be sure that the image is intact and not in an unusable state, especially when backing up multi-gigabyte files will take a while, during which the VM might be making changes?

I know VMWare needs a carrot to get people to spend the big bucks, but it seems to me there should be *some* way of being able to back up VM images under Server without worrying about them changing while you're backing them up... Does anyone have any suggestions?

Re:Backup of VMWare Server images (1)

silas_moeckel (234313) | more than 8 years ago | (#15877998)

I think you need to look at ESX version 3; they have gotten snapshots working.

Re:Backup of VMWare Server images (4, Informative)

tadheckaman (578425) | more than 8 years ago | (#15878285)

Place the server in undo/snapshot mode, and then just back up the vmdk. When it's placed into undo/snapshot mode, the vmdk becomes read-only, with changes written to a separate file. Then all you need to do is copy that vmdk and, when done, commit the undo/snapshot. When restoring the backup, the system is brought online as if it had lost power. On ESX it's a snap to do, and Vizioncore makes software that does this for you (ESXRanger); however, I leave VMware Server as an exercise for the reader. As I don't have any need for this, I haven't looked into actually scripting it in VMware Server. But the idea is the same, and I bet that it's possible.
Doing a quick search on the forums, it sounds like vmware-cmd is the tool to use, or write a script to talk to VMware's SDK.
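The snapshot-then-copy flow described above could be scripted roughly as follows. This is a hypothetical sketch that only *builds* the command lines rather than running them; the vmware-cmd subcommand names are assumptions and vary by product and version, so check your own documentation before using anything like this.

```python
from typing import List

def backup_commands(vmx_path: str, dest_dir: str) -> List[List[str]]:
    """Build (but do not run) the commands for a snapshot-based backup:
    1. take a snapshot so the base vmdk becomes read-only,
    2. copy the now-quiescent vmdk files,
    3. commit (remove) the snapshot afterwards.
    The vmware-cmd subcommands here are assumed, not verified."""
    vm_dir = vmx_path.rsplit("/", 1)[0] + "/"
    return [
        ["vmware-cmd", vmx_path, "createsnapshot", "backup"],   # assumed subcommand
        ["rsync", "-a", "--include=*.vmdk", "--exclude=*", vm_dir, dest_dir],
        ["vmware-cmd", vmx_path, "removesnapshots"],            # assumed subcommand
    ]
```

Feeding each list to something like subprocess.run, with error handling so a failed copy never skips the final commit, would be the next step.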

Re:Backup of VMWare Server images (1)

sheldon (2322) | more than 8 years ago | (#15886772)

Well... at work we use VMWare ESX, but our disk is on Hitachi SAN... and so backups are performed by doing shadow copies at the disk level.

At home I use Virtual Server, and I've found running backups inside works.

The other method I've used is to pause the VM and then copy the files to a different location. On my little home machine with 4 virtuals and about 80 gigs of data this takes around 4 hours to complete. I have a script that does it for me, so each machine is maybe only offline for 30-45 minutes.

The point is, if you want 24 hour availability you pay for it and do what's necessary. If not, then you can probably survive a few hours of blackout periods to do backups.

what sort of virtualization? (1, Interesting)

Anonymous Coward | more than 8 years ago | (#15877003)

When they talk "virtualization", do they mean running virtual domain service or virtual ips, allowing one computer to handle multiple web services, or do they mean running multiple virtual machines on one box?

I'm not too sanguine about the virtual machine approach. I suppose the only reason they'd be doing it that way is so that one server could seem like it's being multiple computers, so later the tasks can be split up if loads become high, or so that it looks absolutely identical to a two-machine setup that's being replaced. But while running virtual machines on x86-descended hardware has become much easier compared with only a few years ago, it still exacts a price in resources and performance, and may complicate administration. I'd strongly recommend using other techniques to load two tasks onto one machine.

For instance, let's say you want to run two SQL databases on one machine. The easiest solution is to partition the table namespace and run a single server. If the namespaces collide and there's no easy fix for it, you can run them on different ports and let iptables direct communications to one or the other based on ip.
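The ports-plus-iptables idea above can be sketched as rule generation; the subnets and port numbers below are made up for illustration, and you should verify the REDIRECT target options against your iptables version before trusting the exact flags.

```python
def redirect_rule(client_net: str, listen_port: int, target_port: int) -> str:
    """Build an iptables NAT rule string that steers clients from
    client_net, arriving on the standard port, to a per-instance port."""
    return (
        "iptables -t nat -A PREROUTING "
        f"-s {client_net} -p tcp --dport {listen_port} "
        f"-j REDIRECT --to-ports {target_port}"
    )

# Two SQL server instances on one box: clients from each subnet land on
# their own instance's port (addresses and ports are illustrative only).
print(redirect_rule("10.0.1.0/24", 3306, 3307))
print(redirect_rule("10.0.2.0/24", 3306, 3308))
```

Each database instance then listens on its private port while clients keep connecting to the standard one.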

I suppose there's one more reason for virtualization, and that's if you need to run multiple operating systems; but still IMO for future administration you're best off standardizing on one OS rather than trying to run two or more on the same box.

Re:what sort of virtualization? (1)

creimer (824291) | more than 8 years ago | (#15877214)

If I understand this correctly, instead of running four or five big applications on one physical computer, you give each application a virtual machine to run in on the same server. If one goes barf, the others are not affected. One article said a company went from three server farms to one server farm by running 225 VMs on 15 computers (or 15 VMs per computer).

Re:what sort of virtualization? (2, Informative)

Asgard (60200) | more than 8 years ago | (#15877758)

It's not so much one VM going bad, but that your application is totally self-contained on that VM, so you can move it (live, as with VMware ESX) to another hardware device with no worries about changing DNS, IPs, odd dependencies in /usr/lib, etc.

Re:what sort of virtualization? (1)

dbIII (701233) | more than 8 years ago | (#15878006)

I suppose the only reason they'd be doing it that way is so that one server could seem like it's being multiple computers, so later the tasks can be split up if loads become high, or so that it looks absolutely identical to a two-machine setup that's being replaced
Bingo - the room full of NT machines, each with a separate task to replace one Sun machine, can be replaced again by a single machine of your choice. Having a single machine to do nothing but DHCP and DNS for only sixty workstations always seemed like a waste - especially when you still had to reboot it once a week to cope with memory leaks that would crash it sometime between day 10 and 30. Now you can run that as a virtual machine on whatever environment you like and the memory leak will be contained. It can still be an NT machine - but one where the resources are managed to keep it all going - e.g. allowing domain logins at normal speed while a CPU intensive email scanning task is going on in another virtual machine.

People will argue about redundancy - but virtual machines make that a lot easier too on a second bit of hardware.

Re:what sort of virtualization? (1)

TClevenger (252206) | more than 8 years ago | (#15882734)

Having a single machine to do nothing but DHCP and DNS for only sixty workstations always seemed like a waste - especially when you still had to reboot it once a week to cope with memory leaks that would crash it sometime between day 10 and 30. Now you can run that as a virtual machine on whatever environment you like and the memory leak will be contained.

More importantly, you can run two such instances, so one is always running while you're rebooting the other one.

Virtual Machines - and why you're wrong :-) (2, Informative)

billstewart (78916) | more than 8 years ago | (#15878094)

RTFA - it's about virtual machines, not virtual-domain web servers (which by now are old technology and an obvious win.) Yes, virtualization does take some extra resources, and you need a disciplined approach to administration to use them successfully in a production environment, but production environments already needed disciplined administration and enough resources - the assertion of the virtual-machine people is that it's actually easier than maintaining multiple boxes, especially given the extremely fast CPUs and cheap RAM available these days.

In a Unix environment, you can argue about whether the basic multi-user permissions environment and extra tricks like jails are enough to provide security in a multi-user multi-application setting, or whether it's helpful to use virtual machines as well. In a Windows environment, there's really not much question, even with XP-Pro and server versions - there's just not enough help from the OS. But even in a Unix environment, there are applications that want to use specific directories, or specific TCP and UDP port numbers, and virtualization lets you run multiple instances at the same time managed by different people. It also provides you some Least-Privilege-Principle separation of powers between your administrators - you can have one person who needs root to manage the firewall, but doesn't need to muck with the database, and somebody else who needs to control the database but doesn't need to touch the web servers.

For some applications, like virtual colo, virtualization environments really do rock, whether they're VMWare, UML, Xen, or whatever. I've seen people renting out virtual machines for ~$20/month or less, when physical colo costs would be $100, and it works fine (if there's enough cheap RAM) because usually you don't really need a big CPU full-time just to run an email server and web server or whatever.

Running multiple OS's at once is mainly useful in a desktop environment, or for specialized tasks like running an OpenBSD firewall, a Windows domain administration system, and a Linux general-purpose environment including web server and database all on the same box. I agree that it's usually cleaner to run everything in a single environment, even if it's multiple VMs - but there are times that the tools you want to use won't all run on the same OS.
