
Ask Slashdot: What Type of Asset Would You Not Virtualize?

samzenpus posted more than 2 years ago | from the keeping-it-real dept.


An anonymous reader writes "With IT and Data Center consolidation seemingly happening everywhere, our small shop is about to receive a corporate mandate to follow suit, and preferably accomplish this via virtualization. I've had success with virtualizing low-load web servers and other assets, but the larger project does intimidate me a little. So I'm wondering: Are there server types, applications and/or assets that I should be hesitant to virtualize today? Are there drawbacks that get glossed over in the rush to consolidate all assets?"



Busy databases (4, Insightful)

DataDiddler (1994180) | more than 2 years ago | (#40174625)

Shared disk does not make I/O happy.

Re:Busy databases (4, Funny)

houstonbofh (602064) | more than 2 years ago | (#40174679)

Also, don't put the VM "control station" on the VM. You would think I wouldn't have to say this, but you would be wrong.

Re:Busy databases (2, Informative)

Anonymous Coward | more than 2 years ago | (#40174773)

If you refer to the VMware "vCenter" VM, you are wrong.
Virtualizing it gives you many advantages, the same ones you get from virtualizing any server. Decoupling it from the hardware, redundancy, etc.

Why would you NOT virtualize it?

Just make sure you disable features that would move it around automatically so you can actually know on what host it's supposed to be running.

Re:Busy databases (4, Funny)

spazdor (902907) | more than 2 years ago | (#40174853)

'cause if you knock it offline by accident, your easiest tool with which to bring it back online is gone?

Kind of like how it's a bad idea to mess with a host's eth0 settings if you're currently logged in via ssh through eth0.

Re:Busy databases (5, Informative)

NFN_NLN (633283) | more than 2 years ago | (#40174975)

'cause if you knock it offline by accident, your easiest tool with which to bring it back online is gone?

Kind of like how it's a bad idea to mess with a host's eth0 settings if you're currently logged in via ssh through eth0.

In Oracle VM Server for x86 and VMWare vSphere (and probably most other virtualization platforms) the VMs run on hosts independent of the management platform (i.e., vCenter for vSphere).

vCenter is not considered critical for the operation of VMs. If vCenter dies your VMs will continue to run without interruption. You simply lose the ability to use advanced features such as vMotion, Storage vMotion, DRS, HA and vDS. However, you can still log into an ESXi host and start up another instance of vCenter. This is no different if the physical machine hosting vCenter died.

As far as I know, the upcoming highly available version of VMWare vCenter (Heartbeat), which runs two instances of vCenter together, is ONLY available in VM form; I don't know of a physical deployment for vCenter Heartbeat (but I could be wrong).

Re:Busy databases (0)

Anonymous Coward | more than 2 years ago | (#40174993)

Yeah sure connecting to a host by hand and rebooting a VM is super complicated.

Re:Busy databases (3, Informative)

Burning1 (204959) | more than 2 years ago | (#40175053)

This isn't really a problem. First, if you have a reasonably sized infrastructure, it makes sense to build a redundant vCenter instance... And IIRC, it may be clustered. Second, if you kill your vCenter instance, you can still connect directly to your ESXi hosts using the vSphere client. You'll still retain the ability to manage network connections, disks, access the console, etc.

Re:Busy databases (3, Insightful)

Electricity Likes Me (1098643) | more than 2 years ago | (#40175105)

You usually know you shouldn't mess with eth0 in that situation...but you do it anyway.

Re:Busy databases (1)

Anonymous Coward | more than 2 years ago | (#40174681)

You can create virtual machines with dedicated physical storage. That still offers the advantage of being able to just move the storage (as opposed to the VHD file) and spin the machine back up on a new virtual server if the host's hardware fails.

Re:Busy databases (2, Informative)

Anonymous Coward | more than 2 years ago | (#40174705)

In enterprise, aren't most busy DB servers using storage on the SAN, which would be exactly the same place where it would be if the server was virtualized?

Re:Busy databases (2)

spazdor (902907) | more than 2 years ago | (#40174837)

Very likely, and this does mitigate things.

If the physical host has a lot of VMs using a lot of LUNs on the SAN, then there may still be contention for bandwidth on the Fibre Channel card. Luckily this does not come with the massive overhead that is associated with contention for bandwidth on a local disk drive, but it's still a potential bottleneck to be wary of.

Re:Busy databases (3, Insightful)

The1stImmortal (1990110) | more than 2 years ago | (#40175035)

In which case the answer is either more fibre ports or changed storage design, no?

Re:Busy databases (2, Informative)

NFN_NLN (633283) | more than 2 years ago | (#40174887)

In enterprise, aren't most busy DB servers using storage on the SAN, which would be exactly the same place where it would be if the server was virtualized?

In an enterprise environment all VMs (of any type) should be coming from external storage, either SAN (FC, iSCSI) or NAS (NFS). Storage, network and hosts are usually separated into layers with full redundancy. No single point of failure should exist. Even if a host needs to be taken down for maintenance or upgrades, the VM is migrated live to another host without interruption. Because the data is external it is accessible to all hosts, and the hosts can be treated as commodity items and swapped in/out.

Re:Busy databases (2)

NFN_NLN (633283) | more than 2 years ago | (#40174785)

Shared disk does not make I/O happy.

This was addressed during the VMWorld 2011 conference. VMWare is only limited by the amount of hardware you throw at it just like any other x86 platform: Achieving a Million I/O Operations per Second from a Single VMware vSphere 5.0 Host
http://www.vmware.com/resources/techresources/10211 [vmware.com]

You can go with IBM/Power or Oracle/SPARC if you have exceptionally large systems, but if you're coming from x86 applications there are few CPU, memory, or I/O limitations that can't be resolved. The only limitations for x86 virtualization are proprietary cards, clock skew and overhead (not an issue for 95% of cases).

In exchange you get hardware abstraction and portability, availability, the ability to scale much more easily, consolidation, ease of deployment and management... etc etc.

If you aren't virtualized you are behind the times; that applies to RISC as well. Just make sure you architect a proper solution to handle the load you intend to run! And when you add more load, expand the infrastructure BEFORE you start having performance issues.

Re:Busy databases (1)

dave562 (969951) | more than 2 years ago | (#40174829)

On the other hand, you end up spending a lot of money for the perceived benefits of virtualization (hardware abstraction, portability, etc).

We virtualized SQL Server 2008 R2 and ended up going back to Microsoft clustering. With clustering we still get HA but do not have to pay for VMware licenses. On VMware we were dedicating entire hosts to a single guest due to the high RAM utilization, and we were also taking the virtualization hit at the resource level by abstracting out disk and CPU access.

Re:Busy databases (1)

Dynedain (141758) | more than 2 years ago | (#40175071)

That's a value proposition. Which costs more, the up front costs for virtualization, or the loss of business during downtime, and cost of emergency hardware migrations?

Clustering is a great solution, but most things that can be solved with clustering are probably not solved by virtualization. They're two different solutions for different kinds of reliability risks.

Re:Busy databases (2)

vux984 (928602) | more than 2 years ago | (#40174945)

The only limitations for x86 virtualization are proprietary cards...

And license dongles. Some work. Some don't. Worst is when they work "sometimes".

VMWare is only limited by the amount of hardware you throw at it just like any other x86 platform...

Consolidating multiple low load servers ... say 9 physical low load servers onto 3 virtual hosts, there's tremendous value there. If one of the hosts goes down, you can even run 4/5 on the remaining two while you fix it... the 3 virtual hosts are cheaper than the 9 original servers, etc... win-win.

But high-load servers? You can get the advantages of virtualization out of them... but you aren't saving money at the same time. If you have 3 servers that each run dual Xeons at high utilization and you want to consolidate them onto one virtual host, then that host needs 6 Xeons... and will probably cost more than the original 3 dual-Xeon servers combined...
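
To put rough numbers on that trade-off, a back-of-envelope sketch in Python (all prices are made up for illustration; plug in your own quotes):

    # Consolidation economics sketch -- hypothetical prices, not vendor quotes.
    def consolidation_delta(n_old, old_price, n_hosts, host_price):
        """Hardware cost of n_hosts VM hosts minus n_old physical boxes.
        Negative means consolidation saves money on hardware alone."""
        return n_hosts * host_price - n_old * old_price

    # Nine low-load servers onto three modest hosts: a clear win.
    print(consolidation_delta(n_old=9, old_price=3000, n_hosts=3, host_price=6000))   # -9000

    # Three busy dual-Xeon boxes onto one big six-socket host: often a loss.
    print(consolidation_delta(n_old=3, old_price=7000, n_hosts=1, host_price=25000))  # 4000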

Re:Busy databases (1)

aaronb1138 (2035478) | more than 2 years ago | (#40175093)

License dongle issues should be punted back onto the vendor of the software in question (repeatedly). It may not work the first time, but enough admins and their bosses raising hell with support and sales would hopefully push them to make their garbage compatible with ESXi, Xen, etcetera. USB pass-through compatibility is trivial and works for every consumer device using USB 1.1 and 2.0 standards. If they are giving you parallel or serial port dongles, then there are bigger problems with how the vendor does business.

Unfortunately, if you are at that level of software, you probably aren't in any position to fire the vendor and ditch their garbage. Best solution is the squeaky hinge gets the oil.

Re:Busy databases (1)

The1stImmortal (1990110) | more than 2 years ago | (#40175109)

The only limitations for x86 virtualization are proprietary cards, clock skew and overhead (not an issue for 95% of cases).

And the proprietary cards thing can often be worked around with PCI passthrough technologies these days.

Re:Busy databases (4, Interesting)

Maskirovka (255712) | more than 2 years ago | (#40174845)

Shared disk does not make I/O happy.

PCIe SSDs are advertised to deliver 100,000 to 500,000 IOPS. Has anyone experimented with PCI Express based SSD solutions in their VM hosts to keep high-I/O VMs like VDI and SQL from swamping their SAN infrastructure?

http://www.oczenterprise.com/interfaces/pci-express.html [oczenterprise.com]

Re:Busy databases (1)

thekel (909848) | more than 2 years ago | (#40174941)

You need to differentiate between virtualization and multi-tenancy. Virtualizing databases works fine. You can slice a two socket system in half and bind a VM to each socket and you have doubled the tenancy without introducing multi-tenancy (sort of). If you give them separate disks you can maintain some isolation there as well. There is a nice win there because you are eliminating a lot of cross socket memory traffic. Scale up is pretty poor with many products. For a scale out database you can use one socket to host the DB and the other socket to host multiple lesser VMs. This will also let you get the node count of the DB up which is a good thing in many cases because it means less data per node. I think the multi-tenancy stuff where resources are shared is pure crap.

Re:Busy databases (5, Informative)

batkiwi (137781) | more than 2 years ago | (#40174965)

Virtualisation != shared disk IO.

If you're serious about virtualisation it's backed by a SAN anyways, which will get you many more IOPS than hitting a local disk ever would.

We virtualise almost everything now without issue by setting 0 contention. Our VM hosts are 40 core (4 socket) machines with 256GB ram. Need an 8 core VM with 64GB ram to run SQL Server? Fine.

We save BUCKETS on power, hardware, and licensing (Windows Server 2008 R2 datacenter licensing is per socket of the physical host) by virtualising.
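
The per-socket licensing point is what makes that arithmetic work. A minimal sketch (prices are invented placeholders; the Datacenter edition licensed the physical sockets and allowed unlimited Windows guests on that host):

    # Windows guest licensing sketch -- hypothetical prices, for shape only.
    STANDARD_PER_VM = 900          # assume one Standard license per Windows guest
    DATACENTER_PER_SOCKET = 4500   # per physical socket, unlimited guests

    def cheapest_licensing(guests, sockets):
        per_vm = guests * STANDARD_PER_VM
        per_socket = sockets * DATACENTER_PER_SOCKET
        return ("Datacenter", per_socket) if per_socket < per_vm else ("Standard", per_vm)

    # A 4-socket host packed with 40 Windows VMs:
    print(cheapest_licensing(guests=40, sockets=4))   # ('Datacenter', 18000) vs 36000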


First choice (5, Funny)

PPH (736903) | more than 2 years ago | (#40174633)

Virtualize management.

Re:First choice (0)

Anonymous Coward | more than 2 years ago | (#40174967)

Already done! There's no real management here at all.

Re:First choice (0)

Anonymous Coward | more than 2 years ago | (#40175013)

Read the title as First Choice from the "PFS: First Choice". Most folks here likely do not even know what that is.

Re:First choice (5, Funny)

aaronb1138 (2035478) | more than 2 years ago | (#40175117)

Already done. Most companies have hundreds of managers sharing the processing, memory, and storage facilities of one brain. Too bad the power and wasted space savings don't scale.

Not virtualize (5, Funny)

amicusNYCL (1538833) | more than 2 years ago | (#40174645)

Assets not to virtualize:

1) Women
2) Beer
3) Profit

fb, #3? (1)

jabberw0k (62554) | more than 2 years ago | (#40174721)

Do you sense a growing suspicion that Facebook has been counting virtual revenues...?

Re:Not virtualize (5, Funny)

theshowmecanuck (703852) | more than 2 years ago | (#40174833)

This is Slashdot. Your list is actually a priority list of things to virtualize.

Re:Not virtualize (1)

countach (534280) | more than 2 years ago | (#40175051)

Also, iPads, iPhones, iPods, TVs.

cognos (2)

alen (225700) | more than 2 years ago | (#40174655)

IBM even says to give it its own physical machine if you're going to virtualize it.

Re:cognos (2)

codepunk (167897) | more than 2 years ago | (#40174793)

That is because each box is running a java container requiring a terabyte of ram to render some html output.

Re:cognos (0)

Anonymous Coward | more than 2 years ago | (#40174987)

They do it badly. I run 5 WebSphere 7 and 1 WebSphere 8 application servers on one relocatable virtual machine equipped with 4 GB of RAM and 4 virtual CPUs (yeah, you read that right: more server instances than CPUs) as 6 Linux users (to get around licensing requirements), and the machine never hits swap... Tune that garbage collector using the built-in profiler and read the goddamned 1800-page manuals two times.

survival assets (0)

Anonymous Coward | more than 2 years ago | (#40174659)

Food, cigarettes, and ammunition

Maybe the Cafeteria? (1)

icebike (68054) | more than 2 years ago | (#40174661)

The company cafeteria isn't all that great, but jeez nothing is less satisfying than a virtual burger and virtual fries.

Re:Maybe the Cafeteria? (0)

Anonymous Coward | more than 2 years ago | (#40174749)

virtualize the p.o.s. system!

Beh (1)

Hognoxious (631665) | more than 2 years ago | (#40174669)

Gold, houses, aircraft carriers, 17th century Dutch paintings.

To be serious for a moment... (3, Insightful)

danaris (525051) | more than 2 years ago | (#40174671)

How about backups?

Consolidating and virtualizing your backup servers sounds like a recipe for trouble to me.

Dan Aris

Re:To be serious for a moment... (1)

dave562 (969951) | more than 2 years ago | (#40174841)

We run Netbackup and we virtualized the master node, but the media servers are still physical.

Re:To be serious for a moment... (0)

Anonymous Coward | more than 2 years ago | (#40175049)

Media servers have to be bare metal. Netbackup and some others hate that virtualized layer.

Maybe in a few years but not today.

Re:To be serious for a moment... (1)

HockeyPuck (141947) | more than 2 years ago | (#40175047)

I run a large backup environment with Tivoli on IBM pSeries. We carve the pSeries up into multiple LPARs which write to a physical library which is logically carved up. I have to separate the backup environments due to regulatory issues and virtualizing both the backup servers and the library makes things much easier for me. I can set up between 4-8 LPARs per virtual library, and given the horsepower of the pSeries that I'm using, I don't have a ton of physical servers to manage.

Re:To be serious for a moment... (0)

Anonymous Coward | more than 2 years ago | (#40175171)

I run 5-10 VMs per machine.
1x Areca 1880 SATA/SAS controller (WAY faster than Intel's built-in garbage; 4GB cache if you can afford it)
4x 1TB disks in RAID 10 to keep the OS and data
1x 1TB disk as hot spare, in case it's not your lucky day
1x 3TB passthrough disk with folders inside, one named for each server, shared over the network.
All VMs back up to their own folder on the host machine, so there's no real network usage.

I've got 5 SuperMicro Core i7 servers running rock solid this way.

Almost everything (0)

Anonymous Coward | more than 2 years ago | (#40174685)

My organization has virtualized almost every workload you can think of including Exchange, our ERP system, our big HR system, and other huge enterprise applications on top of VMware vSphere.

The only things we do not virtualize at this point are our really big Oracle databases, but those are effectively virtualized on the Oracle RAC platform anyway.

I tend to hear people in smaller shops, who don't really have that many users, being more fearful of virtualization, which is the opposite of what you'd expect.

Re:Almost everything (1)

Anonymous Coward | more than 2 years ago | (#40174991)

The small shops are probably more fearful of virtualization, because they're small. A bigger company has a better chance of recovering from a screw up than a small one during the transition. I might be completely wrong, but that's my take on it.

--wmbetts

What Type of Asset Would You Not Virtualize? (0)

Anonymous Coward | more than 2 years ago | (#40174689)

Yo Mama.

Sure (4, Funny)

FranTaylor (164577) | more than 2 years ago | (#40174693)

I would not virtualize the servers that are running the virtual machines.

Re:Sure (1)

Anonymous Coward | more than 2 years ago | (#40174949)

well, maybe not in production. In test, dev and training those are fair game too.

Re:Sure (3, Interesting)

The1stImmortal (1990110) | more than 2 years ago | (#40174959)

VMware ESXi is actually a supported guest for VMware Workstation...

Whilst that may sound crazy, it makes system design, testing, and generally skilling up a lot easier.

Why? (0)

oldhack (1037484) | more than 2 years ago | (#40174697)

Why you wanna virtualize an asshat?

Re:Why? (0)

Anonymous Coward | more than 2 years ago | (#40174875)

Asset, not asshat.

Databases and Heavy memory java apps (2, Informative)

codepunk (167897) | more than 2 years ago | (#40174753)

Well, yes, databases would make a poor virtualization target. Also your heavy-memory-usage Java app, like the company app server using a terabyte of RAM to display the department wiki.

Re:Databases and Heavy memory java apps (0)

Anonymous Coward | more than 2 years ago | (#40174809)

Databases. Why?

I've seen this countless times. Yet I've seen countless database servers running perfectly virtualized. Of course if your server needs 100% of the power of the host, it might not make sense, but otherwise..

Re:Databases and Heavy memory java apps (1)

codepunk (167897) | more than 2 years ago | (#40174865)

Yes, your little MySQL company-homepage WordPress site is just fine to run virtualized. I am talking about enterprise databases.

Re:Databases and Heavy memory java apps (1)

wmbetts (1306001) | more than 2 years ago | (#40175097)

I've always had the same belief based on experience that databases shouldn't be virtualized. However, recently I was thinking about experimenting with adding in drives dedicated to the database and only the database. I'm not a hardware guy and it's been years since I was a real sys admin so my thinking may be completely off. Would that be feasible? I'm not talking about a little database for a wordpress blog either. I'm talking about a database with hundreds of millions of rows in several different tables.

Re:Databases and Heavy memory java apps (1)

silas_moeckel (234313) | more than 2 years ago | (#40175175)

You take a pretty good hit on I/O in a VM, and your average database server will use as much RAM as you can throw at it. This means it's rarely a good idea to VM production DB servers.

Re:Databases and Heavy memory java apps (1)

FranTaylor (164577) | more than 2 years ago | (#40174877)

It's not the database, it's the application

a database with strange performance variations can draw out race conditions in poorly written applications

If the virtualized database takes too long to respond and an internal error happens, that can create quite a mess.

Much better to have a database server with guaranteed response time.

Plenty (1)

rwven (663186) | more than 2 years ago | (#40174757)

Seems like a front-end serving statically cached content is a great match for virtualization. DB servers and search servers (Solr, etc.) aren't a good match IMHO, unless you have a very well implemented sharding/horizontal-scaling solution. If you pre-generate your content, we've typically used hard boxes for those as well, but you may benefit from virtualizing those if you want to easily scale horizontally (assuming you can absorb the hypervisor overhead).

render farms (0)

Anonymous Coward | more than 2 years ago | (#40174759)

I wouldn't virtualize compute clusters: medical, bio, etc....I use render farms for images and VM's don't really help with performance.

Management is another topic, but that can be accomplished without VM's.

-nick

Re:render farms (0)

Anonymous Coward | more than 2 years ago | (#40174805)

Funny you say that, when folks such as DreamWorks render movies using grids of both physical and virtual machines.

Services that vCenter needs (1)

Anonymous Coward | more than 2 years ago | (#40174765)

Having DNS virtualized can make it hard to start vCenter. Also, you probably don't want to virtualize your vCenter SQL backend.

Anything with strict timing constraints (5, Insightful)

Anonymous Coward | more than 2 years ago | (#40174771)

Don't virtualize anything requiring tight scheduling or a reliable clock, such as a software PBX system performing transcoding or conferencing.

http://www.vmware.com/files/pdf/Timekeeping-In-VirtualMachines.pdf

Security (1)

machine321 (458769) | more than 2 years ago | (#40174781)

Pretty much anything can be successfully virtualized if you throw enough hardware at the host. Just keep in mind that these machines are all actually running on the same processors, and there's probably going to be a way to escalate rights from VM to host or VM to VM. In your environment this may not be an issue, but it's worth keeping in mind.

A virtualization host (0)

Anonymous Coward | more than 2 years ago | (#40174783)

That's for sure... for now!

Remote site workgroup servers (1)

Anonymous Coward | more than 2 years ago | (#40174789)

Yes, having the branch servers (domain/user profile) centralized saves on server hardware and maint, but that gets eaten up the first or second time a subcontractor has to wait 6 hours for a profile to load over a satellite link. I know. I'm that contractor.

Re:Remote site workgroup servers (0)

Anonymous Coward | more than 2 years ago | (#40174955)

It's not virtualization that is the problem, it's the fact that you are working with toy boxes that require a UI.

Floating point Maths (0)

Anonymous Coward | more than 2 years ago | (#40174811)

Any heavy number-crunching will suffer from being virtualized.

Nope (1)

FranTaylor (164577) | more than 2 years ago | (#40174939)

On the contrary, these instructions are run directly by the host processor at full speed.

VMware emulates the rest of the computer, but it's just time-slicing the CPU to the emulator, so pure CPU runs more or less at full speed.

Please note in your virtual machines that the CPU is not virtualized, it is the same make and model as your actual hardware.

It's I/O that suffers with virtualization, because the I/O devices have to be emulated.

Re:Floating point Maths (1)

The1stImmortal (1990110) | more than 2 years ago | (#40175025)

Any heavy number-crunching will suffer from being virtualized.

I've not seen any hard numbers (that I can remember) but given userspace code runs directly on the CPU, with hardware-assisted memory translation, in most modern hypervisors - I'm not sure why this would be the case?

Non x86 HW/Software or Certified systems (2)

thb3 (19142) | more than 2 years ago | (#40174813)

The only workloads that you can't really virtualize tend to be things like OS/400 (though that is where things like LPARs can come in), or workloads that make a lot of privileged calls to the CPU or depend on a specific instruction set. There are also a slew of non-technical reasons I've seen, as in healthcare or pharma, where a specific machine is written into a specification for drug manufacturing or such.

Even still, there really aren't any workloads you can't virtualize and realize some sort of benefit from. Even those with high CPU requirements, or memory hogs, can be virtualized. It's not always the best financial decision given the hardware cost and licenses, but most organizations still benefit from the ability to move the VM from one physical host to another to avoid downtime, and from easier backups. Having worked on a lot of the largest VMware deployments out there, I've yet to find an application we can't virtualize for some gain in performance (newer hardware) or better resiliency (less downtime, easier backups, HA features outside of the OS).

A few types I'd refrain from virtualizing (2, Informative)

Anonymous Coward | more than 2 years ago | (#40174823)

- Telephony and real-time voice/video servers (Asterisk, for example). You don't want to explain to your big boss why his phone lines are having hiccups.
- Real-time servers, like game servers (Minecraft), as they constantly take up resources.
- NTPD servers (Network Time Protocol). Worst case, run it at layer 0 (the host machine), but not in a VM :) Preferable to have an old Pentium running that one somewhere.

If you look at the pattern, these are all real-time services that have very little leeway for latency. Yes, it's possible to virtualize them, but it's more work than it's worth, really.

For me, IMHO, everything else is fair play. Had good luck with Windows Active Directory servers (yes even PDC), SVN, web services, file servers, databases, memcache, and many others.

High Performance Clusters (2)

Tynin (634655) | more than 2 years ago | (#40174825)

Makes little sense to not run on metal when using an HPC. I can understand the benefit of being able to better utilize the hardware you have, as well as the potential lessening of your datacenter footprint in space, cooling, electric, etc., but when you are dependent on having quick (and ever quicker) turnaround times because of business needs, it hasn't been my experience that the cloud makes sense, at least in production environments. Granted, for Dev & QA HPCs, go for it, but not for production.

Do most things (1)

Anonymous Coward | more than 2 years ago | (#40174847)

Most things you can virtualize. Even if you only have the 1 VM on the machine, it does allow for easier migration between physical hardware (especially if Windows based).

Heavy I/O tasks you might not want to; if you do, local/DAS dedicated storage is preferred, and it's quite possible you want to have 1 VM to 1 physical machine. That doesn't save you space, but as stated above it can make things easier for migration between hardware.

Management and monitoring you can virtualize, but do so with local storage, not network based (iSCSI, AoE, NFS). If you must use shared storage, make sure you have 2 on separate arrays.

The backup server is OK to virtualize, but we keep the backups on local storage and tape, not on the shared array. The reason is the same as with the management and monitoring not being on the shared storage array: if it goes down, you lose all machines operating off that array.

Virtualize ALL THE THINGS (1)

brenddie (897982) | more than 2 years ago | (#40174851)

...except BIG backend databases. And keep your DBs separate from any disk groups used for virtualization.

Re:Virtualize ALL THE THINGS (1)

oneiros27 (46144) | more than 2 years ago | (#40175189)

Agreed on the databases ... although I've heard some interesting ideas w/ using database disks for backups of other systems.

Basically, you spread your database across the inner 10% of the disks ... then use the other 90% for your backups of other systems. When the databases aren't at peak, you run the backups.

This way, you spread the database across 10x the number of spindles.

You could probably back up the database itself to the disks, but you'll want some logic to make sure there's more than one disk group, so you don't back up a given partition back to the same disk.
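
The spindle arithmetic behind that, as a small sketch (round hypothetical numbers):

    import math

    # Short-stroking sketch -- hypothetical round numbers.
    SPINDLE_IOPS = 150               # rough figure for one 10k RPM disk
    db_gb, disk_gb = 2000, 1000      # a 2 TB database on 1 TB disks

    dedicated = math.ceil(db_gb / disk_gb)         # packed onto whole disks: 2 spindles
    spread = math.ceil(db_gb / (disk_gb * 0.10))   # inner 10% of each disk: 20 spindles

    print(dedicated * SPINDLE_IOPS)  # ~300 IOPS available to the DB
    print(spread * SPINDLE_IOPS)     # ~3000 IOPS: 10x the spindles, outer 90% for backups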

Time server? (1)

whoever57 (658626) | more than 2 years ago | (#40174871)

I have seen the time on virtual machines hopping around -- even those that are running ntpd.

Re:Time server? (1)

msk (6205) | more than 2 years ago | (#40174983)

VMware? There are settings at the ESX layer to prevent the VMware Tools from fighting with ntpd.

Re:Time server? (0)

Anonymous Coward | more than 2 years ago | (#40175033)

or... ntp.conf: add this line: "tinker panic 0"

Suggestion (1)

Anonymous Coward | more than 2 years ago | (#40174873)

My suggestion, based on past experience, is to check with your third-party vendors and make sure they will continue to support their software in a virtualized environment. I've run into a few who basically said that if their product wasn't running on real hardware then they wouldn't support it any longer. It doesn't happen often, but check just to make sure.

Anything with significant requirements. (1)

Anonymous Coward | more than 2 years ago | (#40174937)

Let's take this one step at a time. What does virtualisation do? Well, it lets you split up one big, beefy system into a large number of smaller, not-quite-so-beefy systems. In the process, I/O becomes essentially random (because while host A may be reading sequentially from its disk, and host B may be reading sequentially from its disk, and host C may be reading sequentially from its disk, that means that the disk - if those virtual disks are coming from the same spindle - is jumping from A to B to C to A to B to C to A to B to C, and performance is going to go through the floor. Remember this story [slashdot.org] ? Yeah, that's exactly what I'm talking about.)

So anything that has high CPU requirements, or high I/O throughput requirements, probably shouldn't be virtualised. Anything with high RAM requirements might be a candidate for virtualisation alongside low-RAM applications, if the CPU and I/O needs are low, as long as the physical machines running the virtual hosts have an imperial buttload (that's the technical term - the metric buttload is smaller) of RAM. Everybody's situation is different; you'll know this better than anybody on slashdot.
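
That interleaving effect is easy to see in a toy model (block addresses are invented for illustration):

    # Three guests each read their own virtual disk sequentially; a shared
    # spindle sees the interleaved stream.
    a = list(range(0, 1000))           # guest A's extent
    b = list(range(100000, 101000))    # guest B's extent
    c = list(range(200000, 201000))    # guest C's extent

    sequential = a + b + c                                        # one reader at a time
    interleaved = [blk for trio in zip(a, b, c) for blk in trio]  # A,B,C,A,B,C,...

    def seeks(stream, threshold=64):   # count jumps larger than a track or so
        return sum(abs(y - x) > threshold for x, y in zip(stream, stream[1:]))

    print(seeks(sequential))    # 2     -- almost pure sequential access
    print(seeks(interleaved))   # 2999  -- nearly every access is a seek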

Anything but sandbox test environments (0)

Anonymous Coward | more than 2 years ago | (#40174951)

Have worked with some of the largest virtual environments in the world for several years.

My personal primary rule is: Don't virtualize anything important that has to have high availability. Virtual environments are great until they blow up spectacularly.

Database. (0)

Anonymous Coward | more than 2 years ago | (#40174961)

ACID-compliant databases require a fast disk subsystem. Don't skimp on those.

That said, we do have an asynchronously replicated hot standby that's a VM, but that's just so we can snapshot the thing and get a simple, slick backup / recovery / DR solution.

Worst case scenario, the fail-over site in Pittsburgh will spin up the VM and at least be running. Maybe not top-notch, but business will continue -- and that's the important part.

Meanwhile, at our primary DC, we get wicked-slick performance on real hardware for our DB's.

I Wish (0)

Anonymous Coward | more than 2 years ago | (#40174969)

I wish I could virtualize our continuous integration build system and get rid of all the machines, but it needs to run unit and performance tests that rely on specific hardware like different types of GPUs, CPUs and peripheral devices.

build/compile server (1)

supermonkeycool (641966) | more than 2 years ago | (#40175005)

Our physical build boxes destroy the VMs in the build farm. Not even close.

Obvious choice (1)

mark-t (151149) | more than 2 years ago | (#40175037)

Food and water.

Heavily threaded things like databases (1)

metoc (224422) | more than 2 years ago | (#40175043)

Most database servers are already doing the same things that virtualization accomplishes. SQL Server 2012, as an example, can support multiple database instances, each with multiple databases, and will use every last resource available, and it will be more efficient than hosting multiple copies of it, each in its own OS instance, in VMware.

Authentication (1)

jonxor (1841382) | more than 2 years ago | (#40175055)

Along with virtualization management, do not virtualize whatever system provides authentication to allow you to manage the VMs (domain controllers in the case of Hyper-V). Other than for hardware requirements (PBX, phone interfaces, anything requiring proprietary hardware), everything else is fair game. The list of hardware that can not be accessed by a VM is getting shorter, as virtualization giants have started to support giving VMs some GPU time on the host, as well as access to USB devices.

Re:Authentication (0)

Anonymous Coward | more than 2 years ago | (#40175125)

Just NEVER join host machines to a domain. Then you don't care if the domain controller is virtualized or not: it always powers on, and nothing keeps a machine running better than running nothing on it (besides the virtualization software, of course).

Depends on the backend systems (0)

Anonymous Coward | more than 2 years ago | (#40175061)

There's very little I would not virtualize; however, you must size your supporting hardware correctly. I have no problem virtualizing large-scale databases, but if I need 8 quad-core processors to support the workload then I have to ask: why virtualize? There are reasons besides consolidation, such as mobility, recoverability, and ease of dynamically expanding and contracting allocated resources as needed. I HIGHLY recommend engaging a professional services company, at least to help with the design.

Network Gear (3, Informative)

GeneralTurgidson (2464452) | more than 2 years ago | (#40175063)

Firewalls, load balancers, anything that is typically hardware with dedicated ASICs. These virtual appliances typically are very taxing to your virtual infrastructure and cost about the same as the physical box. Also, iSeries, but I'm pretty sure I could do it if our admin went on vacation for once :)

gotta have a night light server (5, Informative)

jakedata (585566) | more than 2 years ago | (#40175099)

Imagine coming up from a stone cold shutdown. What would be a super thing to have? How about DNS and DHCP? AD too if that's your thing. Some nice little box that can wake up your LAN in 5 minutes so you can start troubleshooting the boot storm as the rest of your VMs try to start up and all get stuck in the doorway like the Three Stooges.

Exchange (0)

Anonymous Coward | more than 2 years ago | (#40175101)

I've done it on modern hardware. It's WAY too slow, and it gets to the point that backups take longer than 24 hrs when the database grows. Then it gets very disruptive. To cope, I offloaded the mailbox store role to a real machine running a second copy, and left the first one only checking for spam and acting as a client hub that Outlook machines and the outside world connect to.

Heavy Network I/O (1)

zshalla (1270956) | more than 2 years ago | (#40175115)

Acquiring and using sockets is high overhead for the VM and can become a bottleneck. We ran an experiment on virtualizing our cache servers, and with fairly identical specs the real machines had 25-50% better throughput when the machine was under load.
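
If you want to reproduce that kind of comparison, a crude single-stream probe like this is enough to show the gap (run it against a discard-style sink from both the physical box and the VM; the host and port below are placeholders):

    import socket
    import time

    def blast(host, port, total_mb=256, chunk=64 * 1024):
        """Push total_mb of data at host:port and return MB/s achieved."""
        payload = b"x" * chunk
        sent, t0 = 0, time.monotonic()
        with socket.create_connection((host, port)) as s:
            while sent < total_mb * 1024 * 1024:
                s.sendall(payload)
                sent += chunk
        return sent / (time.monotonic() - t0) / 1e6

    # print(blast("cache01.example", 9999))  # compare bare-metal vs. VM numbers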

There are different types of virtualization (1)

Karmashock (2415832) | more than 2 years ago | (#40175119)

If you're going to virtualize something that gets a lot of traffic then it makes sense to scale up the server and environment.

If you're talking about virtualizing an enterprise scale server/server farm then you'll want a solution that is designed to handle that sort of situation.

As some people said, shared disk doesn't make I/O happy. That's a key point which is dealt with in enterprise scale virtualization by spreading the load across many different systems. So the hit of shared load is mitigated by access to multiple systems with redundant information. There are some very cool products that do this sort of thing very well.

But generally it's a bad idea to shoe horn an enterprise system into a limited virtualized environment where performance will suffer.

You don't want to virtualize unless you're consolidating servers. The costs just don't make sense. Where you save the money is when you get three servers to do the work of 10... or 10 to do the work of 100.

Sex partner (1, Funny)

kawabago (551139) | more than 2 years ago | (#40175133)

Virtual sex is just not the same.

HPC (1)

Turmoyl (958221) | more than 2 years ago | (#40175143)

You would hurt things by attempting to virtualize an HPC (High Performance Computing) environment. It's all about raw horsepower, and you sacrifice a chunk of that in a VM environment.

Depends on your expected ROI (4, Informative)

kolbe (320366) | more than 2 years ago | (#40175147)

It depends on the environment and the assets available to the IT department.

As an example:

Assume you have VMWare ESXi 5 running on 3 hosts with a vCenter and a combined pool of say 192GB of RAM, 5TB of disk, 3x1Gbps for NAS/SAN/iSCSI and 3x1Gbps for Data/connectivity.

It would become unwise in such an environment (without funds to expand it) to run any system that causes a bottleneck and thus decreases performance for other systems. These can be:
- Systems with High Disk load such as heavy DB usage or SNMP Traps or Log collection or Backup Storage Servers;
- Systems with High Network usage such as SNMP, Streaming services or E-mail;
- Systems with High RAM usage.

For this example, any of the above utilizing say 15% of your total resources for a single instance server would ultimately become cheaper to run on physical hardware. That is, until your environment can bring that utilization number down to 5% or is warranted/needed/desired for some reason.
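
To make that rule of thumb concrete against the example pool above, a quick check (the 15% cutoff is the rule just stated; the candidate VM's numbers are invented):

    # Resource-share check against the example pool: 192 GB RAM, 5 TB disk,
    # 3 Gbps storage links, 3 Gbps data links.
    POOL = {"ram_gb": 192, "disk_tb": 5.0, "san_gbps": 3, "lan_gbps": 3}

    def biggest_share(vm):
        """Largest fraction of any one pooled resource the VM would claim."""
        return max(vm[res] / POOL[res] for res in vm)

    log_server = {"ram_gb": 16, "disk_tb": 1.2}   # hypothetical log-collection box
    share = biggest_share(log_server)
    print(round(share, 2), "physical" if share > 0.15 else "virtualize")  # 0.24 physical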

In my environment, we have a total of 15 ESXi v5 hosts on Cisco UCS Blades with 1TB of RAM and 30TB of Disk on 10GbE. We do however refrain from deploying:
- Media Streaming servers
- Backup Servers
- SNMP/Log Collection Servers

Hope this helps!

Vm hosts (0)

Anonymous Coward | more than 2 years ago | (#40175149)

Virtual machine hosts. We put a virtual machine in your virtual machine so you could virtualize while you are virtualizing.

Virtualize everything (1)

realmolo (574068) | more than 2 years ago | (#40175161)

The advantages of virtualization are too great to not do it whenever possible.

The only limiting factor, really, is how much money you have to spend on your virtualization infrastructure. VMware's licensing got a little nutty, and SAN storage got really pricey last year.

But it's worth it. Once you have a nice VMWare cluster running, SO many things become easier. And some things that were damn near impossible before become simple.

That said, you probably want to keep at least one domain controller and one DNS server as physical boxes. Makes maintenance of the VMs easier, if you ever have to reboot the whole thing.

virtualize across more than one host (1)

magarity (164372) | more than 2 years ago | (#40175177)

Just don't virtualize everything onto a single host. Have multiple hosts and set the virtualization management to fail over; otherwise losing one server means losing all the servers. Then only make enough VMs so that if one host failed, things would just run annoyingly slowly on the ones picking up the load until the problem is fixed. Of course, don't let the annoyingly slow happen to anything mission critical with tight response requirements, no matter what.
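
That sizing rule is just N+1 capacity planning. A minimal sketch (the 90% "annoyingly slow but alive" ceiling is an assumed figure):

    # N+1 sizing: keep enough headroom that losing one host still fits.
    def max_safe_utilization(hosts, degraded_ceiling=0.90):
        """Highest steady-state per-host utilization such that, after one
        host fails, the survivors stay under the degraded ceiling."""
        return degraded_ceiling * (hosts - 1) / hosts

    for n in (2, 3, 5):
        print(n, round(max_safe_utilization(n), 2))   # 2 -> 0.45, 3 -> 0.6, 5 -> 0.72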
