Virtualization

Ask Slashdot: What Type of Asset Would You Not Virtualize? 464

An anonymous reader writes "With IT and data center consolidation seemingly happening everywhere, our small shop is about to receive a corporate mandate to follow suit, preferably via virtualization. I've had success virtualizing low-load web servers and other assets, but the larger project does intimidate me a little. So I'm wondering: are there server types, applications, and/or assets that I should be hesitant to virtualize today? Are there drawbacks that get glossed over in the rush to consolidate all assets?"
  • Busy databases (Score:5, Insightful)

    by DataDiddler ( 1994180 ) on Thursday May 31, 2012 @08:04PM (#40174625)
    Shared disk does not make I/O happy.
    • by houstonbofh ( 602064 ) on Thursday May 31, 2012 @08:10PM (#40174679)
      Also, don't put the VM "control station" on a VM. You would think I wouldn't have to say this, but you would be wrong.
      • Re: (Score:2, Informative)

        by Anonymous Coward

        If you refer to the VMware "vCenter" VM, you are wrong.
        Virtualizing it gives you many advantages, the same ones you get from virtualizing any server. Decoupling it from the hardware, redundancy, etc.

        Why would you NOT virtualize it?

        Just make sure you disable the features that would move it around automatically, so you actually know which host it's supposed to be running on.

        • by spazdor ( 902907 ) on Thursday May 31, 2012 @08:26PM (#40174853)

          'cause if you knock it offline by accident, your easiest tool with which to bring it back online is gone?

          Kind of like how it's a bad idea to mess with a host's eth0 settings if you're currently logged in via ssh through eth0.

          • Re:Busy databases (Score:5, Informative)

            by NFN_NLN ( 633283 ) on Thursday May 31, 2012 @08:38PM (#40174975)

            'cause if you knock it offline by accident, your easiest tool with which to bring it back online is gone?

            Kind of like how it's a bad idea to mess with a host's eth0 settings if you're currently logged in via ssh through eth0.

            In Oracle VM Server for x86 and VMware vSphere (and probably most other virtualization platforms), the VMs run on hosts independently of the management platform, i.e. vCenter for vSphere.

            vCenter is not considered critical for the operation of VMs. If vCenter dies, your VMs will continue to run without interruption. You simply lose the ability to use advanced features such as vMotion, Storage vMotion, DRS, HA and vDS. However, you can still log into an ESXi host and start up another instance of vCenter. This is no different from what you would do if a physical machine hosting vCenter died.

            As far as I know, the upcoming highly available version of VMware vCenter (Heartbeat), which runs two instances of vCenter together, is ONLY available in VM form; I don't know of a physical deployment option for vCenter Heartbeat (but I could be wrong).

            • We're running vCenter on a pair of physical (non-VM) servers with Heartbeat. Heartbeat is a huge pain to get working and apparently pretty much requires Windows Active Directory and MS SQL (we would have preferred Oracle since we already had that in place, but our VMware support folks couldn't get the combination of vCenter, Heartbeat and Oracle working together).

          • Re:Busy databases (Score:4, Informative)

            by Burning1 ( 204959 ) on Thursday May 31, 2012 @08:48PM (#40175053) Homepage

            This isn't really a problem. First, if you have a reasonably sized infrastructure, it makes sense to build a redundant vCenter instance... and IIRC, it may be clustered. Second, if you kill your vCenter instance, you can still connect directly to your ESXi hosts using the vSphere client. You'll still retain the ability to manage network connections and disks, access the console, etc.

          • Re:Busy databases (Score:4, Insightful)

            by Electricity Likes Me ( 1098643 ) on Thursday May 31, 2012 @08:54PM (#40175105)

            You usually know you shouldn't mess with eth0 in that situation...but you do it anyway.

          • For situations where changes might cause loss of easy SSH access, it's not difficult to preserve the original networking config, fire up a screen session, issue a networking config restore/restart command followed by a sleep of $seconds, make your changes, restart networking, and terminate the screen session if everything worked out okay. I'm sure you thought of that, though. Right? I'm also sure you have out-of-band console access to servers that matter to you, just in case something goes wrong. Right?
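
            A rough Python sketch of that deadman-switch idea (hypothetical Debian-style config path and restart command; meant to run inside the screen session so it survives a dropped connection):

              import shutil, subprocess, threading

              ORIG = "/etc/network/interfaces"               # hypothetical config path
              BACKUP = "/etc/network/interfaces.known-good"
              ROLLBACK_AFTER = 300                           # seconds until auto-revert

              def rollback():
                  # Restore the known-good config and bounce networking.
                  shutil.copy2(BACKUP, ORIG)
                  subprocess.run(["systemctl", "restart", "networking"], check=False)

              shutil.copy2(ORIG, BACKUP)                     # save a known-good copy first
              timer = threading.Timer(ROLLBACK_AFTER, rollback)
              timer.start()
              # ...apply and test the new settings here; if the box is still reachable...
              timer.cancel()                                 # commit: skip the automatic revert

            Same idea as the sleep-then-restore trick above, just with the revert handled by a timer you cancel once you've confirmed you can still get in.
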
          • Re:Busy databases (Score:5, Informative)

            by mysidia ( 191772 ) on Thursday May 31, 2012 @09:36PM (#40175485)

            'cause if you knock it offline by accident, your easiest tool with which to bring it back online is gone?

            However, your vCenter is much more likely to fail hard if you place it on a separate physical server.
            Physical servers fail all the time. By virtualizing vCenter, you protect it with HA: if one of your VM hosts fails, you can have two hosts standing by to start the VM.

            You can also use HA to protect the SQL server that vCenter requires to work, and you can establish DRS rules to indicate which host(s) you prefer those two to live on.

            If some operator erroneously powers this off, or mucks it up, there is still an easily available tool: the VPX client (the vSphere Client), which can be used to connect directly to a host and start the VM.

            You can also have a separate physical host with the vSphere CLI installed, to allow you to power on a VM that way. It does make sense to have perhaps 2 cheap physical servers to run such things, which can also double as DNS or AD servers, to avoid circular dependency problems with DNS or authentication; these would also be the servers your backup tools should be installed on, to facilitate recovery of VMs (including the vCenter VM) from backup.

            That's fine and valid, but vCenter itself should be clustered. Unless you are paying through the nose for vCenter Heartbeat, running it as a VM is the best common supported practice for accomplishing that.

        • For some reason you guys have reminded me of all the calls I used to get where people had moved their Windows swap file to a ramdisk and couldn't figure out why their systems destabilized and spit up digital hairballs when the machine was under a heavy workload.
        • Comment removed based on user account deletion
      • Re:Busy databases (Score:4, Informative)

        by vanyel ( 28049 ) * on Thursday May 31, 2012 @09:07PM (#40175229) Journal

        A corollary to this is to make sure you have a local physical nameserver configured on all of your systems. Basically, go through a cold-start power-up sequence and figure out what you need, in what order, to get things back online. Testing the resulting procedure would be a good idea too ;-)

      • Re:Busy databases (Score:5, Informative)

        by mysidia ( 191772 ) on Thursday May 31, 2012 @09:27PM (#40175383)

        It's just fine to do that. However, a few things are important:

        (1) You need at least 1 point of access to the environment at all times -- e.g. you need a "backup way in" that works and gives you full control, even if for some reason no VMs are running (worst case).

        (2) You need to ensure there are no circular dependencies -- if all VMs are down, your environment must still be sufficiently operational for you to correct the issue. An example of a circular dependency would be virtualizing the VPN server/firewall that is required to gain access to your ESXi hosts; yeah, it's secure from an integrity standpoint, but what about secure from an availability standpoint, and secure from a disaster recovery standpoint? (A small sketch of how to check for this kind of loop follows the list below.)

        (3) If you have virtualized your active directory servers, you should ensure you have a backup plan that will allow you to authenticate to all your hypervisors, virtualization management, and network infrastructure/out of band management, EVEN if AD is crippled and offline.

        (4) If you have virtualized DNS servers, you should have at least 2 DNS servers that are unlikely to fail at the same time, which means eliminating as many common failure modes as possible:

        (a) Redundant DNS servers should not be on the same server. Ideally you would have two sites, with separate virtualization clusters, separate storage, and 2 redundant DNS servers at each site.
        (b) Redundant DNS servers should not be stored on the same SAN, or the same local storage medium.
        (c) If your Hypervisor requires a DNS lookup to gain access to the SAN, you should have a backup plan to get your environment back up when all DNS servers are down. Each virtualized DNS server should be stored either on separate DAS storage, or on shared storage that can be accessed even when DNS is unavailable.
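
        As a toy illustration of the circular-dependency point in (2): a quick check over a hand-maintained "what needs what to cold start" map can catch these loops before a power event does (service names below are made up):

          # Hypothetical map: service -> services it needs in order to come up cold.
          DEPS = {
              "esxi-mgmt-access": ["vpn-gateway"],
              "vpn-gateway":      ["dns"],
              "dns":              ["san"],
              "san":              ["dns"],    # SAN mount needs a DNS lookup: a loop
          }

          def find_cycle(deps):
              """Return one dependency cycle as a list of service names, or None."""
              def visit(node, path):
                  if node in path:
                      return path[path.index(node):] + [node]
                  for dep in deps.get(node, []):
                      cycle = visit(dep, path + [node])
                      if cycle:
                          return cycle
                  return None
              for start in deps:
                  cycle = visit(start, [])
                  if cycle:
                      return cycle
              return None

          print(find_cycle(DEPS))    # ['dns', 'san', 'dns'] -- fix it before a cold start finds it for you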

        • by vlm ( 69642 )

          An example of a circular dependency would be that you have virtualized a VPN server/firewall required to gain access to your ESXi hosts; yeah, it's secure from an integrity standpoint, but what about secure from an availability standpoint, and secure from a disaster recovery standpoint?

          Simpler example: all your virtualization hosts get their addresses from a DHCP server with hard-coded MAC reservations... and the DHCP server is virtualized. I almost accidentally did this one.

      • Re:Busy databases (Score:4, Interesting)

        by houstonbofh ( 602064 ) on Thursday May 31, 2012 @10:41PM (#40175857)
        I posted this and everyone went "vCenter!" But it could also be vSphere (which was the case), or the Xen management tools, or the KVM management tools... In other words, don't make it easy to lock your keys in the car...
    • Re: (Score:2, Informative)

      by Anonymous Coward

      In the enterprise, aren't most busy DB servers using storage on the SAN, which is exactly where the data would be if the server were virtualized?

      • by spazdor ( 902907 )

        Very likely, and this does mitigate things.

        If the physical host has a lot of VMs using a lot of LUNs on the SAN, then there may still be contention for bandwidth on the Fibre Channel card. Luckily this does not come with the massive overhead associated with contention for bandwidth on a local disk drive, but it's still a potential bottleneck to be wary of.

      • Re: (Score:3, Informative)

        by NFN_NLN ( 633283 )

        In enterprise, aren't most busy DB servers using storage on the SAN, which would be exactly the same place where it would be if the server was virtualized?

        In an enterprise environment all VMs (of any type) should be coming from external storage either SAN (FC, iSCSI) or NAS (NFS). Storage, Network and Hosts are usually separated into layers with full redundancy. No single point of failure should exist. Even if a host needs to be taken down for maintenance or upgrades etc the VM is migrated live to another host without interruption. Because the data is external it is accessible to all hosts and the hosts can be treated as a commodity item and swapped in/ou

      • I/O is not a valid argument against virtualizing something. Proper architecture will allow you to control your I/O and guarantee the required disk I/O regardless of whether your workload is virtualized or physical.

        The problem is when you virtualize without a thought toward optimizing the hardware to ensure that you don't cause problems for yourself later on down the road. Now that said, I don't virtualize database servers in prod (I do in dev/test - but that is different), however this has nothing
        • As long as you think there's no difference between terrorists and hippies, I'm not interested in anything else you say.

    • by NFN_NLN ( 633283 )

      Shared disk does not make I/O happy.

      This was addressed at the VMworld 2011 conference. VMware is only limited by the amount of hardware you throw at it, just like any other x86 platform: Achieving a Million I/O Operations per Second from a Single VMware vSphere 5.0 Host
      http://www.vmware.com/resources/techresources/10211 [vmware.com]

      You can go with IBM/Power or Oracle/SPARC if you have exceptionally large systems, but if you're coming from x86 applications there are few CPU, memory, or I/O limitations that can't be resolved. The only limitations for

      • by dave562 ( 969951 )

        On the other hand, you end up spending a lot of money for the perceived benefits of virtualization (hardware abstraction, portability, etc).

        We virtualized SQL Server 2008 R2 and ended up going back to Microsoft clustering. With clustering we still get HA but do not have to pay for VMware licenses. On VMware we were dedicating entire hosts to a single guest due to the high RAM utilization. In addition we were also taking the virtualization hit on the resource level by abstracting out disk and CPU access.

        • That's a value proposition. Which costs more: the up-front cost of virtualization, or the loss of business during downtime and the cost of emergency hardware migrations?

          Clustering is a great solution, but most things that can be solved with clustering are probably not solved by virtualization. They're two different solutions for different kinds of reliability risks.

      • by vux984 ( 928602 )

        The only limitations for x86 virtualization are proprietary cards...

        And license dongles. Some work. Some don't. Worst is when they work "sometimes".

        VMWare is only limited by the amount of hardware you throw at it just like any other x86 platform...

        Consolidating multiple low load servers ... say 9 physical low load servers onto 3 virtual hosts, there's tremendous value there. If one of the hosts goes down, you can even run 4/5 on the remaining two while you fix it... the 3 virtual hosts are cheaper than the

        • License dongle issues should be punted back onto the vendor of the software in question (repeatedly). It may not work the first time, but enough admins and their bosses raising hell with support and sales would hopefully push them to make their garbage compatible with ESXi, Xen, etcetera. USB pass-through compatibility is trivial and works for every consumer device using USB 1.1 and 2.0 standards. If they are giving you parallel or serial port dongles, then there are bigger problems with how the vendor d

    • Re:Busy databases (Score:5, Interesting)

      by Maskirovka ( 255712 ) on Thursday May 31, 2012 @08:25PM (#40174845)

      Shared disk does not make I/O happy.

      PCIe SSDs are advertised to deliver 100,000 to 500,000 IOPS. Has anyone experimented with PCI Express-based SSD solutions in their VM hosts to keep high-IO VMs like VDI and SQL from swamping their SAN infrastructure?

      http://www.oczenterprise.com/interfaces/pci-express.html [oczenterprise.com]

      • The argument against PCIe is that if anything on that particular server fries, all of your data is now trapped inside a dead server; there's no way to fail over quickly. If you care about high availability you probably want to look into a SAN flash appliance, i.e. Pure Storage [purestorage.com]. (shameless plug)
    • Re:Busy databases (Score:5, Informative)

      by batkiwi ( 137781 ) on Thursday May 31, 2012 @08:37PM (#40174965)

      Virtualisation != shared disk IO.

      If you're serious about virtualisation it's backed by a SAN anyways, which will get you many more IOPS than hitting a local disk ever would.

      We virtualise almost everything now without issue by planning for zero contention. Our VM hosts are 40-core (4-socket) machines with 256GB RAM. Need an 8-core VM with 64GB RAM to run SQL Server? Fine.

      We save BUCKETS on power, hardware, and licensing (Windows Server 2008 R2 datacenter licensing is per socket of the physical host) by virtualising.
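
      The per-socket Datacenter point is easy to sanity-check. A toy breakeven calculation, with assumed and purely illustrative prices (plug in your actual quotes):

        # Assumed, illustrative license prices -- substitute real quotes.
        DATACENTER_PER_SOCKET = 3000.0    # per-socket Datacenter edition
        STANDARD_PER_GUEST = 1000.0       # per-server Standard edition

        def breakeven_guests(hosts, sockets_per_host):
            """Guest count beyond which licensing hosts per socket beats licensing each guest."""
            return hosts * sockets_per_host * DATACENTER_PER_SOCKET / STANDARD_PER_GUEST

        # Three 4-socket hosts: past roughly this many Windows guests,
        # per-socket licensing of the hosts is the cheaper option.
        print(breakeven_guests(hosts=3, sockets_per_host=4))    # 36.0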

      • by ppanon ( 16583 )

        Database I/Os tend to use random access patterns much more than other I/O workloads, such that latency is frequently the key performance factor, not throughput. SANs can give you lots of IOPS when most of those IOPS are substantially sequential and can take advantage of a large stripe block being read into a cache from multiple drives at once. However if each IO requires a head seek, a SAN's performance won't be substantially better than local disk and, when a DB's IO requests are queued with other VM IO re

        • Re:Busy databases (Score:5, Insightful)

          by batkiwi ( 137781 ) on Thursday May 31, 2012 @09:29PM (#40175401)

          One of the systems I manage has a 1.3TB MS SQL Server database. It absolutely flies.

          The same SAN also hosts a few 8-10TB oracle databases with no issues.

          What idiot shares spindle sets on a VM DB setup? OS goes to the shared pool, each DB gets its own set of LUNs depending on performance needs. This isn't rocket surgery.

          • Re:Busy databases (Score:4, Interesting)

            by swb ( 14022 ) on Thursday May 31, 2012 @10:06PM (#40175657)

            At most places you don't get to buy SAN often enough, or large enough, to have the luxury of allocating a dozen spindles to three RAID-10 sets. Eventually management's store-everything, spend-nothing philosophy causes those dedicated RAID-10 sets to get merged and restriped into the larger storage pool, with the (vain) hope that more overall spindles in the shared pool will make the suck less sucky.

            In that kind of situation, it's not always crazy to spec a big box with a dozen of its own spindles for something performance-oriented, because you can't be forced to merge it back into the pool when the pool gets full.

            • Re:Busy databases (Score:4, Interesting)

              by Cylix ( 55374 ) on Thursday May 31, 2012 @11:00PM (#40175975) Homepage Journal

              That is more of an issue of accounting.

              At a shop I used to work at, we broke out the cost per spindle, and that purchase had to be paid for by the org that needed the resources. Absolutely everyone and their mother wanted completely horrendous amounts of resources for every two-bit project. However, since we were in the business of actually ensuring there was a return on the investment, we had to enforce resource allocation.

              This translated to a few interesting things. Projects had to be approved and had to be assigned a budget. Resources were fairly well managed, and projected utilization was also fed into the overall purchase. We could not actually purchase new hardware without funds, and you can be damn sure we weren't dipping into our org funds to pay for someone else's project. If the PHB had enough power to say "do it", he also had the power to allocate resources.

              Probably the only thing that made any of this actually work was that budgets were real. I.e., this is the cash you get to fund your project... don't waste it all on licensing or else you get no hardware. (I also said a few things.) Head count was also tied to this resource allocation. We had man-hours to apply to a given amount of staff, and the only way to get more help was to ensure budgets were enforced. We were pretty good about ensuring budgets were enforced, down to even the lowliest tech, because overspending could very well end that lowly guy's spot on the food chain. (Being consciously aware of this helps to turn your most ineffective resource into the most effective!)

              Now, I had one moronic group under-spec and overspend their budget. I had to knock on some doors, but I effectively managed to get them donated hardware from groups that had way over-killed on budget planning. They were grateful, and I brought costs down by not putting more junk on the DC floor. However, I sometimes think I should have let survival of the fittest win out there.

    • by mysidia ( 191772 )

      Shared disk does not make I/O happy.

      It is just FINE to virtualize them, and shared disk is not an issue in any way whatsoever. However, your consolidation effort will likely not be successful if done haphazardly; correct sizing, design, and capacity planning are important, and choosing good hardware is important -- not just "buying whatever server is cheapest and seems to have enough memory and a large enough amount of disk space" -- because there are a LOT of details that matter, especially regarding CPU, CPU chip

      • by NFN_NLN ( 633283 )


        Don't use RAID5 for busy databases. Don't dare use RAID6 for any database.
        Don't use SATA drives for busy databases.
        In fact, if you are serious about consolidation,
        don't use RAID5 or RAID6 period.

        Treat the storage like a black box. Come up with an IOPS, bandwidth, and response times and ensure the metrics are hit.

        IBM sells an XIV composed entirely of off-the-shelf components on SATA (http://opensystemsguy.wordpress.com/). EMC has dynamically allocated tiering called FAST (http://www.emc.com/about/glossary/fast.htm) over SSD/SAS/SATA. NetApp has cache acceleration but uses RAID-6. They all produce systems that meet enterprise-level loads while violating those rules.

        Let the vendor size out a system that

        • by mysidia ( 191772 )

          Let the vendor size out a system that meets your requirements (metrics) without putting stipulations like above in the RFQ. They only cause problems and often eliminate viable options.

          Treat the storage like a black box. Come up with an IOPS, bandwidth, and response times and ensure the metrics are hit.

          Spoken like a true salesdroid; "Please ignore the man behind the curtain," just trust our performance claims, even when they defy what physics and math say the average performance should be. The vendor w

  • by PPH ( 736903 ) on Thursday May 31, 2012 @08:05PM (#40174633)
    Virtualize management.
  • by amicusNYCL ( 1538833 ) on Thursday May 31, 2012 @08:06PM (#40174645)

    Assets not to virtualize:

    1) Women
    2) Beer
    3) Profit

  • by alen ( 225700 ) on Thursday May 31, 2012 @08:07PM (#40174655)

    IBM even says to give it its own physical machine if you're going to virtualize it.

    • That is because each box is running a Java container requiring a terabyte of RAM to render some HTML output.

  • The company cafeteria isn't all that great, but jeez nothing is less satisfying than a virtual burger and virtual fries.

  • Gold, houses, aircraft carriers, 17th century Dutch paintings.

    • by Rob Riggs ( 6418 )
      I don't mind virtualized 17th century Dutch paintings, but my 10" tablet screen doesn't do justice to The Night Watch.
  • by danaris ( 525051 ) <danaris@NosPaM.mac.com> on Thursday May 31, 2012 @08:09PM (#40174671) Homepage

    How about backups?

    Consolidating and virtualizing your backup servers sounds like a recipe for trouble to me.

    Dan Aris

    • by dave562 ( 969951 )

      We run Netbackup and we virtualized the master node, but the media servers are still physical.

    • I run a large backup environment with Tivoli on IBM pSeries. We carve the pSeries up into multiple LPARs which write to a physical library which is logically carved up. I have to separate the backup environments due to regulatory issues and virtualizing both the backup servers and the library makes things much easier for me. I can set up between 4-8 LPARs per virtual library, and given the horsepower of the pSeries that I'm using, I don't have a ton of physical servers to manage.

  • Sure (Score:5, Funny)

    by FranTaylor ( 164577 ) on Thursday May 31, 2012 @08:11PM (#40174693)

    I would not virtualize the servers that are running the virtual machines.

    • Re: (Score:3, Interesting)

      VMware ESXi is actually a supported guest for VMware Workstation...

      Whilst that may sound crazy, it makes system design, testing, and generally skilling up a lot easier.

  • by codepunk ( 167897 ) on Thursday May 31, 2012 @08:17PM (#40174753)

    Well yes, databases would make a poor virtualization target. Also your heavy-memory-usage Java app, like the company app server using a terabyte of RAM to display the department wiki.

  • Seems like a front end serving statically cached content is a great match for virtualization. DB servers and search servers (Solr, etc.) aren't a good match imho, unless you have a very well implemented sharding/horizontal scaling solution. If you pre-generate your content, we've typically used hard boxes for those as well, but you may benefit from virtualizing those if you want to easily scale horizontally (assuming you can absorb the hypervisor overhead).

  • by Anonymous Coward on Thursday May 31, 2012 @08:18PM (#40174771)

    Don't virtualize anything requiring tight scheduling or a reliable clock, such as a software PBX system performing transcoding or conferencing.

    http://www.vmware.com/files/pdf/Timekeeping-In-VirtualMachines.pdf

    • Re: (Score:3, Interesting)

      by Zocalo ( 252965 )
      +1 This. The situation is a lot better than it used to be, but VM software tends to have a problem keeping an accurate clock, and that can bite you in some interesting ways, such as:
      • Pretty much anything to do with authentication; that means Active Directory, Kerberos and LDAP for sure, and depending on your setup might also include backends hosted in those systems such as AD/LDAP integrated DHCP/DNS if you have short TTLs.
      • NTP servers. Just don't go there. It should be obvious why, given the statem
    • by tconnors ( 91126 )

      Don't virtualize anything requiring tight scheduling or a reliable clock, such as a software PBX system performing transcoding or conferencing.

      Pffft. We're running Cisco's VoIP stuff on one of their Cisco UCS chassis here. Not a problem, and entirely supported.

      NTP hasn't been a problem for years, so long as you read and understand the VMware document and have some reasonable knowledge of NTP (unfortunately, more knowledge than the people packaging ntp for Red Hat have is required).

      Don't *ever* fallback to loca

      • by tconnors ( 91126 ) on Thursday May 31, 2012 @11:03PM (#40175993) Homepage Journal

        Get Nagios to monitor each VM, and each host (meh, not so important -- only for ESXi host logfiles), compared to the NTP server(s), and compare the server against a smattering of external hosts (perhaps including your country's official time standard if you're a government organisation). We're monitoring 250 VMs here, and none of them have ever been more than 0.1 seconds out.

        I forgot to say: monitor against external NTP providers (asking Networks to punch holes through firewalls for your monitoring host(s) appropriately) even (especially) when using super expensive GPS clocks as your stratum 1 source. You have to remember that GPS receivers are manufactured by cheapest-bidder incompetent fools who don't even understand how TAI-UTC works, hence why they're lobbying to abolish UTC. Symmetricom, I'm looking at you. Good thing leap seconds are updated in the ephemerides at UTC=0, so on the east coast of Australia they are applied erroneously 3 months early, when optical telescopes aren't observing the sky.
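
        For anyone who wants a similar check without a full Nagios setup, a minimal offset probe is only a few lines (assumes the third-party ntplib package and made-up hostnames; the 0.1 second threshold matches the worst case quoted above):

          import ntplib    # third-party: pip install ntplib

          SERVERS = ["ntp1.example.internal", "0.pool.ntp.org"]    # made-up internal + external
          WARN = 0.1    # seconds

          def check_offsets(servers=SERVERS, warn=WARN):
              client = ntplib.NTPClient()
              for host in servers:
                  try:
                      resp = client.request(host, version=3, timeout=5)
                  except (ntplib.NTPException, OSError) as exc:
                      print(f"CRITICAL: {host}: no response ({exc})")
                      continue
                  state = "WARNING" if abs(resp.offset) > warn else "OK"
                  print(f"{state}: {host}: offset {resp.offset:+.4f}s, stratum {resp.stratum}")

          if __name__ == "__main__":
              check_offsets()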

  • Pretty much anything can be successfully virtualized if you throw enough hardware at the host. Just keep in mind that these machines are all actually running on the same processors, and there's probably going to be a way to escalate rights from VM to host or VM to VM. In your environment this may not be an issue, but it's worth keeping in mind.

  • The only workloads that you can't really virtualize tend to be things like OS/400 (though that is where things like LPARs can come in), or workloads that make a lot of privileged calls to the CPU or depend on a specific instruction set. Also, there are a slew of non-technical reasons I've seen, as in healthcare or pharma, where a specific machine is written into a specification for drug manufacturing or such.

    Even still there really aren't any workloads you can't virtualize and realize some sort of benefit from. Even those

  • by Anonymous Coward

    - Telephony and real-time voice/video servers (Asterisk, for example). You don't want to explain to your big boss why his phone lines are having hiccups.
    - Real-time servers, like game servers (Minecraft), as they constantly take up resources.
    - NTPD servers (Network Time Protocol). Worst case, run it at layer 0 (the host machine), but not in a VM :) Preferably have an old Pentium running that one somewhere.

    If you look at the pattern, these are all real-time services that have very little leeway for latency. Yes, it's

  • by Tynin ( 634655 ) on Thursday May 31, 2012 @08:23PM (#40174825)
    It makes little sense not to run on metal when using an HPC. I can understand the benefit of being able to better utilize the hardware you have, as well as the potential lessening of your datacenter footprint in space, cooling, electricity, etc., but when you are dependent on having quick (and ever quicker) turnaround times because of business needs, it hasn't been my experience that the cloud makes sense, at least in production environments. Granted, for dev & QA HPCs, go for it, but not for production.
  • except BIG backend databases and keep your DBs separate from any disk groups used for virtualization

    • Agreed on the databases... although I've heard some interesting ideas about using database disks for backups of other systems.

      Basically, you spread your database across the inner 10% of the disks ... then use the other 90% for your backups of other systems. When the databases aren't at peak, you run the backups.

      This way, you spread the database across 10x the number of spindles.

      You could probably back up the database itself to the disks, but you'll want some logic to make sure there's more than one disk grou

  • I have seen the time on virtual machines hopping around -- even those that are running ntpd.
  • Food and water.
  • Most database servers are already doing the same things that virtualization accomplishes. SQL Server 2012, as an example, can support multiple database instances, each with multiple databases, and will use every last resource available -- and it will be more efficient than hosting multiple copies of itself, each in its own OS instance, in VMware.

  • Network Gear (Score:4, Informative)

    by GeneralTurgidson ( 2464452 ) on Thursday May 31, 2012 @08:49PM (#40175063)
    Firewalls, load balancers, anything that is typically hardware with dedicated ASICs. These virtual appliances typically are very taxing to your virtual infrastructure and cost about the same as the physical box. Also, iSeries, but I'm pretty sure I could do it if our admin went on vacation for once :)
    • by gethoht ( 757871 )
      While networking in general represents one of the last things to be widely virtualized, it also represents one of the next big jumps in virtualization. Juniper, Xsigo, Nicira and a whole host of companies would beg to differ with your conclusion. In fact they're betting large sums of money that you're wrong.
  • by jakedata ( 585566 ) on Thursday May 31, 2012 @08:53PM (#40175099)

    Imagine coming up from a stone cold shutdown. What would be a super thing to have? How about DNS and DHCP? AD too if that's your thing. Some nice little box that can wake up your LAN in 5 minutes so you can start troubleshooting the boot storm as the rest of your VMs try to start up and all get stuck in the doorway like the Three Stooges.

  • If you're going to virtualize something that gets a lot of traffic then it makes sense to scale up the server and environment.

    If you're talking about virtualizing an enterprise scale server/server farm then you'll want a solution that is designed to handle that sort of situation.

    As some people said, shared disk doesn't make I/O happy. That's a key point which is dealt with in enterprise scale virtualization by spreading the load across many different systems. So the hit of shared load is mitigated by access

  • Sex partner (Score:2, Funny)

    by kawabago ( 551139 )
    Virtual sex is just not the same.
  • by kolbe ( 320366 ) on Thursday May 31, 2012 @08:58PM (#40175147) Homepage

    It depends on the environment and the assets available to the IT department.

    As an example:

    Assume you have VMWare ESXi 5 running on 3 hosts with a vCenter and a combined pool of say 192GB of RAM, 5TB of disk, 3x1Gbps for NAS/SAN/iSCSI and 3x1Gbps for Data/connectivity.

    It would be unwise in such an environment (without funds to expand it) to run any system that causes a bottleneck and thus decreases performance for other systems. This can be:
    - Systems with High Disk load such as heavy DB usage or SNMP Traps or Log collection or Backup Storage Servers;
    - Systems with High Network usage such as SNMP, Streaming services or E-mail;
    - Systems with High RAM usage.

    For this example, any of the above utilizing, say, 15% of your total resources for a single-instance server would ultimately be cheaper to run on physical hardware. That is, until your environment grows enough to bring that utilization number down to around 5%, or virtualizing it is warranted/needed/desired for some other reason. (A toy calculation of this rule of thumb follows at the end of this comment.)

    In my environment, we have a total of 15 ESXi v5 hosts on Cisco UCS blades with 1TB of RAM and 30TB of disk on 10GbE. We do, however, refrain from deploying:
    - Media Streaming servers
    - Backup Servers
    - SNMP/Log Collection Servers

    Hope this helps!
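
    A toy version of that 15% rule of thumb against the example pool above (assumed threshold and made-up workload numbers; it just flags the single worst resource):

      # Example pool from above: 192 GB RAM, 5 TB disk, 3 Gbps of storage links.
      POOL = {"ram_gb": 192, "disk_tb": 5.0, "storage_gbps": 3.0}
      THRESHOLD = 0.15    # single-VM share beyond which physical may be cheaper

      def worst_share(vm, pool):
          """Largest fraction of any one pool resource this VM would consume."""
          return max(vm[k] / pool[k] for k in vm)

      # Hypothetical log-collection server: 24 GB RAM, 1.2 TB disk, 0.8 Gbps storage traffic.
      vm = {"ram_gb": 24, "disk_tb": 1.2, "storage_gbps": 0.8}
      share = worst_share(vm, POOL)
      print(f"worst-case share: {share:.0%}",
            "-> consider physical" if share > THRESHOLD else "-> fine to virtualize")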

  • The advantages of virtualization are too great to not do it whenever possible.

    The only limiting factor, really, is how much money you have to spend on your virtualization infrastructure. VMware's licensing got a little nutty, and SAN storage got really pricey last year.

    But it's worth it. Once you have a nice VMWare cluster running, SO many things become easier. And some things that were damn near impossible before become simple.

    That said, you probably want to keep at least one domain controller and one DN

  • Just don't virtualize everything onto a single host. Have multiple hosts and set the virtualization management to fail over; otherwise losing one server means losing all the servers. Then only make enough VMs that if one host failed, things would just run slightly, annoyingly slow on the one picking up the load until the problem is fixed. Of course, don't let the annoyingly slow happen to anything mission-critical with tight response requirements, no matter what. (A quick headroom check along these lines is sketched below.)
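
    A quick headroom check along those lines, as promised (made-up host and VM sizes; RAM only, but the same arithmetic applies to CPU and disk):

      # Can the surviving hosts absorb the load if one host dies?
      HOSTS = 3
      HOST_RAM_GB = 256
      VM_RAM_GB = [64, 48, 48, 32, 32, 24, 16, 16, 16, 8]    # RAM allocated per VM

      demand = sum(VM_RAM_GB)
      survivors = (HOSTS - 1) * HOST_RAM_GB
      verdict = "OK" if demand <= survivors else "oversubscribed on failover"
      print(f"{demand} GB of VMs vs {survivors} GB on {HOSTS - 1} surviving hosts: {verdict}")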

  • My 45 cents.... (Score:5, Informative)

    by bruciferofbrm ( 717584 ) on Thursday May 31, 2012 @09:07PM (#40175219) Homepage

    You can actually virtualize a whole lot of things. The real key is to put a lot of money into the virtualization hosts: CPUs/cores, RAM, and a really good storage system.

    For the small budget, you can get by on a lot less.

    I have virtualized several entire development stacks (app servers, DB servers, storage servers, reporting servers). {But you trade a bit of performance for a reduced physical server count (10 or 15 to 1? A pretty good tradeoff if done right)}
    You CAN virtualize SQL servers. Most business DB servers at the small-shop end are fairly light load (like finance systems) and virtualize well. {But if performance makes you money (i.e., you have a SaaS product), then stay physical.}
    You CAN virtualize an entire Cognos stack (it is made up of many servers, depending on how complex your environment is). {However, IBM is correct that a heavy BI workload environment deserves physical servers. I run over 18,000 reports a day on my production stack. Not going to virtualize that any time soon.}
    You CAN virtualize entire profit generating websites. {As long as you keep an eye on host CPU and perceived performance at the end user level}
    You can virtualize a lot of this in relatively small environments.

    But.. everyone here who has said it is correct: DISK IO is a major contention point. If you stuff all of your virtual machines inside one single giant datastore (the VMware term for a single large disk partition) and expect super fast performance 100% of the time, then you will be greatly disappointed. One of my own stacks would grind to intolerable performance levels whenever someone restored a database to the virtualized DB server. We moved that DB server virtual machine's disk load onto dedicated drives while leaving the machine itself virtualized, and all those problems went away.

    Do not virtualize anything that requires multiple CPUs (cores) to operate efficiently. SQL Server is an example of something that works better with multiple CPUs at its beck and call. In virtualization, though, getting all the CPU requests to line up into one available window bogs the whole virtual machine down (just the VM, not the host). If your workload can't survive on a single virtual CPU, or two at most (single core each), then you are best to keep it on a physical server.

    Time-sensitive systems and high-compute workload processes are also ideally left out of virtualization. Except.. if you can put a beefy enough box under them, then you might get away with it and not notice a performance impact.

    The biggest mistake made when going into virtualization (besides not planning for your DISK IO needs) seems to be over-provisioning, i.e. putting too many virtual machines on a host. This is a dark trap if you are lucky enough to have the money to build out a high-availability virtualization cluster. You spread your load across your nodes in the cluster. Then one day one goes offline and that workload moves to another node. If you only have two nodes, and one is already oversubscribed, suddenly the surviving node is in way over its head and everything suffers until you get in and start throttling non-essential workloads down.

    So, what do you not virtualize? Anything where performance is critical to its acceptance and success. Anything where a performance drop can cost you money or customers. (Remember that internal users are also customers.)

    Plan ahead A LOT. If you feel like you're not going in the right direction, pay for a consultant to come in and help design the solution, even if it is only for a few hours. (No, I am not a consultant. Not for hire.)

  • Clock stuff (Score:5, Informative)

    by Charliemopps ( 1157495 ) on Thursday May 31, 2012 @09:07PM (#40175227)
    I work for a telco and we've virtualized a lot of stuff, including telephone switches. We ran into a lot of problems with things that required a reliable clock/syncing: 2-way video, voice, television, etc... Using virtualized services from a virtual machine doesn't seem to work very well either, so if you're using SaaS or other cloud-based stuff and your users are on virtual machines, you run into all sorts of weird transient issues. Lastly, it may seem silly, but if your company is anything like mine, you have a lot of people that have built their own little tools using Microsoft Access. We didn't realize just how many MS Access tools were out there and how important they were to some departments until we put them on virtual machines and found out that the MS Jet database is a file-system-based database and definitely does not work well with virtualization... so badly in fact that in many cases it corrupted the database beyond repair. We even tried moving some of the more important ones over to Oracle backends (because frankly, if they're important they should've been all along), but even MS Access's front end couldn't deal with the virtualization... even though it was an improvement. We finally ended up having all these micro-projects where we had to put all these custom-made tools, many of them made by people that didn't know what they were doing and didn't even work for us anymore, into real DBs with real web-based frontends. Reverse engineering some of those was a nightmare. It's a can of worms we likely wouldn't have opened if we'd had the foresight. I'd personally like to see MS Access treated just like Visual Studio, in that only IS is allowed access to it... but I don't make the rules so... whatever.
  • At my 250k-employee company with a bajillion servers and workstations, virtualization is mostly a work-around for the ancient and technophobic company policy of separating servers by the individual application they run. All of my department's server-side stuff (except the database) can easily be run on one box with one active/active failover box in a different location. This is how it was demo'd, benchmarked, vetted, and tested. Corporate audit went ape shit that we had an APPLICATION SERVER that was al
  • by Alex Belits ( 437 ) * on Thursday May 31, 2012 @09:11PM (#40175267) Homepage

    Virtualization is a stone-age technology, useful for crippling hostile environments. This is why "cloud" providers love it, and developers use it for testing. Incompetent sysadmins use it in the hope that they can revert their mistakes by switching to pre-fuckup images, having this strange fantasy that this shit is going to fly in a production environment.

    If you REALLY need separate hosts for separate applications in a production environment (which you almost certainly don't, given a package manager and a usable backup system), there is host partitioning -- VServer, OpenVZ, LXC-based environments, all the way down to schroot-based chroot environments.

  • by bertok ( 226922 ) on Thursday May 31, 2012 @09:12PM (#40175279)

    There are a few "workloads" that just don't like to be virtualized for one reason or another.

    Active Directory Domain Controllers -- these work better now under hypervisors, but older versions had a wonderful error message when starting up from a snapshot rollback: "Recovered using an unsupported method", which then resulted in a dead DC that would refuse to do anything until it was restored from a "supported" backup, usually tape. I used to put big "DO NOT SNAPSHOT" warnings on DCs, or even block snapshots with ACLs.

    Time-rollback -- Some applications just can't take even a small roll-back in the system clock very well. These have problems when moved from host-to-host with a technology like vMotion. It's usually a coding error, where the system clock is used to "order" transactions, instead of using an incrementing transaction ID counter.

    DRM -- lots of apps use hardware-integrated copy protection, like dongles. Some of them can be "made to work" using hardware pass-through, but then you lose the biggest virtualization benefit of being able to migrate VMs around during the day without outages.

    Network Latency -- this is a "last but certainly not least". Some badly written applications are very sensitive to network latency because of the use of excessively chatty protocols. Examples are TFTP, reading SQL data row-by-row using cursors, or badly designed RPC protocols that enumerate big lists one item at a time over the wire. Most hypervisors add something like 20-100 microseconds for every packet, and then there are the overheads of context switches, cache thrashing, etc... You can end up with the application performance plummeting, despite faster CPUs and more RAM. I had one case where an application went from taking about an hour to run a report to something like five hours. The arithmetic is simple: Every microsecond of additional latency adds a full second of time per million network round-trips. I've seen applications that do ten billion round-trips to complete a process!
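
    That round-trip arithmetic, spelled out as a toy calculation (illustrative numbers only):

      def added_runtime_seconds(round_trips, extra_latency_us):
          """Extra wall-clock time from added per-round-trip latency."""
          return round_trips * extra_latency_us / 1_000_000

      # 1 million round trips x 1 extra microsecond each = 1 extra second.
      print(added_runtime_seconds(1_000_000, 1))             # 1.0

      # A chatty report doing 50 million round trips, each costing an extra
      # 80 microseconds under the hypervisor, picks up over an hour of runtime.
      print(added_runtime_seconds(50_000_000, 80) / 3600)    # ~1.1 hours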

  • by account_deleted ( 4530225 ) on Thursday May 31, 2012 @09:16PM (#40175305)
    Comment removed based on user account deletion
  • Is the Antichrist. Or the uncle-christ. Or something. Anyway it's damned slow.

  • by __aawavt7683 ( 72055 ) on Thursday May 31, 2012 @10:37PM (#40175835) Journal

    The whole article is worded as though written by an advertiser. This is nothing but Slashdot Market Research. Either it will be a hit business article, "What Not to Try and Virtualize, Straight from the Engineers" or research into how segments of the industry can convince you to virtualize that anyway.

    Must be nice: buy one website and you end up with a corralled group of wise and experienced IT gurus. Then slaughter them like sheep. This post was nothing but market research. Move along.