An Overview of Virtualization Technologies 204
PCM2 writes "Virtualization is all the rage these days. All the major Linux players are getting into the game with support for Xen, while Sun has Solaris Containers, Microsoft has Virtual PC, and VMware arguably leads the whole market with its high end tools. Even AMD and Intel are jumping onto the bandwagon. InfoWorld is running a special report on virtualization that gives an overview of all these options and more. Is it just a trend, or will server virtualization be the way to go in the near future?"
Hmmm (Score:5, Funny)
What happened to the CowboyNeal option?
Re:Hmmm (Score:5, Funny)
OMFG!! Isn't one CowboyNeal enough already?? Do we need an army of virtual CowboyNeals, posting dupes in a Beowulf Cluster of Virtual Slashdots?
Re:Hmmm (Score:2)
Re:Hmmm (Score:2)
Just a trend? NO WAY (Score:5, Interesting)
Of course, as with all abstraction layers, it introduces complexity and takes a toll on performance - but we all know abstraction layers have been piling up since the beginning of time.
Re:Just a trend? NO WAY (Score:5, Insightful)
Re:Just a trend? NO WAY (Score:3, Interesting)
Re:Just a trend? NO WAY (Score:3, Interesting)
All you need to do is insert the Knoppix LiveCD during a Windows session, let autoplay do its thing, and you are given the option of running Knoppix right from Windows. I never tried networking with it, so I don't know how well it does that.
Re:Just a trend? NO WAY (Score:2)
Re:Just a trend? NO WAY (Score:3, Interesting)
Re:Just a trend? NO WAY (Score:4, Insightful)
Re:Just a trend? NO WAY (Score:2)
wrong (Score:5, Interesting)
It's not a question of "competence", there simply is no such thing as a uniformly "better" operating system or application. DOS, for example, is an excellent operating system for some narrow set of applications, and you can hack Mach or Singularity until the cows come home and you're not going to create something better.
I would have preferred a better, from-the-ground-up OS any day. Hurd, or even better, Singularity!
People like you are part of the reason why software sucks so badly: you simply don't understand real-world tradeoffs. People like you design systems like Mach or Windows, systems that try to be everything to everybody; people like you throw in MLOCs of useless features and generalizations and extensibility, and all you are doing is creating bigger and bigger headaches.
Virtualization is doing the right thing: it lets people focus on creating operating systems and server configurations that focus on solving specific problems. Maybe with virtualization, we can finally kill the general purpose operating system.
Re:Just a trend? NO WAY (Score:3, Informative)
Re:Just a trend? NO WAY (Score:3, Insightful)
Re:Just a trend? NO WAY (Score:2)
I know this sounds like a bad thing but think of
Re:Just a trend? NO WAY (Score:5, Interesting)
Re:Just a trend? NO WAY (Score:2)
Re:Just a trend? NO WAY (Score:2)
I always thought that was meeting a Girl(tm).
Re:Just a trend? NO WAY (Score:3, Interesting)
While I do like virtualization, I think it is still in its infancy. We migrated this ASP/VB/C++ legacy web application to virtual machines running on VMWare ESX, and at 4 transactions per second the host server was almost dead. The same application, on the same server but running natively in Windows 2003, can easily do 35 transactions per second. Granted, the application is coded using some archaic COM/RDS model but t
Re:Just a trend? NO WAY (Score:3, Insightful)
Re:Just a trend? NO WAY (Score:2)
I am just a code monkey, so bear with me; I don't know the exact details. We had ESX at a certain patch level, and one of our VM guys moved a database server from one host to another. The server (Win2k3 + SQL 2005) wouldn't boot back up after the move was complete. We contacted VMWare and they told us to apply a set of patches. The SQL servers were dead by then, though, and had to be rebuilt.
Re:Just a trend? NO WAY (Score:2)
Works well for license servers (Score:3, Informative)
Re:Just a trend? NO WAY (Score:3, Interesting)
Re:Just a trend? NO WAY (Score:5, Informative)
If you are using VMWare Server, please keep in mind that best practices say that you should generally NOT RUN SERVICES ON THE HOST ! It is far better to minimize the footprint of the host and create another VM to handle the services instead. There are of course exceptions to this such as when an application needs physical access to hardware that VMware can not supply or emulate, but they are not common.
If this doesn't help you, please check the VMTN forums for help; they have a points system for questions/answers and are generally one of the better free support forums for any commercial product I have ever seen.
Re: (Score:3, Interesting)
Trendy!!!!! (Score:3, Insightful)
Well, yes, if you're a geek who likes to play with a dozen OSs, you'd much rather open a new VM than reboot your machine. But as usual, we're confusing geekworld with the real world. The use of desktop VMs is pretty limited outside geekworld — mostly Mac folks who have one or two Windows apps they can't live without. That doesn't do a lot to explain why so many heavy hitters ar
One vote for VM Ware ESX Server (Score:3, Interesting)
Re:One vote for VM Ware ESX Server (Score:2)
Re:One vote for VM Ware ESX Server (Score:2)
In my book it's one of two things:
1) Virtual networks (including the much improved vlan support in 3.0)
2) Memory page sharing. People argue that solutions like VMWare have X% performance penalty over something like Xen yet when you are building up a cluster for any type of redundancy -- are you going to double the amount of RAM in your hosts just so you can take on extra VM's in a failure?
Of course it's a trend (Score:5, Insightful)
With the growing evidence of the human brain's ability to rewire itself and route around failures on the fly, and the effective virtualisation of perception (why do I appear to see a three dimensional picture of the world when I have only 2 curved arrays of photosensors?) we are probably just following a well trodden evolutionary path.
Re:Of course it's a trend (Score:3, Interesting)
I have been lurking on some VM forums, and the consensus seems to be to avoid blades whenever possible.
Re:Of course it's a trend (Score:2)
By the time you spec individual machines with all the same redundancies you get when you just plug a blade into a chassis, it about balances out if you are doing more than 5-6 machines or so, at least from what I saw when I spec'd our new equipment.
You also have to recognize how easy they are to install and manage. All your network and storage switches are just modules in the chassis; no cables! Particularly with Fibre Channel this is a godsend and a huge time, mon
Re:Of course it's a trend (Score:2)
Yes they are nice when you consider how easy it is to just slide a blade in and go but it doesn't end there. Those modules have limits and as long as you can live within those limits then you will be fine.
* Some blades implementations are limited to 2 NICs. This is less than ideal for VMWare.
* The same goes for FibreChannel connections. Less than ideal for some applications.
* Some chassis th
Re:Of course it's a trend (Score:3, Insightful)
Only because of their initial expense. I do pre-sales technical support for IBM storage, so I asked my server counterpart, and the reason they're priced higher than rack or tower servers is that they cost less to cool. Over a year you get the difference in cost back in your AC bill.
Re:Of course it's a trend (Score:2)
Just because the sales team and the marketing literature say it saves money doesn't mean it really does save money. You may sense a bit of distrust of salespeople, but I have been in this business long enough to know that salespeople are only interested in selling their products/services regardless of the customer's best interests.
Re:Of course it's a trend (Score:2)
It's not a fit for everyone, but there are enough fortune 500s buying blades to convince me that sometimes, they are cheaper. Your datacenter may not need them, but there are those
Re:Of course it's a trend (Score:2)
Evolution has focused very tightly on specialization within a given organism.
Although you could make an argument that evolution virtualizes over time. Body parts and brain areas originally evolved f
Virtual supercomputers for everyone! (Score:5, Funny)
1)Upgrade: simply change a few values in the config and presto! 50Thz processor!
2)No power consumption whatsoever! I even get a net gain as I run a virtual powerplant.
3)No clumsy hardware on my desk. Just type at the virtual keyboard in mid-air! The virtual monitor can project from anywhere. Heck, they even follow you to the bathroom.
4)No virus, malware or spyware threat! All thanks to the virtual virus scanner.
5)Store up to infinite TB data on the UberDVD drive.
6)Comes with free pron, MP3, warez and Movie server. Complete with anti-MPAA and anti-RIAA card.
Soon to be released: The virtual Car(tm). Just hold up your hands like you're holding a steering wheel and make motor sounds to get anywhere in the world in just minutes!
Virtual technology. It's everything you ever dreamed of, and more!
Virtual car? Virtual plane is already done! (Score:2)
The business plan? (Score:3, Insightful)
I think many companies are looking for a way to monetize software by monthly or yearly fees - this can be their way...
No Mention of UML (Score:5, Informative)
It seems that as Xen makes progress, UML is getting ignored.
Re:No Mention of UML (Score:5, Informative)
Re:No Mention of UML (Score:2)
Re:No Mention of UML (Score:2)
A lo
Re:No Mention of UML (Score:2)
And IBM? Where are they? (Score:5, Insightful)
They talk about VMWare, Intel/AMD, the future Solaris on E10000, other things... but where is IBM?
They have been doing virtualization for at least 3 years with their Regatta technology (P670, P690 (POWER4 technology); P530, P550, P560, P570, P575, P590, P595 (POWER5 technology)) and their OS, AIX 5L.
They are able to give a virtual server just a few percent of a CPU; with their Virtual I/O Server, they are also able to virtualize network and disks. They can do workload management between virtual servers, and add/remove disks/CPU/memory in real time.
etc...
So for a complete discussion and overview of virtualization in the industry, IBM is now a big player, and they are now surpassing Solaris & HP in the "closed" Unix world.
So for me this overview is not complete and should not have passed the "draft" stage until someone had looked at the alternatives actually up and running.
L.G.
Re:And IBM? Where are they? (Score:4, Informative)
Since IBM practically invented virtualisation in the '60s for their mainframes (or possibly earlier; I'm not quite that old), I was quite surprised to see it missing from the InfoWorld articles too.
IIRC, VMWare modelled their solution on IBM's implementation. They may have also licensed some of the technology to do it.
IBM == GODS OF VIRTUALIZATION (Score:5, Informative)
Intel and Xen even based their virtualization stuff on old papers from IBM documentation and whitepapers.
You want to know how hardcore IBM is?
THEY INVENTED VIRTUAL MEMORY. And no, I am not talking about a swap file on your hard drive, you Windows weenie. I am talking about the ability every PC has to abstract memory. It's IBM's gift to the PC that made modern computing possible.
You aren't convinced of IBM's monstrous power?
They have it set up so that when you buy an OpenPOWER machine for running Linux you can get an optional firmware hypervisor to manage multiple operating systems. And it's pretty cheap, too: for the same price as a low-end Sun Opteron box you can get a low-end IBM POWER5 box.
But it's not just that... Get this:
If you buy a Xeon CPU on an add-on card you can set up the machine to RUN WINDOWS.
That's right. Run Windows with a fucking x86 CPU on a PCI CARD, sharing the same memory and hard drives as Linux running on POWER5. On the same machine. At the same time. With NO slowdown.
Still not convinced?
How about this, for a show of IBM's utter superiority in this field:
We are running a 2000 era IBM Mainframe with a late 1970's operating system on a 1990's operating system with 1980's era tape drives for legacy reasons.
IT'S A THIRTY-ONE BIT (no, NOT 32 bits. 31 bits.) OPERATING SYSTEM ON A #$%#$% 64-BIT MACHINE. It's not even like going from x86 to x86-64; they are entirely different computer architectures. AND it runs at near bare-hardware speeds. It's incredible. AND we can run Linux next to it. At the same time. And not just one Linux install, but very literally hundreds of them if we felt like it.
It's completely nuts. They got shit that makes Vmware look like Dosbox. Microsoft's 'Virtual Server' isn't even on the radar; it's completely laughable in comparison.
That and it has the worst possible user interface imaginable. Think about the worst thing you've ever seen. Some DOS 2.x nightmare. Now add an OS/2 GUI and make it WORSE. Now imagine it worse than that. Now you're getting close. That and we pay out the ass for the pleasure of using it. OK, now make it slightly worse. That's about right.
Re:IBM == GODS OF VIRTUALIZATION (Score:2)
Re:IBM == GODS OF VIRTUALIZATION (Score:2)
Re:IBM == GODS OF VIRTUALIZATION (Score:3, Informative)
For example, IBM cannot currently migrate a running LPAR. In the next iteration of their technology they say they will be able to do that, but not now.
The lowest priced POWER5 is the p505, which lists for $3,399. The lowest end Sun Opteron is priced at $745. At that baseline price of $3,399 you
Re:IBM == GODS OF VIRTUALIZATION (Score:2)
Re:IBM == GODS OF VIRTUALIZATION (Score:3, Informative)
Re:IBM == GODS OF VIRTUALIZATION (Score:3, Informative)
Comment removed (Score:3, Interesting)
Re:Mainly a cure for bad software (Score:3, Informative)
Re:Mainly a cure for bad software (Score:2)
Even with Unix/Linux, you have dependencies on the kernel and database software.
Re:Mainly a cure for bad software (Score:3, Interesting)
Well, for one, it makes separating X and Y onto different boxes a year down the road pretty well effortless (whether it's for load balancing, hardware upgrades, or whatever).
For another it makes upgrading X possible without having to worry about an impact on Y. Doubly hand
Re:Mainly a cure for bad software (Score:2)
Do I misunderstand, or is there a real advantage in running product X in one VM and product Y in another (or even a second instance of product X)?
Well, yes, it is a nice bandaid for some of the problems of bad software.
However, based on a previous attempt at a physical server consolidation, I can see a big advantage in that you can upgrade/reboot individual applications that live on the same physical machine as other applications without disrupting everything.
Also, it is easier to deploy an OS with spec
Re:Mainly a cure for bad software (Score:4, Insightful)
Security. Modularization. Having one part fall down not take down everything else.
For example, in my setup there are two servers:
* the old one: mysql, postgres, apache
* the new one: Xen
* pound (reverse http proxy)
* postgres
* mysql, apache
* subversion+backups
+ viewvc running as a different user with read-only access to the repositories
* a VM hosted for someone else
When I break the dev apache, the production one stays up. When apache goes down, subversion stays up. When any of my VMs go down, the one hosted for someone else stays; and the other way around.
And when someone pwns anything other than the dom0 (which runs just Xen and ntpd), they take over just that single part.
Sure, I could run everything without virtualisation. But I don't think I have to say why I prefer the way I've chosen.
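For reference, each of those guests boils down to a small domU config file; a minimal Xen 3.x sketch (every name, size, and path here is a made-up placeholder, not my actual layout) looks something like:

```
# /etc/xen/webapp.cfg -- minimal Xen 3.x domU config (placeholder values)
kernel = "/boot/vmlinuz-2.6-xenU"
memory = 128
name   = "webapp"
vif    = [ '' ]
disk   = [ 'phy:/dev/vg0/webapp,xvda,w' ]
root   = "/dev/xvda1 ro"
```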
And you can't claim that Citrix is a good product. Slapping a GUI on a server and "network efficiency" don't belong in the same sentence.
Re:Mainly a cure for bad software (Score:2)
That will sneak into your machine naming as well: instead of big-frikking-server-doing-it-all, you get fs-01, fs-02, fs-03, distributed by, say, DFS.
Other examples are database clustering, DNS, firewalls and webservers.
Now, this is not a revolutionary thing; bigger networks have always worked this way, but with virtualization, the bigger network can better scale to th
Re:Mainly a cure for bad software (Score:2)
For example, I had lots of headaches tuning a mailserver running Postfix+Cyrus+LDAP, plus an Apache+PHP+MySQL+IMP webmail. We started with 3000+ users, and everything was OK until we reached 8000... then all sorts of performance issues appeared, and we could only understand what was going wrong when we isolated the services on separate machines.
A virtual machine is a nice way to do t
Re:Mainly a cure for bad software (Score:2)
1) What do you do when you have to take it down or have a hardware problem? All that stuff stops all at once. With VMs (depending on your solution) you can move services to other machines either live (while they are still running) or at least schedule the move during normal downtim
Re:Mainly a cure for bad software (Score:2, Insightful)
I remember this kind of argument from Mac devotees in the pre-OS X days when the Mac didn't have real protected memory, and still used cooperative multitasking. People would say that pre-emptive multitasking was just a crutch, that cooperative multitasking was cleaner and potentially more efficient, and that "good" programs would consistent
Re:Mainly a cure for bad software (Score:2, Insightful)
Remote recovery... (Score:2)
We need to ask M/s Microsoft, Intel, AMD, Sun etc. (Score:5, Insightful)
Microsoft does not seem to like virtualisation.. hell, they didn't like Terminal Services.. so they crippled it in NT4, made extra licensing restrictions with Win2K, and made the WinXP / MetaFrame XP combination a non-starter. In Microsoft's world, users must only license MS's servers and everything needs a separate server
Now that the virtualisation market has grown IN SPITE OF the apathy of these s/w vendors... and the tremendous mindshare of Open Source technologies, these old chaps are trying to make money without doing anything themselves.. witness the recent MS licensing options in virtual segments, acquisition of IP, Intel's hypervisor efforts, AMD's efforts, etc.
If virtualisation succeeds, it could spell the end for DRM and Treacherous Computing initiatives... since these need collective collusion by all parties involved. Looks like the firms mentioned will try their damnedest to sidetrack virtualisation.. just like terminal services and thin clients never reached their full potential. Open Source firms and nerdy sysadmins might well have the last laugh...
Re:We need to ask M/s Microsoft, Intel, AMD, Sun e (Score:2)
Isn't that the case with python, perl/parrot, java, ksh, tcl, etc? Any kind of virtual machine will have to have its own DRM, if DRM is to work at all.
Re:We need to ask M/s Microsoft, Intel, AMD, Sun e (Score:2)
Don't forget Linux Vserver (Score:5, Informative)
a quote (Score:5, Interesting)
Now guess who said that, and when. :-)
Robert P. Goldberg said that, in 1974.
The fun thing about this is, it's still a very accurate statement. Unlike in 1974, though, it doesn't solely apply to mainframes but, as someone wrote in an earlier post, to everyday computers: desktop systems. I think that's great, and the above quote is more true than ever. Working on Mac OS X and having a Parallels session up and running where some Java application (for example) is tested in a Windows or whatever environment... lovely.
Yes, I'm a virtualisation enthusiast, if you haven't guessed so already. ;-)
Way too long of a FA, and not exactly accurate. (Score:3, Interesting)
From TFA:
>> Use the "dd" command to copy the boot drive from another server to a local file, point Xen at that file, and boot
>> the VM (virtual machine). Who needs consultants?
Apparently, the author does, and they have not been reading the Xen devel or user's mailing lists.
File-backed virtual block devices can be very problematic for high-volume services and applications such as MySQL, Apache and others. Most of us really using Xen on deployments that 'matter' have switched to SANs, using either LVM or real partitions.
Think about how long it takes to create a 3 GB loop device, then copy over the contents over a 10 or 100 meg switch (as you'd find on a hobbyist's desktop).
Migration only takes a few seconds once that's done
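For what it's worth, creating the image itself can be nearly free if you make the file sparse; it's the copy over the wire that hurts. A sketch, with placeholder hostnames and paths:

```shell
# Create a sparse 3 GB image instantly: count=0 with seek just extends the
# file, so no blocks are allocated until the guest writes to them
# (the path is a placeholder).
dd if=/dev/zero of=/tmp/guest1.img bs=1M count=0 seek=3072
ls -ls /tmp/guest1.img   # logical size is 3 GB, actual disk usage near zero

# The slow part is still filling it with a real boot disk over the network:
#   dd if=/dev/sda bs=1M | ssh xenhost 'dd of=/vm/guest1.img bs=1M'
# which is one more reason production setups prefer 'phy:' LVM volumes on a
# SAN over 'file:' loopback images.
```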
If you want to write information on hot topics to draw readers and slashvertise it, great - go for it. Just be sure it's accurate.
They also barely touched on what is so magic about running 32 bit guest kernels inside of a 64 bit host, the new Xen credit scheduler, and other really cool things going on with Xen.
If you're going to present yourself as an authority, please present fact, and all of the facts. Please don't set up something like Xen (which many people are working very, very hard on: HP, IBM, Novell, Red Hat to name a few) to just disappoint new users. Nobody would say "Wow, that article must have been wrong"; they'll say "Wow, Xen is too hard to get working like that article said". Be careful what you capitalize on to sell a few ad clicks.
Re:Way too long of a FA, and not exactly accurate. (Score:4, Informative)
An interesting way to accomplish file-based fast migration is to NFS-mount an area on the target server, then use md (in the virtual machine) to place a mirror there. Then you have no need for the lengthy copy; you already have a synced-up online copy there.
Not saying it's good, just saying it works (and a useful alternative if you don't have better shared storage).
Re:Way too long of a FA, and not exactly accurate. (Score:3, Informative)
You can't mount bsd slices as a loop device. You need a utility like lomount. Here's a copy [netkinetics.net] if you read the article and want to play with Xen/NetBSD. Compiles easily with gcc.
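On plain Linux (non-BSD-labelled) images you can often get by with mount's offset= option instead; the arithmetic lomount automates is just sectors-to-bytes. The start sector and image path below are placeholders:

```shell
# Classic DOS-partitioned images start the first partition at sector 63;
# mount(8) wants a byte offset, so multiply by the 512-byte sector size.
START_SECTOR=63
OFFSET=$((START_SECTOR * 512))
echo "$OFFSET"   # prints 32256

# Then, as root (the image path is a placeholder):
#   mount -o loop,offset=$OFFSET /vm/guest1.img /mnt
```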
Just another example of how you can frustrate people with misinformation and give the topic of your article a bad rep, when it was really a lack of research on your part.
Cheers
Re:Way too long of a FA, and not exactly accurate. (Score:2)
>> To be honest, I haven't read this article. The comments about it in Slashdot have been very informative, and I
>> don't feel the need!
That's sort of like farting in an elevator and taking credit for it on the spot. While some may quietly chuckle to themselves and admire your bravery, publicly they are compelled to bitch-slap you.
>> The question for me: Is it better to launch a thousand techies enthusiastically at a new technology, or 500 of
>> them with mis-givings? The a
Rumours (Score:2)
Re:Rumours (Score:2)
The application requirements will simply say "Requires MacOS 10.5 with Virtualization to Run"
Development won't stop; obviously there are already people programming for Macs. But what about potential Mac software? Say you like Program X, and would like it to integrate seamlessly with yo
Re:Rumours (Score:2)
If that were true, it would more likely be the Mac's downfall. Why would developers (that is, developers who aren't already developing ON a Mac) port or support their applications for MacOS if Macs can run Windows software?
This depends upon a number of factors. Is the VM environment installed by default? Does it have the same look and feel as OS X or Windows? Is it fast? Does it run graphics at nearly full speed or greatly slowed? What is the market share of the mac after a couple of years of this techno
Consolidate Costs . (Score:5, Informative)
I did this for a company with over 2000 Unix servers, and the averages were: only 20% of the hosts would use more than 30% of the CPU.
It's a known fact that for most projects the hardware is oversized relative to what's really needed, and this is one of the main advantages of virtualization: it is seen as a cost-reduction process.
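Numbers like those are easy to reproduce if you have been logging per-host CPU averages; a throwaway sketch (the survey file and its contents are fabricated for illustration):

```shell
# One "hostname avg_cpu_pct" line per server (fabricated sample data):
cat > /tmp/cpu_survey.txt <<'EOF'
web01 12
web02 27
db01 65
db02 31
EOF

# Hosts above 30% average CPU are worth keeping on dedicated hardware;
# everything below that is a consolidation candidate:
awk '$2 > 30 { busy++ } END { printf "%d of %d hosts above 30%% CPU\n", busy, NR }' /tmp/cpu_survey.txt
# prints: 2 of 4 hosts above 30% CPU
```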
Re:Consolidate Costs . (Score:3, Interesting)
At the
Where is the real info? (Score:4, Insightful)
Thanks to other technologies I've run similar systems for ages. It is entirely common for me to develop a file system driver while keeping Mac OS X, Windows, Linux, and DOS running on the same system. I've done this for a long time as well. The difference is that the operating systems would be virtualized by running system emulators instead of using CPU technologies for system segmentation. I did this in the old days under DOS using Quarterdeck DESQview and a CPU emulator.
The first thing that people really need to understand at this point is that virtualization as we're using it today is little more than finding a method to launch operating systems as "processes" under another operating system. This is not magic; for the most part it's something that any operating system developer should be capable of. The issue is more one of grinding. It takes the right kind of people to sit and grind through each of the problems that come up with running like this. It's the same idea as writing a Windows-compatible API stack: you start off with simple programs you have the source for and work your way up through more complex applications that require direct hardware access. It's a matter of intercepting the calls and handling them as if you were the real thing.
So here's the deal. As a system level developer, I am more interested in what these guys are actually doing in order to make it happen. Let's face it, although Intel and AMD are adding virtualization technologies to their processors, the actual task of switching between CPU contexts is hardly an issue. The real issue is how are they handling hardware emulation.
See, to me, I focus on high performance workstation related tasks. Servers are cool and great, but in reality, it's how it performs on the desktop that is truly important to me. What I want to see is that a vendor grinds a little more on this issue.
VMWare has classically written device drivers to handle hardware interfacing with better performance than others. So instead of simply emulating the VESA BIOS extensions and providing access to an SDL style frame buffer, instead they have written drivers to allow graphics acceleration. So what I really want to see is that they take it a step further....
I want more than just accelerated BitBlt functions. Of course in the 2D desktop world, high performance frame buffer moves are not optional but required since the bus bandwidth required to copy large frame buffers all around is outrageous. But in the days where OS X uses OpenGL and Windows Vista uses DirectX, I want drivers that interpret 3D contexts as well.
So here's what I'm thinking... write a 3D driver for Windows, Mac OS X, X. The driver should of course offer frame buffer handling, but this shouldn't be the focus since it isn't used for much more than boot and text mode processing. When an OpenGL context is created, instead of creating the context native to the virtual machine, the context should occur on the host operating system and should be managed there. The only interpretation should occur when the graphics driver informs the guest operating system of the top-level context.
For DirectX, well, I've seen at least one virtual driver in the past which implemented DirectX on OpenGL. For professional graphics, DirectX is typically seen as a toy although in reality in many ways it's more powerful than OpenGL (don't argue; it has to do with what's more important to hardware vendors, so their drivers are optimized for game-based testing). So, since most professional graphics packages are OpenGL-based, the virtualization software vendor should simply implement a translation layer ov
Is this a trick question? (Score:3, Insightful)
But there's also an opposing trend (Score:2, Interesting)
System architecture is changing in a profound way that will somewhat limit the commoditization on which virtualization depends. It's not just a matter any more of CPUs doing calculation and ordering up random disk accesses. RAM speeds, memory bus speeds, interprocessor pipeline speeds -- that stuff all matters a lot now. This is most evident in data warehousing/analytics, where data warehouse appliances (Netezza, DATallegro) and even memory-centric technologies (SAP, Applix) are becoming more important,
It will continue, and next step is (Score:2)
VMWare and the cool recovery options it provides (Score:3, Interesting)
They weren't running one of our replicated setups, so we were expecting to spend the next week rebuilding the server and configuring our software.
Instead, they grabbed the most recent backup of the VMWare image and booted it up on a completely different server over 100 miles away.
End result?
About a day's lost data and an hour of down time. (The backup was already at the remote site)
I've been pushing for VMWare usage in our test environment to reduce our hardware needs and time spent restoring Ghost images, but a few managers are still dubious, and are afraid we might "miss some hardware issues" if we go that route.
The other end of virtualisation (Score:3, Informative)
not just a workaround (Score:4, Insightful)
What virtualization really is, is a long-overdue standardization of a set of APIs that exist in many operating systems but remain hidden. By finally exposing them, we gain functionality that didn't exist previously.
Hardware/OS? (Score:2)
Re:Hardware/OS? (Score:2)
Re:Hardware/OS? (Score:2)
Re:Hardware/OS? (Score:2)
Which has nothing at all to do with the Solaris limitation. Even commodity virtualization technology such as VMWare Workstation allows you to have different versions of the OS (and mix different OSes) in the same host. The Solaris containers cannot do this. Even the nascent Xen technology has the ability to run different OSes under one host.
Cost Effective (Score:2)
Just the next step in the evolutionary process (Score:3, Interesting)
Don't forget that this is just the first or second generation of this technology; in future we are likely to see multiple operating systems on one machine become much more commonplace, and as operating systems start to be built with this in mind, increased inter-OS communication in the same way that we have inter-process communication now.
Also worth noting is that we're moving away from the model of ramping up the clock speed on CPUs and moving towards a model of increasing the number of processing cores (dual-core CPUs and SMP), and smart high-speed switched buses (e.g. PCI express, 1/10/100GBps switched ethernet) - I believe that the computers of 10 to 20 years from now will be highly parallel, modular, hot-pluggable sets of processors and buses that will be able to intelligently allocate and partition resources between OSes and apps, and we will see a break away from the strict two-tier OS/program model and move more towards a much more flexible model with multiple levels of abstraction.
No notice of IBM Virtualization on pSeries? (Score:2, Informative)
Re:No notice of IBM Virtualization on pSeries? (Score:2)
The pSeries virtualization on the 590s and 570s is pretty amazing. You can dynamically add/remove processors and memory, and it has (most importantly) some very good monitoring tools. But it's expensive for the power you get.
In smaller shops needing Linux, Xen could be a VMWare killer. I'm impressed with the level of functionality in the current releases. Building out a new machine takes a few minutes.
cheaper, too (Score:3, Informative)
The only downside is that my basement server runs Debian and OpenHosting runs Fedora. But nobody's perfect.
What about... (Score:2)
Virtualization isn't the answer (Score:3, Interesting)
For these purposes, chroot is a better fit.
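A minimal jail along those lines can be sketched as follows; the jail path is a placeholder, and the chroot call itself needs root:

```shell
# Build a throwaway chroot jail for a single binary (paths are placeholders):
JAIL=/tmp/jail
mkdir -p "$JAIL/bin" "$JAIL/lib" "$JAIL/lib64"
cp /bin/sh "$JAIL/bin/"
# ldd /bin/sh              # lists the shared libraries to copy into $JAIL/lib*
# chroot "$JAIL" /bin/sh   # (as root) the shell now sees $JAIL as "/"
```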
I've often wanted an equivalent for Windows, where I could run an application with a virtual registry so that it didn't muck things up, or so that it thought it had full access to the C:\WINDOWS folder. Instead, I have to use virtualization software, which requires 2 gigs of space, causes a 2:1 speed reduction, and cuts my available memory in half.
Even better yet, would be decent installers and applications that follow the rules.