OS Virtualization Interview

ScuttleMonkey posted more than 8 years ago | from the new-distro-smell dept.

VirtualizationBuff writes "KernelTrap has a fascinating interview with Andrey Savochkin, the lead developer of the OpenVZ server virtualization project. In the interview Savochkin goes into great detail about how virtualization works, and why OpenVZ outshines the competition, comparing it to VServer, Xen and User Mode Linux. Regarding virtualization, Savochkin describes it as the next big step, 'comparable with the step between single-user and multi-user systems.' Savochkin is now focused on getting OpenVZ merged into the mainline Linux kernel."

184 comments

I'm not convinced... (2, Interesting)

SGrunt (742980) | more than 8 years ago | (#15154330)

...that virtualisation is going to be that much of a Big Thing(tm). Those that will get the most use out of it will be the would-be dual/tri/mega-booters, and, let's face it, compared to the number of computer users in the world - heck, to the number of people that know roughly what virtualisation is - that number is going to be quite small.

Re:I'm not convinced... (3, Insightful)

jgold03 (811521) | more than 8 years ago | (#15154368)

Well, isn't Linux used mostly for server operations? Virtualization also adds a layer of safety and security between guest OSes and the processor they share.

Re:I'm not convinced... (0)

Anonymous Coward | more than 8 years ago | (#15154481)

well isn't Linux used mostly for server operations?

No.

Or, in more detail: as measured in dollars of revenue, perhaps the biggest segment for Linux is servers, since that's what vendors sell -- but in units and in amount of work done, workstations are a far bigger market.

For example, here we have paid Red Hat subscriptions on the servers and free copies on the workstations -- but the workstations outnumber the servers a dozen to one.

Re:I'm not convinced... (1, Interesting)

Anonymous Coward | more than 8 years ago | (#15154502)

Parent wrote: "as measured in dollars of revenue, perhaps the biggest segment for Linux is servers, since that's what vendors sell -- but in units and in amount of work done, workstations are a far bigger market."

Indeed, dollars of revenue is a uniquely poor way of measuring the success of software. The best analogy I've heard is market research on breathable gases. Any market research company would happily conclude that tobacco smoke is a far more desirable breathable substance than air. Just look at the revenue numbers:

   Cigarettes - $48.7 billion in 1997
   Cigars     - $ 0.9 billion in 1997
   Fresh Air  - $ 0.0 billion in 1997

So the obvious conclusion is that if you're a business, the revenue figures show that industry best practice is to breathe smoke.

Absurd, yes; but it seems that's how most corporations pick their databases and operating systems.

Re:I'm not convinced... (1)

timeOday (582209) | more than 8 years ago | (#15154562)

I don't see why such a layer is necessary, or what it will ultimately provide. The OS is supposed to protect users and apps from each other! If virtualization becomes widespread, it will have to take on more and more of the roles of an OS until it *is* an OS. For instance, an OS has a bunch of logic (a scheduler) to grant processes "fair" access to the CPU. With virtualization, you need another scheduler to schedule among the schedulers!!

Re:I'm not convinced... (1)

BrainInAJar (584756) | more than 8 years ago | (#15154685)

Exactly: something to sit above the kernel, or "supervisor"... something like a "hypervisor", which is exactly what Xen's marketing department wants us to call the Xen kernel.

Re:I'm not convinced... (2, Interesting)

kesuki (321456) | more than 8 years ago | (#15154849)

Actually, the virtualization software or the 'host OS' itself handles the scheduling. In server farms the virtualization software quite often runs 'bare metal' (i.e. the system boots straight into the virtualization software, which then loads any images etc.), but most geeks run it on top of a full-fledged OS, where the software can rely on the built-in schedulers etc. I have noticed that certain devices (sound cards, for example) don't always play nicely with being shared, but others (LAN cards) handle it very transparently. There is room for improvement in sound cards; sadly, there seems to be little motivation to innovate. Style over substance seems to be the name of the game, although in this case that means 'sounds clearer' winning out over actually being able to process multiple simultaneous audio effects.

Well, there is the Audigy 2 X-Fi series, which on paper is a dramatic improvement, but is 8 simultaneous real-time sound events fast enough? I just kind of wonder, because in the games I play (online), most people use hotkeys to toggle sound effects anyway.

Besides which, I'm not even sure the X-Fi cards would even work properly with virtualization software. But I can't think of another card with as much technical capability for generating sound effects, although I'm not that familiar with the $1000+ range of products on the market.

Yep... (0, Flamebait)

msauve (701917) | more than 8 years ago | (#15154621)

and if someone really wants to slow their servers down by running multiple ones on one processor, they can just buy a bunch of $20 486s on eBay instead, and get better reliability (a hardware failure only takes down one server instead of many).

What's the difference between having a virus/worm/rootkit/zombie infection on a virtual server vs. a real one? You still need to rebuild/restore to recover.

I suppose it's useful for an individual who wants to run multiple OSes, and easily/quickly switch between them, but that's a very small Slashdot/geek thing (which is of course why the article appears here).

Re:Yep... (4, Informative)

Anonymous Coward | more than 8 years ago | (#15154745)

A virtual server can be restored in seconds, no rebuild required. A virtual server can be moved to another host server in seconds without ever shutting down. A virtual server has a common hardware configuration and can be moved to another host with completely different physical hardware in seconds without shutting down (you can mix Dell and HP servers, for example, and switch between them on the fly).

Not every virtual server needs dual Xeon processors and 8GB of memory, but a bunch of virtual servers can run on such a machine and share load as required. If one of those virtual machines needs a little extra oomph for some biweekly processing, it can grab more resources, or the other virtual servers can be moved off to another physical host with more power, without ever shutting it off [1].

Redundancy in the virtualization world requires two physical host servers, each able to carry the load of all the virtual servers, plus a shared disk area (SAN, iSCSI). To have that level of redundancy in the plain, non-virtual world, each server would have to have a second physical server for backup, and unless you were clustering, you would not have the ability to move your processes over to the backup without some type of interruption if one of them suddenly failed, as in your example.
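The "moved in seconds without shutting down" part is live migration. A minimal sketch of what that looks like with Xen's xm tool (host and domain names here are hypothetical, and both hosts must share the VM's storage):

  xm migrate --live webvm dest-host    # stream the running domain's memory across
  ssh dest-host xm list                # webvm is now running on dest-host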

Virtualization has many advantages in the enterprise and the ability to recover from a virus in your example is one small part of the whole package.

[1] Host servers can share memory between virtual servers -- not just divide the total memory, but share identical memory between machines as well. A very simple example: if you open sol.exe on one of the virtual servers, you will not take up any more total memory on the host machine by opening sol.exe on another virtual server on that same host. This works great when you have quite a few instances of the same OS being virtualized on one host. You could run 10 plain vanilla virtual copies of Windows Server 2003, and the total memory taken up on the host would be less than 1.5 times that of a single running copy of the OS, not 10x a single virtual. That example of 10 exact copies is not likely in real life, but the common memory is shared, which can add up to a significant amount of total memory savings.
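A back-of-the-envelope check on that 1.5x figure (all numbers below are hypothetical, chosen only to match the claim):

  GUESTS=10; PER_GUEST=400; PRIVATE_PCT=5             # 400 MB guests, ~95% of pages shareable
  PRIVATE=$(( PER_GUEST * PRIVATE_PCT / 100 ))        # 20 MB unique per guest
  TOTAL=$(( PER_GUEST + (GUESTS - 1) * PRIVATE ))     # one full copy + nine deltas
  echo "naive: $(( GUESTS * PER_GUEST )) MB, shared: $TOTAL MB"
  # -> naive: 4000 MB, shared: 580 MB (about 1.45x a single guest)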

Don't let your lack of insight or knowledge of the capabilities of virtualization get in the way of your opinions ;)

Re:Yep... (1)

alfarid (932950) | more than 8 years ago | (#15154927)

I would like to point out that energy efficiency goes down, and in our times of energy wars, mc*c is the most dreaded thing. So virtualisation is meant to become the energy-compliant thing. But I'm with you on the Slashdot/geek thing... only until Oracle and IBM acquire some market players. Hmm... maybe they should buy VMware :)

Re:I'm not convinced... (1)

Sqwubbsy (723014) | more than 8 years ago | (#15154395)

Do you have any specialized servers that don't warrant their own full DL360, or whatever your low-end rackmount is? Do you have multiple processes that, while constant and ongoing (think mail routing), don't require ALL of a system's resources?
The key to virtualization is that you don't have to reboot, but can run multiple OSes/processes side by side. This is good for testing and deployment, for one, and for largely autonomous server processes, for another.

Just because you don't have a use for the tech doesn't mean it's worthless.

Re:I'm not convinced... (5, Insightful)

Abcd1234 (188840) | more than 8 years ago | (#15154396)

Uhh... these products aren't aimed at your desktop box. They're for use in server farms, where virtualization provides an additional measure of security, along with providing the server operator more flexibility in how their hardware is utilized.

Indeed! (1, Offtopic)

babbling (952366) | more than 8 years ago | (#15154852)

That's true, but come on, it's going to be pretty fun to play with on desktop machines, too, isn't it? Imagine all the tricks you can play on computer-illiterate friends/family. One second it's Windows, the next it's MacOSX, then 10 seconds later it's Linux! Heads may explode.

Re:I'm not convinced... (1, Insightful)

Spy der Mann (805235) | more than 8 years ago | (#15154855)

Uhh... these products aren't aimed at your desktop box. They're for use in server farms, where virtualization provides an additional measure of security

If Windows apps (or groups of apps) were virtualized, we could use ActiveX webpages without having to worry about spyware. Just close the virtualization window and it's gone.

The same for e-mail: restrict write access to the mail files only, and virtualize every process spawned from the e-mail client. If something screws up, the most you lose is your e-mail, and no viruses or infections would be produced.

And what to say of websites? Virus^H^H^H^H^Hfree game installations would be only temporary (or perhaps session-based? Hmmm, interesting) and you wouldn't have to worry about becoming part of a botnet.

So yes, virtualization for Windows would be awesome.

Re:I'm not convinced... (2, Insightful)

billcopc (196330) | more than 8 years ago | (#15154925)

That's brilliant: instead of actually expecting secure software, let's just use a 40-pound sledge to drive a nail. Virtualization means running a nested kernel; I don't feel like booting a sub-OS every time I want to check mail or open a browser. It's far more efficient to just write the app properly.

I guess the true question is: which solution is more likely to get attention? Whiz-bang virtualization will probably win, since it seems very few people in this world have the patience and discipline to write respectable code anymore.

Re:I'm not convinced... (2)

Spy der Mann (805235) | more than 8 years ago | (#15155234)

Virtualization means running a nested kernel

No, it doesn't. Didn't you RTF... oh, right, this is Slashdot. Nevermind. :P

Re:I'm not convinced... (1)

mcrbids (148650) | more than 8 years ago | (#15155413)

If windows apps (or group of apps) were virtualized, we could use activex webpages without having to worry about spyware. Just close the virtualization window and it's gone.

On more than one occasion, I've trolled the warez sites for a "key generator" -- a program you run to get a workable key for a particular software product. These are almost ALWAYS loaded with spyware and other easter eggs.

But with VMware, it's no big deal. Take a snapshot, download the generator and run it, write down the key, revert to the snapshot. Snap! Done!
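That workflow, sketched with VMware's vmrun utility (assuming your version ships it; the .vmx path and snapshot name are hypothetical):

  vmrun snapshot ~/vms/winxp/winxp.vmx clean           # checkpoint before the download
  vmrun start ~/vms/winxp/winxp.vmx                    # ...run the sketchy binary inside...
  vmrun revertToSnapshot ~/vms/winxp/winxp.vmx clean   # throw away everything it did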

I treat such software products as a sort of "try before you buy" - and I've bought numerous products after reviewing them in this manner. (For example, Qarbon, Dreamweaver, PC/Anywhere)

VMWare is really, really cool, though - there's nothing quite like running 3 or 4 virtual systems in a coordinated network application, all on your laptop while in the airport waiting for the plane to land, to really see what it's all about.

Also, my Windows VM has b0rk3d itself several times after an otherwise innocent update or something; with a physical install, I would have had to re-install. But with VMWare, I just revert to the snapshot, and 5 minutes later I'm back up and running.

SWEET!

Red Hat's Fedora Linux makes a *great* host OS for software development, especially when combined with VMWare. What's more, VMWare is FREE! [vmware.com]

Re:I'm not convinced... (4, Insightful)

NitsujTPU (19263) | more than 8 years ago | (#15154408)

Nah nah nah. It's going to be great. Picture this: you manage a university computer lab. The computers all have identical software, and all of the students' files are stored on a network share. When computers are not in use, you'd like to dedicate the cycles to a long-standing distributed computation for experiments carried out by one of the departments.

A student logs in and a disk image runs their OS of choice; they don't have to reboot or know much, they just click an icon saying which OS, which is instantly presented to them. A batch process manager removes the distributed experiment's load from their machine.

Or, perhaps something that's already fielded: you're a graduate student who wants to emulate 1000 compute nodes for a distributed computing experiment, so you log into Emulab and tell the 50 machines you've signed up for to boot 20 OSes apiece and emulate a 1000-node network.

Or, perhaps you're studying viruses (this has also been done), and want to build an Internet scale honeynet.

Or, perhaps you're running a large server farm. You want an easy way to load balance a multitude of services, so you can run something that looks like 100 servers on perhaps 50. By dynamically balancing across nodes, services can automatically adjust themselves, independently of mechanisms built into their software (to some degree). When you want to add new hardware to the network, you just plug in the machine, and tasks start being farmed to it. When you want to retire some, you just tell the manager to stop moving tasks onto that machine, and wait for the tasks on that machine to move off.

Briefly put, VMMs rock. You have to think outside of "geeks playing with VMWare" to really see the interesting applications though.

Could someone explain briefly what it is? (1)

maillemaker (924053) | more than 8 years ago | (#15154913)

Thanks for the post, it gives me some insight into what virtualization is. But I'm still confused about what it actually does. I read this entry over on wikipedia:

http://en.wikipedia.org/wiki/Virtualization [wikipedia.org]

Does virtualization basically run multiple OSes on one box? Make one computer appear to be 2, or 3, or n?

Steve

Yep (2, Informative)

XanC (644172) | more than 8 years ago | (#15154957)

That's basically the idea. A single machine can be running several different systems at once, and each one can have its own kernel, network settings, tuning for a particular task, whatever. You can set up the network however you want; you can even simulate subnets and routers and who knows what to try stuff out.

Another big advantage is that the virtualization layer provides a common "hardware" layer. For example, every VMWare "machine" sees standard VMWare "hardware", no matter what kind of metal it's actually running on. Want to move your "server" from your Celeron desktop to a big multiprocessor server? You don't even have to reboot it. (It'll be inaccessible while you transfer it, but there are ways around that too.)

Re:Could someone explain briefly what it is? (1)

NitsujTPU (19263) | more than 8 years ago | (#15155021)

In the very simplest case, there is a program called a virtual machine monitor (VMM) that multiplexes the underlying hardware. Operating systems that run atop it see the hardware as if they had exclusive access to it.

The cool part comes in what one chooses to do with this. The operating system now sits on something that, in its simplest form, just does this multiplexing -- but one can build more interesting things into the VMM that allow it to do things like snapshot the entire running operating system and move it across a network.

If one abstracts things in certain ways, one gets rather amazing abilities. You could build a really beefed-up VMM that looks like a full microkernel OS. This would give you very strong separation of services, making the isolated OSes very resilient to attacks on other OS services. Picture a system in which you have a database and a webserver running on one box; the webserver has a buffer overflow exploit, and a malfeasant individual sees this and exploits it, hoping to nail your database. The database runs on the same physical machine but is not susceptible to this attack, because its operating system remains unaffected -- yet you didn't need two machines.

Re:I'm not convinced... (1)

Forbman (794277) | more than 8 years ago | (#15155274)

Or, perhaps you're running a large server farm. You want an easy way to load balance a multitude of services, so you can run something that looks like 100 servers on perhaps 50. By dynamically balancing across nodes, services can automatically adjust themselves, independently of mechanisms built into their software (to some degree). When you want to add new hardware to the network, you just plug in the machine, and tasks start being farmed to it. When you want to retire some, you just tell the manager to stop moving tasks onto that machine, and wait for the tasks on that machine to move off.

You mean like the guys who wrote an article for Linux Journal about running SuSE Linux on an IBM Z-Series mainframe, partly to evaluate it with the Evolution server, and had something like 6000 (or 60K?) virtual servers up, all running the Evolution server (and serving clients) quite nicely?

If it worked well enough, a couple of beefy (beefy as in lots of LPAR hardware) Z-Series machines could host quite a few virtual webservers versus racks and racks of PC hardware (or racks of blades)... The advantage of Linux in this instance is that it scales as well...

Re:I'm not convinced... (2, Insightful)

dsginter (104154) | more than 8 years ago | (#15154446)

I'm not convinced that virtualisation is going to be that much of a Big Thing(tm).

Allow me to introduce you to the world of Big Business: upper management wants the Big Business paycheck but, post dot-bomb, wants none of the penalties associated with taking a risk. So you have the "one application per box" mentality. All of a sudden, you've got 20 boxes running at 5 percent utilization.

Can you see where virtualization would provide "virtually" the same thing with better cost efficiency?

Make no mistake, virtualization is just as much about pleasing management as it is about making sense.

Nah never catch on ... (1)

MarkTina (611072) | more than 8 years ago | (#15154539)

All those mainframes running your banks wouldn't dream of using virtualisation ;-)

Re:I'm not convinced... - DON'T BE MYOPIC (2, Insightful)

jsailor (255868) | more than 8 years ago | (#15154541)

Virtualization is HUGE. It helps solve a major problem: with few exceptions, most data centers are running out of power, not space. Servers consume 70-90% of their peak power draw when the CPU(s) are idle, and most servers in corporate America run below 15% utilization. If I can combine 4-8 servers into 1, I can save a tremendous amount of power. Here's some simple math:
A server consumes 400 W at idle and 500 W when all 4 processors are pegged at 100% utilization. If I take 4 servers that normally run at 10% utilization and combine them onto 1 server that runs at 40-50% utilization, I've saved 1100 W (4 x 400 W - 500 W). This is a huge value proposition for anyone who manages a data center.
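The same arithmetic as a quick shell one-liner, so you can plug in your own fleet's numbers (the figures below are the example's):

  n=4; idle_w=400; peak_w=500
  echo "saved: $(( n * idle_w - peak_w )) W"   # 4 x 400 W - 500 W = 1100 W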

I can rant forever, but trust me - this is no fad. There is a serious value proposition here.

Re:I'm not convinced... (1)

subgrappler (864963) | more than 8 years ago | (#15154613)

Sure, it might not ever become a household word... but on the back end it will have a big overall effect, whether anyone is aware of it or not. More people have been introduced to running an OS on an OS, though -- OS 9 on OS X? And just this week I told 3 different people about VMware Workstation/Player as a way to run their old apps.

Re:I'm not convinced... (0)

Anonymous Coward | more than 8 years ago | (#15154797)

There are just so many uses for virtualization it's not even funny. Lots of good ones have already been mentioned. Personally, I just couldn't live without it. I already have a half dozen PCs, one of which is used only as a VM server, just so I don't have to have an extra couple dozen PCs lying around.

It's handy to be able to fire up any OS/app combination you need at any time: big DB servers you don't always need (Oracle; I mainly use it for compatibility tests and porting), server stuff (like Active Directory, without needing a bunch of spare PCs), almost any Linux distro you'd like ready to go, testing software -- and software deployment in various ways -- on various OSes easily, etc. The list of possible/practical uses is almost endless. Perhaps not everyone needs this in a home setup, but that's totally irrelevant and doesn't make it any less useful.

We're also definitely having a serious look at VMware's new free offering (it used to be called GSX) for our next batch of servers -- consolidation is where it's at. At the price we're paying for the servers (including support and all), we might as well make good use of them instead of ordering a bunch more that'll sit mostly idle, just costing more (in electricity/AC/purchase/support) and perhaps taking up space.

OT question (1, Insightful)

tomstdenis (446163) | more than 8 years ago | (#15154331)

What's with "open" in the name of all these projects? Is anyone really impressed by that anymore?

Tom

Re:OT question (1)

SGrunt (742980) | more than 8 years ago | (#15154355)

At least it's a good indicator that it's OSS. Given an IT guy who's advocating the use of the stuff, it might impress the boss now and again.

Re:OT question (2, Insightful)

tomstdenis (446163) | more than 8 years ago | (#15154363)

Bosses don't care if it's open source. They care:

1. How much does it cost to license?
2. How much does it cost to set up?
3. What does it solve better than what we already have?

Tom

Re:OT question (1)

Kyro (302315) | more than 8 years ago | (#15154372)

4. Who can we sue if it breaks

Re:OT question (1)

SGrunt (742980) | more than 8 years ago | (#15154381)

...the lack of a reasonable answer to this is part of the reason there hasn't been a wider adoption of OSS. :)

Re:OT question (2, Insightful)

jmv (93421) | more than 8 years ago | (#15154459)

Just curious, who do you usually sue when Windows breaks?

Re:OT question (1)

Kyro (302315) | more than 8 years ago | (#15154577)

Good point :)

I guess the supplier (IBM/HP/Dell whatever) is usually accountable for any breakage that occurs. Failing that, you can call any of the billions of small tech shops that fix Windows installations for enourmous amounts of cash (I used to work for one ;))

Re:OT question (2, Insightful)

jmv (93421) | more than 8 years ago | (#15154657)

Asking for support != suing. You can ask your Linux distro vendor for support too. I have yet to see a successful lawsuit over a Windows fault.

Re:OT question (1)

NitsujTPU (19263) | more than 8 years ago | (#15155445)

It doesn't really matter. It's more about having someone to blame than actually extracting money out of them.

If everyone else is using Windows and you want to use Linux, you're the black sheep, so they blame you. On the other hand, if Windows has a glitch, you whine about Windows a bit, and then so does everyone else on the planet (because you'd better be running the identical configuration or, again, it's your fault).

Re:OT question (3, Informative)

subreality (157447) | more than 8 years ago | (#15154436)

What's with "open" in the name of all these projects?

In this case it's an OSS version of a closed-source product called Virtuozzo, commonly abbreviated VZ. I think it's a perfectly descriptive name.

Re:OT question (1)

tomstdenis (446163) | more than 8 years ago | (#15154454)

Well, if it's the closed project opened up, fine.

If it's a clean-room implementation, then it's not strictly based on it. Call it something else, like Vzeeforefree!

Dunno, just annoyed at people abusing the OSS blanket for publicity.

Tom

Future Of Desktop/Workstations (-1, Flamebait)

Anonymous Coward | more than 8 years ago | (#15154334)

OS Virtualization is the death warrant for the OS X software market. Apple knows this.

Ideally, as the desktop OS market becomes a no-growth commodity one, we would all run Linux as our primary OS, with Windows there for compatibility, and with Apple dumping OS X and migrating its desktop, APIs and software over to Linux and Windows.

Ignoring the fact that Xcode is a steaming pile of shit for the moment...

It would be the best of all desktop worlds:

Windows is locked down and used only for compatibility when necessary.
The good parts of OS X live on and get to be used by everyone.
Linux solves the 'desktop problem' by dumping, or at least making irrelevant, KDE and Gnome

The era of the desktop/workstation is fading. Small, handheld mobile wireless devices are where the industry is heading.

Re:Future Of Desktop/Workstations (-1, Troll)

Anonymous Coward | more than 8 years ago | (#15154435)

You make no sense at all. You want to run Linux, but with Windows XP and the OS X desktop, APIs, and software? So what exactly do you need Linux for?

Wait, you make some sense: solve the "desktop problem" by ditching Linux and using OS X or Windows XP.

Not Really (1)

zeketp (888795) | more than 8 years ago | (#15154580)

I see myself using virtualization to run Windows inside Mac OS X. Don't like Xcode? It happens to be built on top of the most commonly used compiler, GCC. It is just a front end that replaces terminal-based text editors and saves you from having to remember all the options needed to run GCC from the command line.

I'd say that if any OS dies out, it will be Windows first. If people can dual-boot/virtualize Windows on Macs, the biggest obstacle in the way of mass Mac adoption is gone. I'm confident that once people get Macs and play around in OS X (it's inevitable), many will start switching to OS X for everyday use. Developers think they can just stick with Windows? Not likely, when swapping to Windows gradually comes to be considered a burden and Mac compatibility suddenly becomes a feature!

I think it is far easier to write an Objective-C/Cocoa framework app (including all the necessary under-the-hood work in addition to a GUI) than it is to write a Windows application with a GUI. Just want to tie good old C++ into a GUI? Xcode can already do that, or you could just compile it to run on the command line. Want to use a command-line editor and GCC? Already there. Want to compile for Linux and Windows? Look at GNUstep and related open-source implementations of Cocoa. C/C++ is the foundation of Objective-C, so you can jump right in after a basic tutorial on objects and message passing.

I don't see Linux going anywhere, except maybe gaining ground as Windows boxes become obsolete because of new Mac switchers or Vista's (and its successors') large jump in system requirements. Given the trend among my friends (all of us college students, tomorrow's leaders, etc.), Mac market share is looking at a sharp increase in the near future. Heck, I'm in aerospace engineering, and I'm writing this from a PowerBook G4, when many small home-brewed applications I encounter require Windows. But you know what? No small home-brewed app is more than a match for Virtual PC. Any app important enough to pay for has a Mac version or equivalent.

Just who the hell (-1, Troll)

Anonymous Coward | more than 8 years ago | (#15154583)

let Dvorak post on Slashdot? Typical Dvorak nonsense :p

Either that, or some seriously potent recreational drugs. Hard to tell, but the end result is more or less the same.

So it's a VMWare ESX Server clone ? (1)

MarkTina (611072) | more than 8 years ago | (#15154347)

What's the distinction between what he is talking about and something like VMware ESX?

Price? (2, Insightful)

XanC (644172) | more than 8 years ago | (#15154391)

For one, VMware ESX is quite expensive, I understand.

Re:So it's a VMWare ESX Server clone ? (3, Informative)

silas_moeckel (234313) | more than 8 years ago | (#15154413)

ESX is a lot thicker than OpenVZ, meaning it's emulating a lot more, so there's more overhead. ESX is also more flexible, as it can run Windows next to Linux next to Solaris next to insert-x86-OS-here, assuming they can deal with its limited emulated SCSI hardware. OpenVZ, on the other hand, uses one kernel and one filesystem; it's one step up from a chrooted jail, with a lot of process-type limiters similar to ESX. The single filesystem really keeps drive usage down with a copy-on-write scheme for the virtuals, and you can update all the virtuals at once by altering the base filesystem. OpenVZ was designed for their Virtuozzo product line, which is targeted at hosting companies -- the big adopters of virtualization, since it's a lot safer to sell 1/10th of a $3k server than ten $300 "servers": the $3k box has RAID and redundant PSUs and only takes up one RU, versus ten minitowers taking up nearly a rack and consuming a lot more power with no redundancy.

Re:So it's a VMWare ESX Server clone ? (0)

Anonymous Coward | more than 8 years ago | (#15154875)

whoa. try taking a basic grammar class. i doubt anyone can understand one word of what you said.

Re:So it's a VMWare ESX Server clone ? (1)

hawg2k (628081) | more than 8 years ago | (#15154678)

VMWare's ESX Server, and Xen as well, are called hypervisors. As I understand it, that's just a fancy name for a specialized, appliance-like OS. Basically, you can't do much with a hypervisor except get virtual machines up and running. From the article it sounds like OpenVZ requires the full-blown Linux kernel as well as most of your basic GNU/Linux code. So, if I understand correctly, you could use the "host" as an actual computer as well as a virtual machine manager. Sounds like you get some copy-on-write (COW) features etc. too, allowing for some file sharing between the host and the guest(s), if I understood the article correctly.

Re:So it's a VMWare ESX Server clone ? (1)

swmccracken (106576) | more than 8 years ago | (#15155398)

I think the main difference is the split between the hypervisor and userspace. (A hypervisor is a scheduler that manages multiple operating systems, each of which has their own scheduler. The original operating systems were called "supervisor programs", in case you're curious, so a supervisor-supervisor-program is a hypervisor. :-)

Under VMWare, each VM runs with its own complete kernel copy - each VM is a complete emulation of a computer, to the best of VMWare's ability.

Under OpenVZ, as far as I can tell, the same kernel is shared among the different VMs, with extra "namespace" features added that allow the one kernel to segregate the virtual machines. Because it's still one kernel, there's more efficiency: the one kernel manages the virtual-to-physical memory mapping and all the other hardware abstraction issues (instead of the double layer of guest OS to virtual hardware to host OS to physical hardware). If I'm right, this means the kernel has to be well written to avoid cross-VM contamination, a kernel panic will bring the whole system down (not just one VM), and you can't have different kernel versions or images in each VM.

What I want to know is "how is this in comparison to zones on Solaris [sun.com] ?" It looks a lot like that.

Virt is big (1, Interesting)

Anonymous Coward | more than 8 years ago | (#15154357)

I disagree. I think this is going to be big, and it's already starting. In the corporate world, we have been moving many legacy systems onto VMs. Win2k3 also runs on VMs very nicely -- a great way to utilize that server you use for print/virus/IIS, each function having a separate OS on the same hardware. I think the VM buzz is really hitting much more mass right now. We are looking at mass roll-outs for desktops to get away from dual-booting Win/Linux and would prefer to see this virtualised, as would clients.

Sweet (1)

Sqwubbsy (723014) | more than 8 years ago | (#15154362)

Just started mucking with Virtual Server 2005 R2 [microsoft.com] and have been pretty psyched about the results (especially not having to requisition development machines, which is nigh on impossible in my organization).

But I don't see this as emulating an x86 machine; rather, it seems to just be a Linux virtualization environment. Yes, I did RTFA, and I've looked at the website, but I'm wondering if another Slashdotter has actually used the tool and can answer this.

Virtu. Linux/Windows Dual Boot (0)

Anonymous Coward | more than 8 years ago | (#15154378)

I dual-boot Windows XP and Linux. Is there a software virtualization solution that will allow me to convert/use my current Windows partition?

Re:Virtu. Linux/Windows Dual Boot (1)

Fuyu (107589) | more than 8 years ago | (#15154609)

VMware's P2V Assistant http://www.vmware.com/products/p2v/ [vmware.com] will allow you to convert your current Windows partition into a VM.

Re:Virtu. Linux/Windows Dual Boot (1)

BrainInAJar (584756) | more than 8 years ago | (#15154731)

Xen 3 and an AMD Pacifica/Intel VT chip?

Wouldn't be the first time [taborcommunications.com]

A bit of bias... (5, Informative)

subreality (157447) | more than 8 years ago | (#15154418)

"why OpenVZ outshines the competition, comparing it to VServer, Xen and User Mode Linux."

Of course, Andrey works for the software company that wrote this thing, and their closed full-featured flavor, Virtuozzo. The VZ method is a good one, and has excellent performance, but it has its drawbacks, too. Personally, I don't like that my VPSes need to use my VPS provider's kernel, which lacks features I desperately want (like stateful iptables matching), and which forces me to reboot whenever they upgrade their kernel (my VPS can't be migrated to a host running a different kernel), and I can't upgrade until my provider does.

VServer, Xen, and UML all make different tradeoffs. VZ goes for performance. Saying one outshines the others is just trolling. That's mostly on the part of the /. submitter, but Andrey slants it a little too.

I don't want to crap on the OpenVZ project. They're working on very cool stuff, and I applaud SWSoft for opening the thing up. I just want people to keep the comparisons in context.

Linode (1)

XanC (644172) | more than 8 years ago | (#15154460)

You need to move to Linode.com, seriously. They don't have any of the problems you mention. It's all UML for now, although they have some Xen boxes in beta that you can get on.

Re:Linode (1)

subreality (157447) | more than 8 years ago | (#15155493)

And without knowing anything about what I'm doing, you make a recommendation for a service provider? My requirements are a bit more complex than that. :)

Re:A bit of bias... (0)

Anonymous Coward | more than 8 years ago | (#15154906)

IMHO OpenVZ doesn't go for performance: their code adds bloat where it isn't required, and the network virtualization adds more overhead than necessary. No wonder VServer outperforms it on many benchmarks.

Re:A bit of bias... (0)

Anonymous Coward | more than 8 years ago | (#15155261)

numbers please! :)

It's hot...it's coming...and you are left wonderin (1)

threedognit3 (854836) | more than 8 years ago | (#15154421)

Virt...is the real deal. A new way of doing things. Ground floor stuff but if you don't stay up on it...you lose.

This is the cool stuff, the amazing stuff.

Re:It's hot...it's coming...and you are left wonde (2, Informative)

MarkTina (611072) | more than 8 years ago | (#15154525)

You know that virtualisation has been around longer than I've been alive... it came from the mainframe world and was "discovered" by the x86 crowd :-)

Re:It's hot...it's coming...and you are left wonde (1)

ovz_kir (946783) | more than 8 years ago | (#15155507)

You are damn right, pal!

The obvious difference, though, is that the x86 crowd is now doing it in software, not in hardware -- and so it's much cheaper.

OS virtualization (4, Insightful)

Cthefuture (665326) | more than 8 years ago | (#15154424)

Unlike Xen or VMware, OpenVZ doesn't run a separate kernel for each virtual machine. That seems like a security risk to me: a kernel bug will affect all the running virtual machines. In other words, you only need to break one kernel and you have them all.

Plus you can't run different operating systems in each virtual machine.

It does have some real benefits; it all depends on what you are doing. I like the security of Xen and VMware better, though.

Re:OS virtualization (1)

ovz_kir (946783) | more than 8 years ago | (#15155530)

Speaking of security: every major hosting service provider is using Virtuozzo, selling cheap VPSs (virtual environments) with root access for something like $15/month, so every evil hacker out there can buy one and try to exploit the box. They can't -- otherwise all those HSPs would be in big trouble.

Why can't they? Because OpenVZ/Virtuozzo security is at a good level. We care about security a lot.

Speaking of VMware and Xen: there is still a single point of failure in those solutions -- VMware itself and the host OS in the case of VMware, and the hypervisor and Dom0 in the case of Xen. So, in theory it is neither better nor worse.

In practice, though, security is good when people care about it. For obvious reasons (a lot of customers in the HSP world), Virtuozzo (and OpenVZ) does care about security. But don't take my word for it -- go try it out: download and install OpenVZ, expose a few virtual environments to the outside world, give their passwords to everybody and see if they can break your system. Why not?

Obvious question: containers (0)

Anonymous Coward | more than 8 years ago | (#15154464)

Solaris 10 introduces the idea of "containers" [sun.com], which seem to me to be a very close match to what this guy's talking about. Does anybody know how they compare in terms of isolation, performance, and so on?

Re:Obvious question: containers (2, Interesting)

ovz_kir (946783) | more than 8 years ago | (#15155490)

Very short answer: Solaris Containers is the same kind of technology as OpenVZ or VServer. Their isolation is OK; their resource management is worse than OpenVZ's. There are some system-wide resources that you cannot limit per container, which can create problems if an application inside a container goes crazy (or a container is owned by a c00l hax0r).

Remember, Solaris Containers are a recent feature, while Virtuozzo has been available as a product since 2001. So Solaris is doing the right things, and great things, but it still has a way to go.

Perhaps they haven't heard, but Xen 3 is stable (4, Informative)

cduffy (652) | more than 8 years ago | (#15154473)

The interviewee keeps talking about Xen 3 as though it's not out yet, but that's untrue.

Indeed, Xen 3 has been stable long enough that it's presently at 3.0.2. It's not a prerelease anymore, and support for x86_64 and hardware-assisted virtualization has been out and about for a while. I have semi-production systems (used by in-house staff only, but there are folks who can't work if they're down) running on Xen 3 x86_64 DomUs, and the host they're on has been up (and running unattended) for 117 days now.

Sun has an OpenSolaris port to Xen (though I think it may still be in-house-only), and I have some good friends working on a microkernel OS targeted at embedded operation with a Xen DomU port pending (such that they -- and people working on it -- will be able to run it in parallel with the OS they use as their development platform). Being able to run more than one kernel -- indeed, more than one operating system -- is a big plus on the Xen side of things.

Imagine ... (2, Funny)

3dr (169908) | more than 8 years ago | (#15154559)

... a beowulf cluster of virtualization servers running beowulf clusters of VPSes!

Virtualization success (2, Insightful)

tallsails (549200) | more than 8 years ago | (#15154601)

It's amazing how low server utilization is. Developers love lots of servers, but don't use them nearly as much as they say... see the article "Virtualization is the COOLEST thing" at http://blog.tallsails.com/ [tallsails.com]

Xen misconceptions (3, Informative)

jforest1 (966315) | more than 8 years ago | (#15154612)

Just to clarify: "Using Xen, you need to specify in advance the amount of memory for each virtual machine and create disk device and filesystem for it, and your abilities to change settings later on the fly are very limited."

Xen supports a balloon driver that allows one to add to, or take away from, the memory allocated to guest operating systems (DomUs). It is highly advisable to use LVM2 to allocate disk space for DomUs, since it allows for easy changes to the partition. This makes filesystem management easier.

"But most importantly, OpenVZ has the ability to access files and start from the host system programs inside VPS. It means that a damaged VPS (having lost network access or unbootable) can be easily repaired from the host system, and that a lot of operations related to management, configuring or software upgrade inside VPSs can be easily scripted and executed from the host system. In short, managing Xen virtual machines is like managing separate servers, but managing a group of VPSs on one computer is more like managing a single multi-user server."

Using LVM2 as the disk manager as mentioned above, the host operating system (Dom0) can access a DomU's filesystem for troubleshooting and run programs (though they would not run in the scope of the DomU; I'm not sure he's actually implying that is the case with OpenVZ).

--josh
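A minimal sketch of those two points with Xen's standard tools (the domain name, volume group and sizes are hypothetical):

  xm mem-set domu1 512                  # balloon domu1's memory to 512 MB on the fly

  lvcreate -L 8G -n domu1-disk vg0      # back the DomU's disk with an LVM2 logical volume
  lvextend -L +4G /dev/vg0/domu1-disk   # later: grow it without repartitioning anything
  # then grow the filesystem itself, e.g. with resize2fs for ext2/ext3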

Re:Xen misconceptions (2, Informative)

jlittle (122165) | more than 8 years ago | (#15154707)

Regarding running applications within the scope of a VE (the DomU equivalent): yes, he is. I extensively use both Virtuozzo and Xen; each has its strengths. VZ allows efficient use of memory (shared memory across all VMs) as well as disk space, since binaries _can_ be shared with a copy-on-write filesystem. You can do a lot of this in Xen, but you can't mount a Xen DomU filesystem in Dom0 while the DomU is using it. In OpenVZ, the filesystem is mounted only on the hardware node and exposed through a copy-on-write FS layer to the child VZs. Regardless of the state of the VM, you can enter into its state with a shell, similar to a chroot, and you can fully execute commands from the hardware node's context into the VZ's context. The line separating the two is a process in OpenVZ; in Xen, it's a full OS instance with private memory spaces. It's a double-edged sword, but it has saved my ass in a few cases with OpenVZ.
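For the curious, entering and executing into a VE looks roughly like this with the OpenVZ tools (VE ID 101 is hypothetical):

  vzctl enter 101                            # get a shell inside VE 101, chroot-style
  vzctl exec 101 /etc/init.d/sshd restart    # run a single command in the VE's context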

Re:Xen misconceptions (1)

jamesh (87723) | more than 8 years ago | (#15154949)

but you can't mount a Xen domU filesystem in Dom0 when a DomU is using it

Can and do :). Use OCFS2 -- piece of cake to set up, and because Xen 3.0.2 is based on 2.6.16, it's already in the kernel tree.

I haven't used it as the root filesystem yet (just as a shared filesystem between domains), but when I do, I will (in theory) be able to have one filesystem with 'per-node symlinks' (OCFS2 calls them something else, but that's what they are), so each node/domain can have a separate /etc, /var/run, /var/spool, and so on.

Re:Xen misconceptions (1, Informative)

Anonymous Coward | more than 8 years ago | (#15154975)

Using stuff like VServer (or OpenVZ) is much simpler than Xen. Sure, with Xen you can mount the LVM volume to access files, but you can't do that while the virtual machine is running, unless you want to corrupt the filesystem in the volume. With VServer you can, since it's the same kernel.

Virtual machines like Xen are useful, but vservers are much more useful. I mean, you really should look at vservers for any server you use, since the performance is the same as a normal server. I have around 40 vservers on my notebook for various projects.

Stop thinking about virtualisation only for hosting solutions or major mega-servers. Virtualisation solves real-world, everyday problems (see the sketch after this list):

- Project separation, so you can update one project without breaking the other.
- Moving projects around: you develop on your workstation and move to the production server as-is later.
- Cloning a vserver and performing an update on the copy. If successful, turn off the original. Turn it back on when you realize you have a problem :-)
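That clone-then-upgrade workflow, sketched with the util-vserver tools (names are hypothetical, and the paths assume the common /vservers and /etc/vservers layout; a "clone" here is just a copy of the guest tree plus its config):

  vserver myproj stop
  cp -a /vservers/myproj /vservers/myproj-new    # clone the guest filesystem
  # ...duplicate and adjust the config under /etc/vservers/ to match...
  vserver myproj-new start                       # upgrade and test the copy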

virtualisation (2, Informative)

Tinkster (831703) | more than 8 years ago | (#15154632)

... and then there are the outstanding IBM pSeries machines, with their hypervisor in hardware, which benefit from the aforementioned age-old mainframe technology :}

Yeah, but... (1)

countach (534280) | more than 8 years ago | (#15154666)

I don't doubt that OS-level virtualization is more efficient, but have you ever tried upgrading the OS for hundreds of applications at the same time? It's darned near impossible.

The great benefit of hardware level virtualization is that you can upgrade one app and one environment at a time. If app-"A" needs Linux 2.4 because that is what Oracle supports - fine, no problem. But if app-"B" needs to upgrade to Linux 2.6 because its reporting suite must have that version, that is ok too.

It seems to me that OS-level virtualization is a cool sounding idea that is pretty hopeless in the real world.

Re:Yeah, but... (1)

BlueLightning (442320) | more than 8 years ago | (#15155039)

It seems to me that OS-level virtualization is a cool sounding idea that is pretty hopeless in the real world.

It depends on the application. If you're talking about a web host running lots of web servers it might make sense to use this approach, since the guest systems are likely to be very similar if not the same.

Re:Yeah, but... (1)

countach (534280) | more than 8 years ago | (#15155184)

I guess if you think hard enough you'll come up with a good application for it... but in the case of web server farms, what's the point of having multiple virtual environments unless you are going to open them up to your clients to install their own PHP or PostgreSQL or MySQL or whatever darned bit of web technology they want? If all you want is a bunch of web sites on virtual hosts, you can just use Apache's virtual hosts feature. But if you give clients a free-for-all, you basically have a massive headache when you upgrade the OS later on.

Just what we need -- more kernel bloat (0, Troll)

Anonymous Coward | more than 8 years ago | (#15154720)

I'll tell you one thing -- I would like to see a lot less stuff "merged into the mainline Linux kernel." It's seldom done in a way that lets me cleanly leave out the features I really don't need or want, and I always end up paying the overhead.

Re:Just what we need -- more kernel bloat (1)

ovz_kir (946783) | more than 8 years ago | (#15155441)

We surely do understand that.

All of the OpenVZ aspects and features (like user beancounters) can be turned on or off in the kernel .config. That is, an OpenVZ kernel can be compiled with or without any of the OpenVZ features.
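For illustration, those toggles are ordinary kernel config options. The names below are recalled from the OpenVZ patch and should be treated as assumptions -- check the patch's own Kconfig:

  CONFIG_VE=y              # virtual environments core
  CONFIG_VE_CALLS=m        # VE start/stop machinery
  CONFIG_USER_RESOURCE=y   # user beancounters (per-VE resource accounting)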

Very one-sided (1)

A Nun Must Cow Herd (963630) | more than 8 years ago | (#15154724)

As you would expect from such an interview, it ignores the advantages of products like VMware Server that make them attractive over Virtuozzo (and OpenVZ). Hardware virtualization allows the guests to be independent of both the host hardware and the host OS. To us, that alone is worth the trade-off in performance, and worth giving up the resource management that Virtuozzo has. With the enhanced support for virtualization in hardware (e.g. the new Intel and AMD CPUs), I expect the performance difference between hardware and OS virtualization to shrink, while the other advantages of hardware virtualization remain. There must also be advantages in security and upgrade management that come with being less dependent on the OS...?

Re:Very one-sided (1)

ovz_kir (946783) | more than 8 years ago | (#15155435)

How can VMware be independent of the host OS if it runs on top of it? I mean, there is a single point of failure here: if the host OS dies, every VMware instance dies with it.

And the question is not just performance -- indeed, with hardware band-aids like AMD Pacifica and Intel VT, performance will get better. The question is density, scalability, and manageability (it is funny you even mentioned it -- see below).

Density: you can run hundreds of virtual environments with OpenVZ; you can run tens of guests with VMware. Makes sense?

Scalability: can VMware effectively utilize "big hardware" like a 64-way SMP box with 64 GB of RAM? OpenVZ can -- absolutely no problem, with no additional SMP hacks needed. More to the point, a single virtual environment can use all those resources if needed.

Manageability: from a sysadmin's point of view, a VMware guest is just like a physical server. If you want to apply software updates, you have to log in to each one and run the update procedure, one by one, the very same way you would with separate physical boxes. In contrast, with OpenVZ you can actually see and access all the virtual environments from the host OS, making mass management possible. You can apply updates en masse. Makes sense?
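A sketch of that mass management with the OpenVZ tools (command details are assumptions -- check the vzlist/vzctl man pages, and substitute whatever update command your distro uses):

  for ve in $(vzlist -H -o veid); do    # every running virtual environment
      vzctl exec "$ve" yum -y update    # run the update inside each one
  done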

Indeed, VMware (and other solutions of the kind, like Parallels [parallels.com] or QEMU [bellard.free.fr]) makes sense if you want to run different operating systems, different kernels, etc. It makes much sense in development labs, at home, or when you have just one server. But if you have a rack of servers, OpenVZ/Virtuozzo and other solutions of that kind make much more sense, for the reasons cited above: scalability, density, manageability.

"Virtualization" - in a sense (2, Informative)

ratboy666 (104074) | more than 8 years ago | (#15154734)

These are not virtual machines. The idea seems to be the same one behind Solaris 10 Containers, and I wish that had been discussed (pros and cons) in the interview.

Easier management for vertical stacking of applications on a machine.

And, yes, it is VERY useful.

Not for typical home use, though. At home I use VMware for virtualization, QEMU to run foreign code, and Bochs to test x86 assembly sequences, all of which I do frequently. Stacking? Not so much, because my main server is a dual PPro with 128MB -- httpd, imapd, file services, time services, etc. Not a heavy load (104 processes, easy enough to manage manually).

Ratboy.

FreeBSD Jails (2, Interesting)

Ragica (552891) | more than 8 years ago | (#15154825)

Sounds, once again, a lot like FreeBSD's jail [wikipedia.org] support (which has existed for many years now, and is very stable).

In what ways is OpenVZ different? I also wonder what their "commercial offering" adds... but I'm too lazy to look.

I run FreeBSD jails on my box for testing purposes. It's extremely easy to set up and administer, especially with the many helper scripts available these days.

I am loving the simplicity of ezjail [erdgeist.org]. The coolest thing about it (besides the utter simplicity) is that it creates a "base jail" containing an entire FreeBSD install. From there it uses tricks with nullfs to mount parts of that base into jail 'instances'... this means each new jail takes only 2 megs of additional space and about 1 second to create. It also adds security, in that the base system remains absolutely read-only, while still permitting customisation and additional software to be installed in the jail.

I need a new virtual server to test my software:

ezjail-admin create new-jail-name 192.168.5.123

Then run the ezjail startup script and SSH in to my new virtual server. (Note: I set up the default server template to enable SSH and a few default logins... very easy to do. One doesn't need to use SSH; there are a few different ways to get into the jail environment.)

Re:FreeBSD Jails (0)

Anonymous Coward | more than 8 years ago | (#15154870)

That's not really different from how Linux-VServer does it: in this case hard links are used to 'unify' the 'jails', effectively reducing a new guest to only a few megabytes, and the tools support similar install methods, either copying existing guests or installing them from the network.

Maybe one important difference the Linux variants add compared to BSD jails is the resource management and the better isolation (e.g. for IPC).

History again repeats itself.. (5, Informative)

Anonymous Coward | more than 8 years ago | (#15154887)

In the mid 60's IBM created CP-67 which virtualized the IBM S/360. In the following years the system became VM/370, and has evolved to z/VM today http://www.vm.ibm.com/ [ibm.com] . VM (the general term for z/VM) is made up of two primary components, VM/CP (control program) and VM/CMS (a mini single user operating system). VM/CMS provided the ground work for being able to administer the system, and provided a nice programming environment in that each VM/CMS user had their own "system" that one could edit, compile and run their programs in an interactive environment (think of a MS-DOS type of model -- then remember that this was in the late 60's).

CMS itself provided some limited simulation of IBM's two other mainframe operating systems OS/360 and DOS. Enough that one could write simple OS or DOS programs and do at least some unit testing. The simulation by CMS was by providing a limited set of the OS and DOS API.

Unlike MVS or DOS, (or even the CP/M, Windows, or *nix families) VM/CP itself does not provide many services directly. VM/CP does not provide any filesystems, any application APIs, etc. All VM/CP really did was to provide a barebone virtual machine and only provide those services one would find on the bare hardware. It was the responsibilty of the operating system running within the virtual machine to provide the application API, filesystems, application memory management, etc. Communication between vm's were originally only via the raw hardware model (channel-to-channel adapters, shared disk volumes, and a method of "punching" virtual cards and sending the virtual cards to another vm's virtual card reader.) As time progressed, VM/CP did provide some API's that allowed very simple messaging between two vm's (first VMCF - Virtual Machine Communication Facility, and then IUCV - Inter User Communication Vehicle).

Early on it was "discovered" that the virtual machine model made a lot of sense as a method to implement VM services. For example if one were to look at a modern VM system, you would see that the entire native VM TCP/IP stack is managed within a small collection of vm's. (Under VM/CP, a vm is called a "userid"). The native VM TCP/IP stack consists of a TCPIP userid that manages the network interface devices, and the TELNET server. The FTP userid implements the FTP protocol, etc. Each userid is totally seperate from the rest of the system and from each other (the tcp/ip socket facility "rides" on top of IUCV in a transparent fashion so that a tcp/ip server is coded the same as on *nix).

Because of the facilities provided by CMS, it is fairly easy to write little servers. For example, the original LISTSERV server http://www.lsoft.com/products/listserv-history.asp [lsoft.com] was written as a CMS application, as were several native VM webservers.

If one wants to see what is and has been possible in a virtual machine environment, one should at least look at the history of IBM's VM.

For an excellent history of VM, see http://www.princeton.edu/~melinda/ [princeton.edu] ,
and the VMSHARE archive, an early BBS used by VM system admins: http://vm.marist.edu/~vmshare/ [marist.edu]

Virtualization is the future (2, Insightful)

microbee (682094) | more than 8 years ago | (#15154951)

And it's coming. But I think VMware and Xen got it right. OpenVZ tries to do it inside the OS, which makes the OS much more complicated. It's not going to scale.

Re:Virtualization is the future (1)

Forbman (794277) | more than 8 years ago | (#15155292)

Well, CoLinux works pretty well under Windows -- better than Cygwin, for sure. The only hitch is getting networking set up; the CoLinux wiki is a bad mashup of WinXP information. At one point I got it working fine on a work computer under Windows 2000, but I tried the same at home (again, Win2K) and the coLinux side does not connect to the net... :(

Re:Virtualization is the future (0)

Anonymous Coward | more than 8 years ago | (#15155323)

Just note that the OpenVZ virtualization patch is smaller than a typical incremental update of the mainstream Linux kernel between minor versions, so it can hardly be argued that it is overly complicated.
Scale? I suppose you are wrong there as well: it scales as well as the original Linux kernel. Can you run 100 VMs in 1 GB of RAM with VM technologies? I doubt it :)

Re:Virtualization is the future (1)

ovz_kir (946783) | more than 8 years ago | (#15155374)

Not sure what you mean by the term "scale". I can imagine the same phrase being said about a multiuser (or multitasking) operating system: "the concept that a system has multiple users (processes) makes the OS too much more complicated". Well, you know that all this multi* stuff is a reality, and the next step in OS evolution is multiple virtual environments. Think about it for a minute.

Indeed, this is what guys like IBM did on big million-dollar mainframes. And this is what it is now possible to do on your laptop. And it makes sense.
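
To make that concrete, here is roughly what creating and entering a virtual environment looks like with the OpenVZ tools (a sketch -- the VE ID, template name, and IP are placeholders, and exact options vary between releases):

    # create VE 101 from a pre-built OS template
    vzctl create 101 --ostemplate fedora-core-4
    # assign it an IP address and boot it
    vzctl set 101 --ipadd 192.168.0.101 --save
    vzctl start 101
    # open a root shell inside the environment
    vzctl enter 101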

Solaris already has this-- it's called Zones (0)

Anonymous Coward | more than 8 years ago | (#15154952)

Check it out -- download a copy of Solaris Express and give Zones a whirl. Another example of Linux playing catchup...
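
For anyone who has not tried it, standing up a zone takes only a handful of commands (a sketch -- the zone name and path are made up, and networking details depend on your interfaces):

    # configure, install, and boot a minimal zone
    zonecfg -z testzone 'create; set zonepath=/zones/testzone; commit'
    zoneadm -z testzone install
    zoneadm -z testzone boot
    # log in to the running zone
    zlogin testzone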

Re:Solaris already has this-- it's called Zones (0)

Anonymous Coward | more than 8 years ago | (#15155059)

Sorry to say it, pal, but Linux is not catching up here: projects like Linux-VServer have existed for more than five years now, and Solaris Zones are a very recent development...

Re:Solaris already has this-- it's called Zones (1)

ovz_kir (946783) | more than 8 years ago | (#15155354)

Virtuozzo has been in production since 2001, according to http://www.swsoft.com/en/company [swsoft.com] . It is way ahead of Solaris Zones, which, by the way, still lack proper resource management similar to that found in OpenVZ/Virtuozzo. Why resource management is of paramount importance is explained in Andrey's interview.
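
To make "resource management" concrete: in OpenVZ every VE is constrained by a set of per-environment counters (the "user beancounters"), each with a barrier and a hard limit that can be tuned at runtime. A sketch, with a placeholder VE ID and made-up values:

    # cap VE 101 at 200 processes (barrier) and 220 (hard limit)
    vzctl set 101 --numproc 200:220 --save
    # bound the kernel memory used on behalf of the VE, in bytes
    vzctl set 101 --kmemsize 2211840:2359296 --save
    # current usage, limits, and failure counts show up in /proc
    cat /proc/user_beancounters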

Following a well worn, but very productive, trail (2, Interesting)

karl.auerbach (157250) | more than 8 years ago | (#15154985)

It sounds like the *nix VM world is moving along the track established by Multics and IBM's CP/67 (later VM/370) projects.

It seems to me that the differences in the *nix approaches are mainly whether the abstract machine seen by user-written code resembles a hardware machine or some nicer abstract machine.

In all VM approaches, the idea that one can freeze an entire system and look at it, or isolate it, or migrate it, is a very valuable one. It has served IBM well on their mainframes.

As for adding resources on the fly -- way, way back (mid-1980s) Robin O'Neil and I did a System V based kernel for the Crays out at Livermore. We had to run on top of the real OS, so we gave each user his/her own copy of Unix and created a file system that could grow or shrink, adding or removing inodes on the fly. Some of those inodes could reference files held by the underlying OS, making for strange effects, like "df" showing less space on the file system than a "du" summation of the file sizes in that file system. We published a paper on this at one of the various Unix gatherings of the time.

So if we could expand file systems on the fly 20 years ago I don't see why it should be so hard to do today.
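
Indeed, it isn't that hard today, at least for growing: on a current Linux box with an LVM-backed volume, expanding a mounted filesystem is a two-liner (a sketch with made-up volume names, assuming a kernel and e2fsprogs recent enough for online ext3 resizing):

    # grow the logical volume by 1 GB, then the filesystem, while mounted
    lvextend -L +1G /dev/vg0/home
    resize2fs /dev/vg0/home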

Now if we'd just get serious about capability architectures... (Much of the secure OS work of the '70's was done with capability architectures with hardware support, such as the old Plessey machines.)

Just Imagine (1)

vga_init (589198) | more than 8 years ago | (#15155011)

Perhaps I misunderstand virtualization, but this is what came to my mind after reading about it:

Imagine that in the future nearly every application will be run inside its own private virtual system. This will be done to improve security, scalability, etc. For very complex applications, this will improve the stability of the system as a whole!

Re:Just Imagine (0)

Anonymous Coward | more than 8 years ago | (#15155022)

Yep -- take a look at the "history repeats itself" comment above. The IBM VM system has been doing this since the late 60's.

Hate to say it, but it is not true virtualization (2, Insightful)

solarappleman (950777) | more than 8 years ago | (#15155205)

Running a single instance of the kernel, I am still running a single OS. They can mimic all the benefits of virtualization at this level, but the basic security improvement I obtain is nothing more than a fancy variation of process privilege separation, achieved at the cost of immense additional complexity and wasted resources.

Basically, I would never jump into separating everything just to make things safe, unless I were looking for a fancy way to mess up.

But for sure, this tool can be very useful in some cases.

Re:Hate to say it, but it is not true virtualizati (1)

ovz_kir (946783) | more than 8 years ago | (#15155286)

I'm not quite following you. What do you mean by "true virtualization"? Emulation? First of all, "virtualization" is a broad concept: it means making something that is not real look like it is real. Virtuozzo and OpenVZ do just that. From the point of view of a virtual environment, it looks pretty much like a real server (with the only exception being that one cannot use another kernel and/or load kernel modules).

Speaking of security, Virtuozzo is used by almost every major hosting service provider, and they sell cheap VPSs with it. If the level of security isolation provided by VZ were not strong enough, all those providers would be screwed.

OpenVZ underwent a thorough security review by the leading security expert Solar Designer last year; some bugs (including a few bugs in the mainstream Linux kernel 2.6) were found and fixed (and submitted to mainstream). Of course that does not mean it is free of bugs -- so I urge you to give it a try and find out for yourself.

In theory, the concept of OS-level virtualization is no weaker than other approaches when it comes to security. In practice, one has to take a lot of care to make sure the software is secure. We at OpenVZ care a great deal about security, because it is a vital feature of OpenVZ (and Virtuozzo, for that matter).

Just today I was looking at virtualization... (0)

Anonymous Coward | more than 8 years ago | (#15155294)

Coincidentally, I was just looking at virtualization options for Linux, but for embedded devices. I came across Iguana (http://www.ertos.nicta.com.au/software/kenge/iguana-project/latest/ [nicta.com.au] ), which I consider very interesting.

Virtualization is no silver bullet (1)

ufoot (600287) | more than 8 years ago | (#15155296)

Well, the question is: why virtualization? While it can be very useful from my developer's point of view -- it gets rid of the headache of installing a bazillion OSes on a single computer to test your program with Win98, WinXP, Red Hat, Mandrake, Debian, FreeBSD, and possibly OS X -- I see little gain from my software's end user's point of view.

My primary OS is GNU/Linux; I have pretty much all the applications I want on it, and never really feel the need to use a specific, dedicated Windows application. Now, *some* applications really do need Windows and/or OS X, most of them tied to hardware. I mean, WiFi does not work on my linux-ppc laptop. Well, what would I gain with virtualization? Running OS X on top of a Linux kernel won't help, for OS X won't access the hardware directly -- after all, that's what virtualization is about, isn't it? The other solution is to run OS X as the primary OS and a Linux kernel on top of it. But then, unless the virtualization is absolutely perfect, runs at 100% speed, and costs 0 bytes of RAM, I'll lose some performance in 99% of my applications. Not acceptable either.

My conclusion is that while virtualization is very useful in a corporate context -- e.g. when you want to separate environments, ease backups, increase security, or have 10 different OSes installed on one server for testing purposes -- it fails to fully replace dual boot. The main reason is that the role of a kernel is not only to launch programs, but also to give programs some form of access to the hardware. And virtualization is precisely about denying direct access to the hardware.

Double/triple booting is far from disappearing...
