Virtual Containerization

kdawson posted more than 6 years ago | from the v12n-is-for-c14n dept.

AlexGr alerts us to a piece by Jeff Gould up on Interop News. Quoting: "It's becoming increasingly clear that the most important use of virtualization is not to consolidate hardware boxes but to protect applications from the vagaries of the operating environments they run on. It's all about 'containerization,' to employ a really ugly but useful word. Until fairly recently this was anything but the consensus view. On the contrary, the idea that virtualization is mostly about consolidation has been conventional wisdom ever since IDC started touting VMware's roaring success as one of the reasons behind last year's slowdown in server hardware sales."

185 comments

The great thing (4, Funny)

saibot834 (1061528) | more than 6 years ago | (#19967797)

The great thing about virtual machines is that you basically can do whatever you want with them. Things you'd normally never do to your computer.

The only feature it lacks is the ability to throw the virtual computer out of the window.

Re:The great thing (1)

Nikron (888774) | more than 6 years ago | (#19967827)

You are obviously deprived... Don't you have a couple of old PCs to screw around with if the mood strikes you?

Re:The great thing (2, Insightful)

MalHavoc (590724) | more than 6 years ago | (#19967851)

The only feature it lacks is the ability to throw the virtual computer out of the window.


You sort of get this feature with Parallels - the ability to drag a virtual server into a trash bin is almost as satisfying, and far less expensive.

Re:The great thing (0)

Anonymous Coward | more than 6 years ago | (#19967877)

And if you are willing to pay for a CD-R, you can write your virtual machine to it, kick it, smash it (always liked to do that with floppies), or burn it [youtube.com] alive.

Re:The great thing (4, Funny)

camperdave (969942) | more than 6 years ago | (#19968365)

So, all we have to do is replace the trash can icon with an icon of a window, and we're set. Plus, if it can play the sound of glass breaking, a scream, and a dull thud as well... well, then you're virtually there.

Re:The great thing (5, Funny)

Oktober Sunset (838224) | more than 6 years ago | (#19968463)

My computers don't usually scream when they are thrown out of the window; plus, it's more of a crash than a thud when they land. Are you sure you aren't throwing your colleagues out of the window? I know a lot of office workers, being dull and beige, can easily be mistaken for computers.

Re:The great thing (2, Funny)

Ant P. (974313) | more than 6 years ago | (#19968547)

My computers don't scream either, but the people in their trajectory usually do...

Re:The great thing (1, Funny)

Anonymous Coward | more than 6 years ago | (#19969499)

So, all we have to do is replace the trash can icon with an icon of a window, and we're set.

Moreover, we could replace the trash can icon with the Windows logo, for wider metaphor applicability. They are typically full of the same, after all.

Re:The great thing (2)

morgan_greywolf (835522) | more than 6 years ago | (#19967895)

Really. You can run applications in their own protected space, sealed off from the 'real' computer. I do this a lot -- I have QEMU-virtualized Windows XP and Linux machines that I can try all kinds of garbage in. I just back up the image file, and when/if I totally mess the thing up -- 'cp winxp-qemu.img.old winxp-qemu.img', for instance. Nice and simple.
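
To put that habit in script form, here's a minimal sketch of the snapshot-and-restore routine described above (image names match the example; the routine itself is illustrative):

    # save a known-good copy before experimenting
    cp winxp-qemu.img winxp-qemu.img.old

    # ...boot the VM and try whatever garbage you like...

    # roll back after breaking something
    cp winxp-qemu.img.old winxp-qemu.img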

Re:The great thing (5, Funny)

niceone (992278) | more than 6 years ago | (#19967901)

The great thing about virtual machines is that you basically can do whatever you want with them. Things you'd normally never do to your computer.

Same as virtual girlfriends.

This is very true. (0)

Anonymous Coward | more than 6 years ago | (#19968077)

I've been using an XP SP2 VM for downloading Cory Doctorow nudes. Wife needs to use the PC? She just uses the host OS, and my precious guest OS goes untouched.

Re:The great thing (-1, Troll)

Anonymous Coward | more than 6 years ago | (#19968165)

I believe this [imageshack.us] comic sums it up perfectly.

VM/370 guy here.... drop dead micro-brains (0, Troll)

Anonymous Coward | more than 6 years ago | (#19968263)

We in the mainframe VM world have been doing this for 40 years. I get a kick out of you microcomputer idiots constantly reinventing everything... badly.

Containerization (5, Funny)

Anonymous Coward | more than 6 years ago | (#19967801)

Sure, containerization might sound like a good idea... but if you find the word 'containerization' ugly NOW, wait until you see what furry abominations grow in the containers you forget about at the back of the work server for 2 months. >_>

Re:Containerization (3, Insightful)

mdd4696 (1017728) | more than 6 years ago | (#19967985)

Wouldn't a better word for "containerization" be "encapsulation"?

Whatever happened to "Sandboxing?" (4, Interesting)

JonTurner (178845) | more than 6 years ago | (#19968233)

Isn't this de facto evidence that sandboxing, which was supposed to be a key component of both Java's and .Net's security models, has either failed to deliver on its promise, or simply isn't well enough engineered to provide protection against rogue applications?

As has been said before, we need a way to grant applications permissions to use resources. We have that, to some degree, with firewalls and apps like ZoneAlarm/LittleSnitch, which ask you for permission before an application is allowed to "call home". But what about other resources -- for example, access to only a particular directory, or the ability to install a system-level event hook which acts as a keylogger?
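
For what it's worth, the closest thing Linux offers today is per-user rather than per-application. A rough sketch using iptables' owner match (the account name and ports are illustrative):

    # let the 'untrusted' account reach the web, and nothing else outbound
    iptables -A OUTPUT -m owner --uid-owner untrusted -p tcp --dport 80 -j ACCEPT
    iptables -A OUTPUT -m owner --uid-owner untrusted -p tcp --dport 443 -j ACCEPT
    iptables -A OUTPUT -m owner --uid-owner untrusted -j REJECT

That granularity gap -- user versus application -- is exactly the problem described above.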

Re:Whatever happened to "Sandboxing?" (1)

Skrynesaver (994435) | more than 6 years ago | (#19968315)

Indeed, or chroot jails? See Sun's containerization solution [sun.com].

Re:Whatever happened to "Sandboxing?" (1)

ckaminski (82854) | more than 6 years ago | (#19969521)

Sun's containerization (or OpenVZ, to a similar extent) is exactly what we want from our OSes. 90% of our problems in the server space come not from the overly broad power of our operating systems and frameworks, but from our default policy of "grant everything, and deny only the bad stuff". If we treated firewalls the way we treat our application servers, the result would be exactly what we're seeing now.

Java and .NET sandboxing does work, to an extent, but outside the web arena it doesn't apply to server-hosted applications. If application servers like JBoss could enforce a sandbox, that would be a step up, but they cannot. Java/.NET do not know they need to be sandboxed to such-and-such a directory. They operate at a level far above where this functionality needs to be.

When we can get to the point of saying "Application XYZ can access ports 443 and 80 on IPs 10.3.90.1-6, access any file in /ApplicationXYZ, connect to database server db1 via MySQL 4.1, and connect to db2 via MDAC 2.6," we can have more robust software architectures. In my opinion, the architecture of Windows precludes this at this time, so Unix platforms with OpenVZ-like support will evolve to support this functionality.

Or maybe not. Maybe we take the easy way out. Containers are easier to develop, to architect, and for end users to use than some SELinux+ for applications.

Re:Whatever happened to "Sandboxing?" (2, Insightful)

Sancho (17056) | more than 6 years ago | (#19969547)

chroot jails tend to be restrictive. You can't access all your entries in /dev, or if you can, you've removed a lot of the protection afforded by the jail in the first place.

Virtualization (or containerization... how awful!) generally allows this. Want to play with your hard drive driver? No problem.

Of course, it fails when you actually /do/ want direct access to the hardware. Can't test that new Nvidia driver in a containerized OS.
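
For reference, a minimal sketch of the kind of chroot jail being discussed (paths are illustrative and vary by distribution; a real jail needs whatever libraries 'ldd' reports for each binary you copy in):

    mkdir -p /jail/bin /jail/lib
    cp /bin/sh /jail/bin/
    # copy in the shared libraries the shell needs; 'ldd /bin/sh' lists them
    cp /lib/libc.so.6 /lib/ld-linux.so.2 /jail/lib/
    chroot /jail /bin/sh

Note there is no /dev in this jail at all, which is precisely the restrictiveness described above.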

Re:Whatever happened to "Sandboxing?" (5, Insightful)

TheRaven64 (641858) | more than 6 years ago | (#19968323)

I think it's more evidence that operating systems suck. The whole point of a modern operating system is to allow you to run multiple programs at once, without them interfering with each other. This is why we have filesystems (with permissions) rather than letting each process write to the raw device. This is why we have pre-emptive multitasking rather than letting each process use as much CPU as it wants. This is why we have protected memory, instead of letting processes trample each others' address space.

If you can't trust your OS to enforce the separation between processes, then you need to start re-evaluating your choice of OS.

Re:Whatever happened to "Sandboxing?" (3, Insightful)

afidel (530433) | more than 6 years ago | (#19968605)

That's funny, because ALL OSes suck (in fact all hardware and software suck; some just suck less). Even on the S/390, née z/OS, mainframes from IBM there is compartmentalization both in hardware and software. If an OS that's been around for over 40 years running the largest companies in the world isn't always trusted to enforce separation of processes, I don't see how any other OS stands a chance.

Re:Whatever happened to "Sandboxing?" (4, Insightful)

pla (258480) | more than 6 years ago | (#19968943)

If you can't trust your OS to enforce the separation between processes, then you need to start re-evaluating your choice of OS.

And for the most part, modern OSs handle that well. They do allow for a certain degree of IPC, but mostly, two processes not strongly competing for the same resources can run simultaneously just fine.

The problem arises in knowing what programs need what access... The OS can't make that call (without resorting to running 100% signed binaries, and even then, I personally lump plenty of "legitimate" programs in the "useless or slightly malicious" category), and we obviously can't trust the applications to say what they need. Most programs, for example, will need to at least write in their own directory, many need access to your home dir, some create files in your temp directory, some need to write in random places around the machine, some need logging access, some even need to write directly to system directories. Some programs need network access, but the majority don't (even though they may want to use it - I don't care if Excel wants to phone home, I don't use any of its features that would require network access and would prefer to outright block them). How does the OS know which to consider legitimate and which to disallow?

The concepts of chroot (and now registry) jails and outbound firewalling work well, as long as the user knows exactly what resources a given program will need access to; but even IT pros often don't know that ahead of time, and many well-behaved programs still snoop around in places you wouldn't expect.

The problem mentioned by the GP, with the likes of Java and .NET, arises from them still running on the real machine - they may waste CPU cycles running on a virtual CPU with what amounts to chroot'ed memory, but all of their actions still occur on the real system. Deleting a file really deletes a file.

"Real" VMs basically avoid the entire issue by letting even a highly malicious program do whatever it wants to a fake machine. It can have full unlimited access, but any damage ends when you halt the VM. Repairing worst-case destruction requires nothing more than overwriting your machine image file with a clean version (you could argue the same for a real machine, but "copy clean.vm current.vm" takes a hell of a lot less time than installing Win2k3, MSSQL, IIS, Exchange, and whatever else you might have running on a random server, from scratch).

Or, to take your argument one layer lower, I would tend to consider XP the untrusted app, and VMWare the OS.

Re:Whatever happened to "Sandboxing?" (1)

boris111 (837756) | more than 6 years ago | (#19968715)

Good point. Java led the way in "virtualization", but one of the main hooks (I feel) that makes VMWare so desirable is the ability to take ONE image file and move it to another server without any hassle. Is there anything in Java that simple?

Re:Whatever happened to "Sandboxing?" (0)

Anonymous Coward | more than 6 years ago | (#19968793)

Is there anything in Java that simple?
HelloWorld.class

Re:Whatever happened to "Sandboxing?" (1)

Verte (1053342) | more than 6 years ago | (#19969349)

Simple capability management? I expect we'll get there eventually. Most of the microkernels in development today have this functionality built in. On the other hand, they also have the possibly-vaporware feature built in ;) Expect to live with VMs until Linux goes the way of the dinosaur, in 64 million years.

Re:Containerization (1)

Red Flayer (890720) | more than 6 years ago | (#19968573)

Or even sequestration?

Re:Containerization (1)

jaweekes (938376) | more than 6 years ago | (#19969021)

Why use a word which works well when you can misuse an existing one [wikipedia.org] and confuse everyone?

Containerization is an ugly word... (1)

KevinColyer (883316) | more than 6 years ago | (#19969687)

so why use it when words like Isolation and Encapsulation do the job very well???

Re:Containerization (0)

Anonymous Coward | more than 6 years ago | (#19969803)

How about "containment"? I mean, that's what containers do, they contain.

But this solution is probably too simple to be moderatorized up.

Hmmm. Actually now I notice "containerization" doesn't directly equal "containment" after all. (The former refers to the application of bona fide containers, the latter just containing no matter how.) How about "containeering"? "Containerizement?" "Container-Fu?" Aaargh I give up -- YAY ENCAPSULATION!!!

Contain (3, Informative)

Anonymous Coward | more than 6 years ago | (#19967869)

The word is contain, people, not containerization.

Re:Contain (2)

goombah99 (560566) | more than 6 years ago | (#19968711)

Contain contains a conceptual context that must be decontextualized and dereified. Its reality becomes process, not product, in the virtual world of containerization. In short, Contain has lost its content.
--beatnik avatar.

Re:Contain (2, Informative)

Hal_Porter (817932) | more than 6 years ago | (#19969039)

You'll never be able to accumulatarize consultancy dollars if you speak like some hick from the Mid West. Take your Mactop to your favourite ReCaPrO, get yourself a vegan skinny hicaf latte and start learning the lingo from the blargocube.

Re:Contain (0)

Anonymous Coward | more than 6 years ago | (#19969173)

Lolwut? "Contain" is a verb, "containerization" is a noun. I would suggest "containment."

Re:Contain (2, Informative)

Bohnanza (523456) | more than 6 years ago | (#19969231)

"Containment" would even work.

Re:Contain (1)

asifyoucare (302582) | more than 6 years ago | (#19969371)

What he said, "containment". Certainly not "contain", a verb, to match "containerization", a noun.

Drunk and listening to Ten Years After!

Isn't this bad for performance? (1, Interesting)

Anonymous Coward | more than 6 years ago | (#19967899)

If you're "containerizing" every aspect of your system, doesn't this have big performance problems? CPU cache, message passing, memory management, DMA, IRQs, whatever?

What was wrong with traditional privilege isolation in Linux systems (running processes as different users, chroot, etc)?

Re:Isn't this bad for performance? (1)

afidel (530433) | more than 6 years ago | (#19968691)

The reality is it doesn't matter for the vast majority of applications. In most datacenters that haven't done virtualized consolidation, the average box is probably 1-30% utilized most of the time. The reality of large-scale server deployments is that boxes are generally assigned to applications or projects, and going back to load additional software on a box involves so much cost in testing, and carries enough risk, that buying additional hardware for the new app/project is downright cheap in comparison. It's only through technologies like Solaris containers and VMware that many shops are able to get a grip on server room sprawl. I know in my shop we put in 90 servers last year, going from 63 to 153; if we had been using containers and VMware it would probably have been a third of that.

VM's just allow so many opportunities (4, Interesting)

inflex (123318) | more than 6 years ago | (#19967917)

As a software developer, being able to take snapshots, clone, pause, rewind (via snapshots) and backup makes VM'ing worth the cost in CPU/performance.

It's proved so useful that I'm sincerely considering doing the same for my actual WWW server, so that if things ever go -bad- on the device I can either roll back or transparently transfer to another machine. The latter, due to the (mostly) hardware-agnostic nature of the VM setup, makes disaster recovery just that much simpler (sure, you still have to set up the host, but at least that's a simpler process than redoing every tiny little trinket again).
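
For a QEMU-based setup like the one above, the snapshot workflow can be sketched roughly like this, assuming a qcow2 image (the image and snapshot names are illustrative):

    qemu-img snapshot -c before-upgrade www.qcow2   # take a named snapshot
    qemu-img snapshot -l www.qcow2                  # list snapshots
    qemu-img snapshot -a before-upgrade www.qcow2   # roll back to it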

As another software developer... (5, Insightful)

Nursie (632944) | more than 6 years ago | (#19968017)

... that develops applications, mostly in C, I also find it extremely useful, especially when installing software. Some installers change the state of the system, and some problems only occur first time round. There is nothing else like the ability to take your blank Windows VM, copy it, install stuff, screw around with it in every possible way, and then, when you're done, just delete the thing. They also allow you to install stuff you just don't want on your native box but need to develop against.

And you still have that blank Windows install to clone again when you need it.

VMs are a fantastic dev tool.

Re:As another software developer... (3, Insightful)

inflex (123318) | more than 6 years ago | (#19968121)

I was nodding my head in agreement. Writing installers for your apps often takes longer than the app itself (or they're larger!), so yes, (also a C developer myself) being able to test the install, roll-back, try again... brilliant stuff.

Re:As another software developer... (1)

Wiseazz (267052) | more than 6 years ago | (#19969051)

And let's not forget the uninstall - it is frustrating in the extreme to test a complex install/uninstall cycle on a real machine. I never get the uninstall right the first time through, potentially leaving scattered remains of the app around to hose the next attempt at installing/testing.

The easier it is to test these things, the more likely you are to end up with a quality product. If it takes me half an hour to install, test, uninstall, test, clean up, etc., then it's likely I'm not going to do it as much as I probably *should*. VMs allow me to test not only more often, but more completely, with a broader range of scenarios.

Also for QA. (3, Interesting)

antdude (79039) | more than 6 years ago | (#19969659)

Many QA people, including myself, use VMs as well. Very useful with buggy builds. The best part is sharing the image. I can send a copy of my image to a developer with the reproduced issues without having him/her come over to see it on my real machine. We still use real machines for testing, but VMs are useful.

As yet another software developer... (1)

Gazzonyx (982402) | more than 6 years ago | (#19968425)

Indeed! I was programming an app which required me to test it on a completely clean Windows box, as well as at different patch levels (vanilla, SP1, SP2, current) for both Home and Pro versions, which meant I'd have had to reinstall after each test run. Being able to install each from CD, snapshot the clean machine, and then zip a copy of the folder and drop it onto my server (in case I killed or corrupted the initial snapshot) meant I could have a clean machine after each run within a few seconds. Furthermore, using VMware I could mount an ISO to all the virtual machines as the CD-ROM drive, so I just had to compile and drop the binary into the ISO and it was ready on all 8 iterations of Windows. Lastly (and the first on-topic thing I'll say), due to the nature of the project I had to infect the Windows virtual machines while they were on my dev box (for lack of another sufficiently powered box at the time), which is great when I'm physically (at the file-system level) removed from an infected box! Without VMware, I'd still be writing the app.
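
The ISO trick is easy to reproduce. A rough sketch with mkisofs (directory and file names are illustrative):

    # rebuild the ISO after each compile; -J/-r add Joliet and Rock Ridge naming
    mkisofs -J -r -o build.iso build/

Point each VM's virtual CD-ROM drive at build.iso and every guest sees the fresh binary at once.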

Re:As another software developer... (0)

Anonymous Coward | more than 6 years ago | (#19968495)

VM as a dev tool? It is a sorry OS that actually needs that. I can write software and test it - and it simply doesn't screw up the system. Well, unless it is something really special like a file-system resizer or similar system tool. An *app* sure doesn't screw up the system, no matter how buggy it might be. And uninstall is not a problem . . .

Re:As another software developer... (1)

BVis (267028) | more than 6 years ago | (#19968589)

It is a sorry OS indeed. Sadly, that OS is installed on more than 90% of the world's desktops, so if you're a developer and you want your software to be used and/or sold, you're stuck on Windows.

Apps screw up the system all the time by hooking calls, inserting themselves in networking chains, or leaving cruft behind in the registry. When you're building an uninstaller, you have to make sure it grabs all this junk and leaves the system in a reasonable state, and that's where a VM has its usefulness; you can sit there and install/uninstall/debug/install/uninstall/debug all day long, and be SURE that you're starting with a clean slate each time.

Re:VM's just allow so many opportunities (0)

Anonymous Coward | more than 6 years ago | (#19968447)

the (mostly) hardware-agnostic nature of the VM setup makes disaster recovery just that much simpler

I am currently working on this for a client. She depends on her computer for business and downtime costs her. I have used Ghost for years to image the system and all the software she depends on but it is all Windows-based; hard disk failures are not a problem, but a bad motherboard needs a complete reinstall and reconfigure. I am moving her to a virtual environment to ease backup and recovery issues.

Containerization != Virtualization (1, Insightful)

tgd (2822) | more than 6 years ago | (#19967943)

I'm sorry, that's an attempt to jump on the virtualization bandwagon. Use that word these days and people throw money at you.

Application isolation is not virtualization; it's nothing more than shimming the application with band-aid APIs that fix deficiencies in the original APIs. Calling it virtualization is a marketing and VC-focused strategy; it has nothing to do with the technology.

Re:Containerization != Virtualization (0)

Anonymous Coward | more than 6 years ago | (#19968277)

The. Applications. Are. Running. On. A. (wait for it) Virtual. Machine.

When something is running on a Virtual Machine, that means it's being virtualized.

i.e.: what the fuck are you talking about, you insipid retard?

Virtualization = Containerization (1)

Alphager (957739) | more than 6 years ago | (#19968561)

Sure, but if you use a VM for each application, you have easy containerization.

Re:Containerization != Virtualization (1)

postbigbang (761081) | more than 6 years ago | (#19968623)

No, virtualization allows application instantiation, and therefore 'containerizes' the application instance as an atomic/discrete entity for manipulation.

It also abstracts the instance from a physical hardware location, provided the hardware resource needs are uniform. It also permits throttling application resources or, conversely, changing application resource capacities in a nearly ad hoc way.

If you accept this premise, containers are an effect of virtualization, and a mathematical relationship shows containers as a subset and by-product of virtualizing -- a subset of functionality.

I'd say it's both (4, Informative)

Toreo asesino (951231) | more than 6 years ago | (#19967945)

I've used virtualization both for containerisation and to consolidate boxes...

At my previous company, we invested in two almighty servers with absolute stacks of RAM in a failover cluster. They ran 4-5 virtual servers for critical tasks... each virtual machine was stored on a shared RAID5 array. If anything critical happened to the real server, the virtual servers would be switched to the next real server and everything was back up again in seconds. The system was fully automated too, and frankly, it saved having to buy several not-so-meaty boxes while not losing much redundancy and giving very quick scalability (want one more virtual server? 5-minute job. Want more performance? Upgrade the redundant box and switch over the virtual machines).

The system worked a treat, and frankly, the size & power of the bigger, more important fewer servers gave me a constant hard-on.

Re:I'd say it's both (1)

swb (14022) | more than 6 years ago | (#19968965)

And one enables the other. You really want to be able to dedicate boxes to specific services, but you also can't have a zillion boxes. VMs allow some slack to at least get the most annoying (*cough*BES*cough*) and least cooperative stuff on their own boxes.

Re:I'd say it's both (2, Interesting)

good soldier svejk (571730) | more than 6 years ago | (#19969261)

In fact I'd say that in my data center the driver used to be containerization and is increasingly consolidation. The reasons are radically increased power costs and increasingly complex disaster recovery issues. Virtualization offers significant advantages in both areas.

I guess that's true (1)

smchris (464899) | more than 6 years ago | (#19967955)

I've only had an x86 box at home since the 80s, and only this year did putting XP Pro on a QEMU disk image with a Samba share _finally_ get me to rigidly separate the OS, which I can tar up, zip, and burn to DVDs for backup, from the data on the Samba share, which I can back up regularly. Now if I can just benefit from the example and get more professional about the other Linux machines in the home.

Really about rPath (3, Informative)

rowama (907743) | more than 6 years ago | (#19967963)

In case you're interested, the article is really a review of rPath, a virtual appliance builder based on a custom-tailored GNU/Linux...

PHP 6 (3, Informative)

gbjbaanb (229885) | more than 6 years ago | (#19967989)

I read somewhere (possibly on the PHP bug system) that they were considering scrapping most of the security features we've all grown to... well, hate really, and replacing them all with a virtualisation system. I did think at the time that the virtualisation system they'd implement to keep PHP-based vhosts separate and secure would be to run Apache in many virtual OSes.

I suppose jailing applications is a well-known way of securing them; this really just improves on that, but with much more overhead. I wonder if anyone is thinking about providing "lightweight" virtualisation for applications instead of the whole OS?

Re:PHP 6 (1)

fjf33 (890896) | more than 6 years ago | (#19968339)

The OLPC is looking at that. Actually, that is 'almost' their security framework.

Re:PHP 6 (1)

Yetihehe (971185) | more than 6 years ago | (#19968713)

It's already done. It's called an operating system.

Re: 'lightweight' virtualisation (1)

Herve5 (879674) | more than 6 years ago | (#19969523)

Wine does this, but from an emulation point of view; it's not virtualisation, I think...

Re:PHP 6 (1)

wazoox (1129681) | more than 6 years ago | (#19969797)

Yeah, PHP is so blatantly insecure by design that it's probably broken beyond any hope of repair, and should be jailed.

It's all about (4, Insightful)

suv4x4 (956391) | more than 6 years ago | (#19968033)

It's becoming increasingly clear that the most important use of virtualization is not to consolidate hardware boxes but to protect applications from the vagaries of the operating environments they run on. It's all about 'containerization,'

Don't trust "it's all about" or "it turns out that to the contrary" or "set to fully replace" statements, especially when there's a lack of evidence for what is claimed.

Hosting services use virtualization to offer 10-20 virtual servers per physical machine, and I and many people I know use virtual machines to test configurations we can't afford separate physical machines for.

So even though it's also about "containerization" (is "isolation" a bad word all of a sudden?), it's not ALL about it.

Re:It's all about (1)

amccaf1 (813772) | more than 6 years ago | (#19969195)

Don't trust "it's all about" or "it turns out that to the contrary" or "set to fully replace" statements, especially when there's lack of evidence of what is claimed.

Ha! Very true. I once had a philosophy professor tell us that you could make any statement seem true simply by prefacing it with: "But it turns out that..."

It's all about context (1)

spun (1352) | more than 6 years ago | (#19969281)

I suppose you think when someone claims to be able to eat a horse, they actually have the capacity to devour an entire equine. Relax, it's a figure of speech. [wikipedia.org]

Fuck interop news (0)

Anonymous Coward | more than 6 years ago | (#19968047)

Why are we reading stuff from a site like that? If they're not already shilling, they will be as soon as MS has its hypervisor ready.

If I wanted containers, I'd be using Solaris, jails, or chroot.

VMs are overkill for "containerization" (3, Informative)

assantisz (881107) | more than 6 years ago | (#19968051)

Solaris has Zones [sun.com] for that exact purpose. Lguest [ozlabs.org], I believe, offers something similar for Linux.

Linux-VServer for "containerization" (0)

Anonymous Coward | more than 6 years ago | (#19968985)

Linux-VServer as well. It's even used in the OLPC.
http://en.wikipedia.org/wiki/Linux-VServer [wikipedia.org]

Virtualizing a system can be cheap if the correct virtual machine is chosen. For instance, Linux-VServer (http://linux-vserver.org/Overview) is a very cheap virtual machine that can easily be used to split a Linux system into several separate security containers, each one running an independent application/service. It uses copy-on-write to share the same system files until one of the containers modifies a file. Only then is the file duplicated on disk, and even then only the modified blocks, so it is very cheap on resources.
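
For a feel of how lightweight this is in practice, a rough sketch using the util-vserver tools (the guest name, address, and distribution are illustrative):

    # build a Debian guest, then start it and get a shell inside it
    vserver webguest build -m debootstrap --hostname webguest \
        --interface eth0:192.168.0.10/24 -- -d etch
    vserver webguest start
    vserver webguest enter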

This paper has an interesting description of Linux-VServer:
Linux-VServer - Resource Efficient OS-Level Virtualization - https://ols2006.108.redhat.com/2007/Reprints/potzl-Reprint.pdf [redhat.com]

"Linux-VServer is a lightweight virtualization system used to create many independent containers under a common Linux kernel. To applications and the user of a Linux-VServer based system, such a container appears just like a separate host. The Linux-VServer approach to kernel subsystem containerization is based on the concept of context isolation. The kernel is modified to isolate a container into a separate, logical execution context such that it cannot see or impact processes, files, network traffic, global IPC/SHM, etc., belonging to another container."

"While a typical Linux distribution install will consume about 500MB of disk space, our experience is that [with a copy-on-write file system] the incremental disk space required when creating a new container based on the same distribution is on the order of a few megabytes."

It is so cheap that even the OLPC laptop (not the most powerful computer on Earth...) uses it!
http://www.olpctalks.com/ivan_krsti/ivan_krstic_talks.html [olpctalks.com] - interesting bit: "The interesting thing about this, by the way, is people are terrified of how you are going to do virtualization on a 466 megahertz CPU. With Linux-VServer, the overhead you pay is 32k per task struct, but there is 0% measurable CPU overhead with up to 65,000 virtual machines running. I'll let that sink in for a few seconds. It lets us do full network-stack isolation, it lets us completely isolate the filesystem, it lets us do this copy-on-write mode with just a twist on what immutable links do, so we can actually do this at no overhead on the file system. It provides various hooks which we can use, we can add scheduler biases for system services etc. directly in the kernel. There are no policies with this, so the mental model is simple. We tell our application developers essentially, the mental model is that you are the only application executing on the machine, and you can use a number of the interfaces that we provide to interface with the rest of the system, but essentially you are the only application running on the machine."

Re:VMs are overkill for "containerization" (1)

Sancho (17056) | more than 6 years ago | (#19969641)

You don't get anything quite like Zones on Linux, and generally speaking, these alternatives can't run a proprietary OS.

We've had UML and chroot for quite a while in Linux, but they're equally limited. With virtualization, I can run Windows on my Linux box, which is (to me) where the real use is.

Buzzword alert! (5, Insightful)

drspliff (652992) | more than 6 years ago | (#19968053)

With virtualization like Linux-VServer, Xen, VMware, etc., there are two main reasons people use it:

  1) Consolidation
  2) "Containerization", or whatever they're calling it today.

The company I work for uses multiple virtual servers to keep applications separate and to make migrating them from machine to machine easier, which is a common use for VMware (e.g. the appliance trend). So you're trading performance and memory usage for security and robustness/redundancy.

Across maybe 100-200 servers, the number of vservers we have is astonishing (probably around 1200 to 1500, which is a bit of a nightmare to maintain), all hosting customer applications. When an application starts to use more resources, its vserver is moved over to a machine with fewer vservers on it, and gradually to its own server, which in the long run saves money & downtime.

The other major industry using them is the hosting industry, allowing customers a greater amount of personalization than the one-size-fits-all cpanel hosting companies. This is the real industry where consolidation has increased, biting into the hardware market's potential sales, because thousands of customers are now leasing shared resources instead of leasing actual hardware.

Either way, the number of new (virtual) machines and IP addresses, all managed by different people, is becoming a management nightmare. Now everybody can afford a virtual dedicated server on the internet regardless of their technical skills, which often ends up as a bad buy (lack of memory and resource constraints compared to shared hosting on a well-maintained server).

Re:Buzzword alert! (0)

Anonymous Coward | more than 6 years ago | (#19968163)

Now everybody can afford a virtual dedicated server on the internet regardless of their technical skills, which often ends up as a bad buy (lack of memory and resource constraints compared to shared hosting on a well-maintained server).

Shared hosting is fine for static HTML; it's a security nightmare for modern web apps.

For every inexperienced or negligent VPS admin, there is one who is more experienced or diligent than typical hosting company employees. The issue for VPS providers then is to effectively segregate the inexperienced and the idiots.

Re:Buzzword alert! (1)

GiMP (10923) | more than 6 years ago | (#19968945)

Shared hosting is fine for static HTML; it's a security nightmare for modern web apps.


Though perhaps rare, there are providers that are very keen on security on shared hosts. I do agree though that there are likely many companies for which this is not true. It is a shame, though, that the majority of bad apples spoils it for the few good ones ;-)

For every inexperienced or negligent VPS admin, there is one who is more experienced or diligent than typical hosting company employees. The issue for VPS providers then is to effectively segregate the inexperienced and the idiots.


Good switching policies can keep things in order and make sure that VPS owners don't step on each other's toes. Offering different pricing tiers can help as well. It often seems that the big enterprise places can be the toughest to deal with, but they also tend to pay more. Not that their staff isn't brilliant, they can be, but they're just doing their jobs. On the other hand, the passionate hobbyist will tend to put time into doing things right.

Re:Buzzword alert! (1)

drspliff (652992) | more than 6 years ago | (#19969439)

With well-chosen resource limits you don't care about vservers stepping on each other's toes; what I'm worried about is people running these machines and not patching them or taking any sort of approach to security other than "It's running Linux, it must be secure".

Now, how long do you expect it to take for them to realize their VPS has been compromised by spammers/hackers/script kiddies etc.? Probably much longer than it would take the hosting company, because the hosting company is actively looking out for these things.

Virtualization can't protect from the OS (1)

grahamtriggs (572707) | more than 6 years ago | (#19968063)


What do you run inside a virtual machine - an OS!!

What do you run the virtual machine on - an OS!!

So, any application now has to withstand two OSes, not just one. Isolation can be an important part of virtualization, but it's about isolating applications from each other, not from the OS.

Re:Virtualization can't protect from the OS (2, Informative)

GiMP (10923) | more than 6 years ago | (#19968243)

> What do you run the virtual machine on - an OS!!

Unless you're running Xen, unless you consider Xen an OS. But this brings us back to the question, "what is an OS?"

Xen is a kernel for managing virtualized guests; it sits at ring 0, where a traditional OS normally resides. Xen requires that a single guest machine be set up to be booted by default, and that guest receives special privileges for purposes of managing Xen. This special guest is called the "dom0", but it is, for all other intents and purposes, just another virtual machine.
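
To make the dom0/domU split concrete, here is a minimal sketch of a Xen 3.x guest definition (all names and paths are illustrative):

    # /etc/xen/guest1.cfg
    kernel = "/boot/vmlinuz-2.6-xen"
    memory = 256
    name   = "guest1"
    disk   = ['file:/var/xen/guest1.img,xvda,w']
    vif    = ['bridge=xenbr0']

The dom0 admin then boots it with 'xm create guest1.cfg' -- the privileged guest managing its unprivileged siblings.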

Very fishy and intriguing... (4, Insightful)

jkrise (535370) | more than 6 years ago | (#19968079)

From the referenced article:

why did Intel just invest $218.5 million in VMware? Does Craig Barrett have a death wish? Or maybe he knows something IDC doesn't? There has got to be a little head scratching going on over in Framingham just now.
As I replied to an earlier thread on the Linux kernel being updated with 3 VMs, this sounds very fishy and intriguing. Virtualisation is simply a technique of emulating the hardware in software - memory, registers, interrupts, instruction sets, etc. If VMs only emulate standard instructions and functions, then the Intel processors will be useless as a platform for reliable DRM or Trustworthy Computing purposes, where the hardware manufacturer controls the chip - not the customer or software developer. If the virtualisation vendor is also secretive and opaque about his software, that is ideal for Intel, because they will now be able to re-implement the secretive features in the VM engines.

The obvious explanation for Barrett's investment (which will net Intel a measly 2.5% of VMware's shares after the forthcoming IPO) is that Intel believes virtualization will cause people to buy more, not less, hardware.
True virtualisation will cause the opposite effect - people will buy less hardware. It is simply amazing that Windows 98, for instance, can deliver the same (and often better) end-user experience and functionality that Vista does, but with only 5% of the CPU MHz, RAM, and disk resources. And so virtualisation will allow 20 Windows 98 instances on the hardware required for a single instance of Vista without degrading the user experience.

That can be a chilling thought to companies like Intel, Microsoft, or Oracle. Also, the carefully woven, convoluted DRM and TCPA architectures that consume gazillions of instructions and slow performance to a crawl... will simply be impossible if the virtualisation layer simply ignores these functions in the hardware. Which is why I felt it very strange for the Linux kernel team to get involved in porting these VMs in order to allow Vista to run as a guest OS. It shouldn't have been a priority item [slashdot.org] for the kernel team at all, IMO.

Re:Very fishy and intriguing... (1)

MichaelSmith (789609) | more than 6 years ago | (#19968363)

True virtualisation will cause the opposite effect - people will buy less hardware.

But every desktop user is going to have a CPU in their machine, and the number of CPUs in the big server farms isn't going to change much, because they pile on capacity to suit the application. Odd sites like the one I work at will use VMware where they have a requirement for a calendar server running Linux 2.2 (I am not making this up) and don't want to waste a box on it. Fair enough, but that's not a big market to lose.

Re:Very fishy and intriguing... (1)

GiMP (10923) | more than 6 years ago | (#19968503)

True virtualisation will cause the opposite effect - people will buy less hardware.


Perhaps, though for myself, this is untrue. I run a hosting provider. Back in the day, we simply needed a few large hosting machines and that was sufficient -- providers could pile accounts onto machines. Even medium-sized companies could get by with less than 10 shared-hosting servers.

However, that has changed with VPS... We can only fit a few customers onto each machine. The more customers we have, the more virtual machines we have, the more resources we require. However, you're right about one thing - we will be buying less hardware. Advances in multi-core processors will mean that we will be needing less space... for now.

Currently my company could upgrade 20 servers from single-core to 8-core, plus load our systems with 32 GB of RAM, for much less than it would cost us to buy 160 single-core machines. Our savings would not necessarily go to Intel/AMD (we would pay them a bit more) but would come from the amenities: KVM units, KVM cables, switched power distribution units, air-conditioning units, generators, UPS units, power, and staffing (someone has to put that stuff together!).

Unfortunately, I'm afraid we're not going to see 16-core machines on x86 for some time at any reasonable price, though it might already be possible today. I wouldn't mind seeing a quad 4-core x86 processor system with 64GB of RAM.

Re:Very fishy and intriguing... (1)

afidel (530433) | more than 6 years ago | (#19968987)

You should see 16-core machines with 64GB for a "reasonable" price by fall. The HP DL585 G2 will be upgradable to AMD Barcelona. The estimated cost of a four-way quad-core machine with 64GB of RAM is about $30K by my estimates; that's a 20% premium over a similar machine with near-top-of-the-line quad dual-cores today. I know IBM and Dell both have four-socket machines that are prequalified for Barcelona upgrades as well.

Obvious and redundant? (2, Informative)

ls671 (1122017) | more than 6 years ago | (#19968099)

This is kind of obvious. I used to use more machines for security reasons; now I use fewer machines, but they are more powerful. When you do server consolidation, it implies that applications that used to run on different hardware for security and stability reasons will now be running on the same hardware within different VMs. So how can they say "protect applications from the vagaries of the operating environments" is opposed to "consolidating hardware boxes"?

"Consolidating hardware boxes" implies "protecting applications from the vagaries of the operating environments" -- you just do it with fewer machines.

I use virtualization because it leaves me with fewer physical servers to manage; "protecting applications from the vagaries of the operating environments" was already done before virtualization. So virtualization doesn't help me protect applications from the vagaries of the operating environments; it helps me because I have fewer servers to manage.

Re:Obvious and redundant? (1)

DaveCar (189300) | more than 6 years ago | (#19969167)


I think it's like when you have an application that is certified with a particular OS configuration and set of patches, etc., and another application which would require a conflicting setup. You can run all your applications in a known-good setup and not worry about updates to one application (and the OS dependencies it drags in) affecting another. You could freeze your package manager at a certain configuration for an application so random OS updates don't go breaking things. Those kinds of vagaries.

Containerization is a stupid word, I won't use it! (0)

Anonymous Coward | more than 6 years ago | (#19968117)

What's wrong with 'compartmentalized', 'compartmentalization', and 'compartmental'? I think most people understand what they mean. And they sound less ghey too.

Node Locking (4, Interesting)

Pvt_Ryan (1102363) | more than 6 years ago | (#19968167)

I use VMware servers for software that is node-locked. Node locking is usually done by a machine's MAC address, and I find that using VMs reduces downtime if either host or client fails. If the host dies and we can recover the VM, we just copy it to another host and run it. If the client dies, the great thing is I just create a new VM, change its MAC address to match the dead one, and reinstall my licence files, saving me from having to re-register all of the licences to the "new" machine. Hardware consolidation also plays a large part in my use of VMs, but the main reason is recoverability -- so much so that all my DCs are on VMs. If their host dies (hardware other than HDD), I can either pull the disks and put them in another machine, or, if my replication has succeeded more recently, just start my backup copy of the DC and let it update from the domain. Total downtime is about 15 minutes tops.
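
For the curious, pinning the MAC in VMware is a two-line edit to the guest's .vmx file. A sketch (the address is illustrative; VMware expects static addresses within its own 00:50:56 range):

    ethernet0.addressType = "static"
    ethernet0.address = "00:50:56:00:12:34"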

Who decides most? (1)

ancientt (569920) | more than 6 years ago | (#19968193)

Is there actually a metric somewhere of why companies are turning to virtualization? We are doing it for stability of applications to a very small degree, but also for development ease, backup ease, and in large part to consolidate and use hardware more efficiently. What about you -- why are you considering/using/investigating virtualization?

Most important? (1, Insightful)

Anonymous Coward | more than 6 years ago | (#19968211)

the most important use of virtualization is not to consolidate hardware boxes but to protect applications from the vagaries of the operating environments they run on

Most important means different things to different people.

In the real world, to run a reasonably reliable application requires a modern rackmount server with remote out-of-band management, redundant power supplies and RAID. The most common failure modes for computers are hard disk and power supply failures, and this protects you from both. Remote management lets you control & reboot the machine from offsite.

These kinds of servers are available off the shelf from any major vendor (Dell, HP, IBM, etc) and will run you $2000 or so. Given the speed of computers today, that server will run most apps really, really fast. In fact, many apps will rarely go above 10% utilization (you do monitor your servers with SNMP, right?).
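
If you don't, it's cheap to start. A rough sketch that polls per-CPU load over SNMP (the hostname and community string are made up; it assumes the net-snmp command-line tools and an SNMP agent on the server):

    import subprocess

    HOST = "appserver1.example.com"  # hypothetical
    COMMUNITY = "public"
    # hrProcessorLoad from HOST-RESOURCES-MIB: per-CPU utilization, 0-100.
    OID = "1.3.6.1.2.1.25.3.3.1.2"

    out = subprocess.run(
        ["snmpwalk", "-v2c", "-c", COMMUNITY, "-Oqv", HOST, OID],
        capture_output=True, text=True, check=True,
    ).stdout.split()

    loads = [int(v) for v in out]
    print("average CPU: %d%% across %d cores" % (sum(loads) / len(loads), len(loads)))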

So, to get a reliable server with next-day onsite parts replacement, you have to buy far more server than you need. Many (most?) data centers are full of servers like this.

For one software project I'm working on, the vendor recommends 5 servers: one for Oracle, two for Crystal Reports, and two application servers. The vendor recommends hardware costing $40,000. This is for a custom software app that will have 5 users. Yes, 5 users, and it's not a complex app that demands a lot of performance. From talking to other customers, I know utilization rarely goes above 3%. Quite a waste, even though the total project cost is $200,000.

Hardware consolidation with VMware can lead to very big savings in hardware, colocation, power, cooling, and admin costs.

And if you get the VMotion software from VMware, you can move a virtual machine from one server to another while it is running, without skipping a beat. That is very, very useful. Need to take your real server down for maintenance? Move the virtual machines to another server. Need to do your end-of-month reconciliation? Move the VM from the slow backup server to the big fast number cruncher.
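
The same move can be scripted against the vSphere API rather than clicked. A rough sketch using the pyVmomi Python bindings (every name and host here is made up; treat it as an illustration of the call, not a production recipe):

    from pyVim.connect import SmartConnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret")
    content = si.RetrieveContent()

    def find(vimtype, name):
        # Walk the inventory for the named object of the given type.
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vimtype], True)
        return next(o for o in view.view if o.name == name)

    vm = find(vim.VirtualMachine, "month-end-batch")          # hypothetical VM
    target = find(vim.HostSystem, "cruncher.example.com")     # hypothetical host

    # Live-migrate the running VM to the faster host (VMotion).
    task = vm.Migrate(pool=None, host=target,
                      priority=vim.VirtualMachine.MovePriority.highPriority)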

Makes sense to me (3, Informative)

jimicus (737525) | more than 6 years ago | (#19968227)

I run a whole bunch of virtual servers and that's exactly what I'm doing.

It's fantastically handy to be able to install and configure a service knowing that, no matter how screwed up the application is (or, for that matter, how badly I screw it up), it's much harder for that application to mess up other services on the same host, and much harder for existing services to mess up the application I've just set up.

Add to that: anyone who says "Unix never needs to be rebooted" has never dealt with the "quality" of code you often see today. The OS is fine; it's just that the application is quite capable of rendering the host so thoroughly wedged that no app will respond, SSH won't get you in, and you can't even get a terminal on the console. But yeah, the OS itself is apparently still running fine, so there's no need to reboot it.

This way I can reboot virtual servers which run one or two services rather than physical servers which run a dozen or more services.

Granted, I could always run Solaris or AIX rather than Linux, but then I'd be replacing a set of known irritations with a new set of mostly unknown irritations, with the added wrinkle that so much Unix software never actually gets tested on anything other than Linux these days that I could well find myself with just as many issues.

/Container/ization? Bad, bad lingo. (1)

3278 (1011735) | more than 6 years ago | (#19968257)

Whatever was wrong with the vastly less unpleasant term "compartmentalization," which is already, you know, a word?

Application Deployment (1)

oglueck (235089) | more than 6 years ago | (#19968275)

So in the future we will release VM images to our customers instead of rpm packages and setup files? OK, why not. It could ease deployment of highly customizable enterprise software, since you basically ship all the OS configuration with it. Sounds cool. No more telling the sysadmin to open ports, create mount points, set permissions, install init scripts, update this and that library, etc.

Let Me Be the First to say "Duh!" (4, Insightful)

Thumper_SVX (239525) | more than 6 years ago | (#19968393)

Well, yes and no.

As I keep telling people when I work with virtualization, it does not necessarily lead to server consolidation in the logical sense (as in instances of servers); rather, it tends to lead to server propagation. This is to be expected: I/O will generally be lower for a virtual machine than for a physical one, which in some circumstances requires adding another node for load balancing. However, this is not always the case.

Virtualization DOES help lead to BOX consolidation; as in it helps reduce the physical server footprint in a datacenter.

Let me give you my viewpoint on this. Virtualization is generally leveraged as a tool to consolidate old servers onto bigger physical boxes. These old servers (out of warranty, breaking or dying, and so on) usually have lower I/O requirements anyway, so they often see a speed boost on the new hardware, or at the very least performance stays consistent. Where new applications are put on virtual platforms, however, the application's requirements quite often cause server propagation because of the I/O constraints. This is generally a good thing, as it encourages developers to write "enterprise ready" applications that can be load balanced instead of focusing on stand-alone boxes with heavy I/O or CPU requirements. That is good for people like me, as it provides a layer of redundancy and scalability that otherwise wouldn't be there.

However, the inevitable cost of this is management. While you reduce the physical footprint, there are more server instances to manage, so you need a larger staff for your server infrastructure, not to mention the specialized staff managing the virtual environment itself. That is not in itself a bad thing, and it might even drive better management tools, but it needs to be considered in any virtualization strategy.

Generally, Wintel shops are implementing more new applications these days, particularly since most older applications have been, or need to be, upgraded to support newer operating systems (2003 and the upcoming 2008). The net effect of everything I've mentioned is an increase in server instances even while the physical footprint decreases.

"Containerization" (yuck!) is not new by the way. This is just someone's way of trying to "own" application isolation and sandboxing. People have done that for years, but I definitely see more of it now that throwing up a new virtual machine is seen as a much lower "cost" than throwing up a new physical box. The reality of this is that virtualization is VERY good for companies like Microsoft who sell based on the instances of servers. It doesn't matter if it's VMWare or some other solution; licensing becomes a cash cow rapidly in a virtualized environment.

Where I work we've seen about 15% net server propagation in the process of migrating systems so far. Generally, low-load stuff like web servers virtualizes very well, while I/O-intensive stuff like SQL does not. However, a load-balanced cluster pair of virtual machines running SQL on different hardware can outperform a single SQL instance running directly on equivalent host hardware. That means architecture changes are required and more software licenses are needed, but the side effect is a more redundant, reliable and scalable infrastructure, which is definitely a good thing.

I am a big believer in virtualization. It harks back to the mainframe days somewhat, but that isn't a bad thing either. The hardware vendors are starting to pump out some truly kick-ass "iron" that can support the massive I/O that VMs need to be truly "enterprise ready". I am happy to say that I've been on the leading edge of this for several years, and I plan to stay on it.

Horrible word (1)

Nefarious Wheel (628136) | more than 6 years ago | (#19968419)

I prefer "encapsulation" myself.

Re:Horrible word (1)

greedyturtle (968401) | more than 6 years ago | (#19969159)

Hear, hear!

Re:Horrible word (1)

TeknoHog (164938) | more than 6 years ago | (#19969809)

You'd think that the noun pertaining to the verb "contain" would be "containment". But that would just be too easy. Since the software is running inside a container, then obviously the buzzword must include the whole of "container".

VM clustering also allows more redundancy (0)

Anonymous Coward | more than 6 years ago | (#19968453)

Virtualization also lets less-important individual OS instances be made highly available. In some cases we have several tier-2 applications that we would like redundancy for but cannot justify a second physical box for each. We can, however, justify two clustered VM nodes that host all of those smaller applications, each on its own OS for containerization, and gain the reliability that lets us sleep a little easier at night.

Completely wrong (1)

csoto (220540) | more than 6 years ago | (#19968731)

The reason we run VI3 is so that we can deploy servers on demand. There's no need to prep hardware; you just right-click and deploy. And yes, the initial impetus was to consolidate from about 20 hardware servers down to two; we now run about 40 virtual servers on 4 octo-core servers, so consolidation is definitely at work here. "Containerization" is a stupid word, as it's entirely possible (100% probable, in fact) with non-virtual deployment. Virtualization is about flexibility in stack deployment, but mostly it serves to provide more stacks per core than is possible on similarly priced hardware.
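
Underneath, that right-click is more or less a clone of a base VM. A rough pyVmomi sketch of the same thing (all names are invented; it assumes the source is a regular powered-off base VM rather than a marked template, so it still has a resource pool):

    from pyVim.connect import SmartConnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret")
    content = si.RetrieveContent()

    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    base = next(v for v in view.view if v.name == "w2k3-base")  # hypothetical

    # Clone the base image to a fresh server instance and power it on.
    spec = vim.vm.CloneSpec(
        location=vim.vm.RelocateSpec(pool=base.resourcePool),
        powerOn=True, template=False)
    task = base.Clone(folder=base.parent, name="newserver01", spec=spec)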

specialized Black box systems (1)

alch (30445) | more than 6 years ago | (#19968741)

The trend I expect is the deployment of isolated "black boxes". For example, instead of installing Red Hat and deploying our software on top of it, we would just ship an already-configured VM. I know there are license issues to work out, but in the end, deploying specialized VMs may well be the best approach.

Nothing new here... -or- history repeats itself (3, Informative)

cwills (200262) | more than 6 years ago | (#19968967)

IBM's mainframe VM operating system has been available since the late '60s, and it went through the same phases that VMware, Xen, etc. are going through now. Initially VM was used for hosting multiple guest systems (for a good history, see VM and the VM Community: Past, Present, and Future [princeton.edu]; PDF warning), but a small project, the Cambridge Monitor System (CMS), quickly became an integral part of VM. CP provided the virtualization and CMS provided a simple single-user operating system platform.

Within a VM system, one will now find three types of systems running in the virtual machines.

  1. Guest systems, such as Linux, z/OS, z/VSE, or even z/VM
  2. General users using CMS in a PC-like environment (sorry, no GUIs, and yes, there are arcane references to card punches, readers, etc. -- but then, why does Linux still have TTYs?). In the heyday before PCs, CMS provided an excellent end-user environment for development, as well as a general computing platform.
  3. And finally Service Virtual Machines (SVMs).

It is these Service Virtual Machines that correspond to the topic of the original post. An SVM usually provides one specific function, and while there may be interdependence between SVMs (for example, the TCPIP SVM that provides the TCP/IP stack and each of the individual TCP/IP services), they are pretty much isolated from each other. A failure in a single SVM, while disruptive, usually doesn't impact the whole system.

One of the first SVMs was the Remote Spooling Communications Subsystem (RSCS). This service allowed two VM systems to be linked together via some sort of communication link -- think UUCP.

The power of SVMs is in the synergy between the hypervisor and a lightweight platform for implementing services. The lightweight platform itself doesn't provide much in terms of services: there is no TCP/IP stack, no "log in" facility (it relies on the base virtual machine login console), and maybe not even any paging (the base VM system manages a huge address space instead). What a lightweight platform does provide is a robust file system, memory management, and task/program management. In IBM's z/VM product, CMS is an example of a lightweight platform. The Group Control System (GCS) is another example (GCS was initially introduced to provide a platform to support VTAM, which was ported from MVS).

Part of the synergy between the hypervisor and the SVMs is that the hypervisor needs to provide a fast, low-overhead inter-virtual-machine communication path that is not built upon the TCP/IP stack. In other words, communication between two virtual machines should not require that each virtual machine contain its own TCP/IP stack with its own IP address. Think more along the lines of the IPC or PIPE model between the SVMs.
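
A rough modern analogue on Linux guests is the vsock socket family, which is hypervisor-mediated and needs no TCP/IP stack or IP address in the guest. A minimal guest-side service sketch (the port number is arbitrary; assumes Linux and Python 3.7+):

    import socket

    # AF_VSOCK addresses are (context ID, port) pairs handed out by the
    # hypervisor: no NIC, no IP address, much like the PIPE/IPC model above.
    PORT = 5000

    srv = socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM)
    srv.bind((socket.VMADDR_CID_ANY, PORT))
    srv.listen(1)

    conn, (peer_cid, peer_port) = srv.accept()
    print("request from the VM with CID", peer_cid)
    conn.sendall(b"hello from the service VM\n")
    conn.close()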

Since the SVM itself does not run a full suite of services, maintenance and administration are done via meta-administration; in other words, you maintain the SVM service from outside the SVM itself. There is no need to "log into" the SVM to make changes. Instead of the SVM providing its own syslog facility, a common syslog facility is shared among all the SVMs. Instead of each SVM doing its own paging, you simply define the virtual machine size to meet the storage requirements of the application and let the hypervisor manage real storage and paging.

Maybe a good analogy would be taking a Linux kernel and implementing a service by using the init= kernel parameter to invoke a simple setup (mounting the disks) and run just the code needed to perform the service. Communication with other services would be provided via hypervisor PIPEs between the different SVMs. So one would have a TCP/IP SVM that provides the TCP/IP network stack to the outside world, and a web server SVM that provides just the HTTP protocol and a base set of applications, using a hypervisor PIPE to talk to the TCP/IP stack. The web server SVM would in turn use hypervisor PIPEs to talk to the individual application SVMs.
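
A toy sketch of what such an init= payload might look like, with every path and service name invented for illustration:

    #!/usr/bin/env python3
    # Minimal init for a single-purpose guest: boot with init=/srv/init.py
    # and the kernel hands PID 1 straight to the service; no getty, no sshd.
    import os

    # Mount the bare minimum the service needs.
    os.system("mount -t proc proc /proc")
    os.system("mount -t tmpfs tmpfs /tmp")
    os.system("mount /dev/vda2 /srv/data")  # hypothetical data disk

    # Replace this process with the one service the VM exists to run; if it
    # ever exits, PID 1 dies, the kernel panics, and the hypervisor can
    # restart the whole VM.
    os.execv("/srv/bin/httpd", ["/srv/bin/httpd", "--foreground"])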

Oh cool (0)

Anonymous Coward | more than 6 years ago | (#19969123)

So the best way to abstract the application's interface to the machine is to put a costly virtual machine around it? That's awesome. I always thought that user-space options like Java, the CLR, etc. did a pretty good job of that at very low cost, but I guess it just makes a lot more sense to duplicate the ENTIRE OPERATING SYSTEM for each VM. 32 megs to run the Java VM was just too little memory; you need to boot up a heavyweight OS and consume 100 megs or so FOR EACH VM. That makes a whole lot of sense.

CPU virtualization is an interesting topic, but the people implementing it are really stretching to find reasons for it, because there are very few good ones. It solves a lot of problems that aren't really a high priority to solve, and it does so at a premium.

Smaller OS/Specific OS? (1)

greedyturtle (968401) | more than 6 years ago | (#19969291)

When the buzzword of the day has been 'cross-platform', using virtual machines to encapsulate an application within its own OS throws the whole convenience of cross-platform apps out with the bathwater. So will this give rise to the tailored OS, packed up alongside the application? I guess it would make things a whole lot easier on devs if they don't have to bother testing in anything more than one exact environment. (And I do mean exact: installing another unsupported app within the tailored OS breaks your EULA and support contract.) I suppose the snake would eventually eat its own tail, with a base operating system that launches the child OS when you run the application and gives it a seamless window interface. The real question is how far this will go, and how many cores you'll need just to run a desktop PC...

protecting their market share (1)

Mike_ya (911105) | more than 6 years ago | (#19969525)

The obvious explanation for Barrett's investment (which will net Intel a measly 2.5% of VMware's shares after the forthcoming IPO) is that Intel believes virtualization will cause people to buy more, not less, hardware.

No, the obvious explanation is wrong. The percentage of hardware bought to run virtual servers will continue to increase, and Intel is protecting, and trying to expand, its market share: 'Here, look at us, we make virtualization better.'

Companies will buy fewer servers than they would if virtualization did not exist. I know we will.

A better word (1)

sacrilicious (316896) | more than 6 years ago | (#19969821)

It's all about 'containerization,' to employ a really ugly but useful word

How about just "containment"? That way, rampant verbification won't overrunerrize things.
