
Linus Thinks Virtualization Is 'Evil'

Soulskill posted more than 2 years ago | from the know-who-else-used-virtualization? dept.

Virtualization

Front page first-timer crdotson writes "Linus said in an interview that he thinks virtualization is 'evil' because he prefers to deal with the real hardware. Hardware virtualization allows for better barriers between systems by running multiple OSes on the same hardware, but OS-level virtualization allows similar barriers without a hypervisor between the kernel and the hardware. Should we expect more focus on OS-level virtualization such as Linux-VServer, OpenVZ, and LXC?"

330 comments

Some might argue (2, Insightful)

Anonymous Coward | more than 2 years ago | (#37147058)

That your OS being tied to a particular piece of hardware without a ton of effort is also "evil." Migration is one of the best things ever.

Re:Some might argue (1)

h4rr4r (612664) | more than 2 years ago | (#37147076)

Sounds like your OS needs fixed. I have migrated linux boxes with dd.
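For context, an offline dd migration looks roughly like this (device names are hypothetical, and dd will happily overwrite the wrong disk, so verify them first):

```shell
# Offline clone of an entire disk (hypothetical device names --
# check with fdisk -l before running; dd overwrites without asking):
#   dd if=/dev/sda of=/dev/sdb bs=4M conv=noerror,sync
# The same invocation works on plain image files, which makes a safe demo:
dd if=/dev/urandom of=old.img bs=1M count=4   # stand-in for the source disk
dd if=old.img of=new.img bs=4M conv=noerror,sync
cmp old.img new.img && echo "clone matches source"
```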

Re:Some might argue (1)

FaxeTheCat (1394763) | more than 2 years ago | (#37147248)

While they are running?

Re:Some might argue (1)

kwark (512736) | more than 2 years ago | (#37147366)

If it's running just use tar/rsync.
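A minimal sketch of that approach (target host and mount point are hypothetical; the exact rsync flag set varies by setup):

```shell
# Copy a running root over the network, staying on one filesystem
# (hypothetical target host/mount; a second rsync pass just before
# cutover shrinks the final delta):
#   tar --one-file-system -cpf - / | ssh target 'tar -xpf - -C /mnt/newroot'
#   rsync -aHx --delete / target:/mnt/newroot/
# The same tar pipe works locally, which makes an easy demo:
mkdir -p src/etc dst
echo "hostname=web1" > src/etc/hostname
tar -C src -cpf - . | tar -xpf - -C dst
cmp src/etc/hostname dst/etc/hostname && echo "copy matches"
```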

Re:Some might argue (2)

vux984 (928602) | more than 2 years ago | (#37147412)

I think you missed the point.

You can migrate a server in virtualization to new host hardware without shutting it down.

This is not the same as "copy, shutdown the old, bring up the new". This is live migration without a shutdown.

Re:Some might argue (2)

teftin (948153) | more than 2 years ago | (#37147626)

Which smells like a badly designed service. Whether it is running should not be tied to any specific hardware/VM/OS.

Re:Some might argue (3, Insightful)

vux984 (928602) | more than 2 years ago | (#37147968)

Which smells like a badly designed service. Whether it is running should not be tied to any specific hardware/VM/OS.

Ah yes... the old "all software ever written is just badly designed" argument. We just need to fix all software ever written, and then we won't need OS virtualization.

Re:Some might argue (2)

MightyMartian (840721) | more than 2 years ago | (#37147252)

As have I, and I ended up going through a real pain in the ass with udev. Not as bad as migrating Windows, of course; at least Linux almost always boots (in the old days when I used Slackware, I used to compile a braindead generic kernel that would boot on pretty much anything and add it to the LILO menu). But still, depending on the hardware, it can be a hassle.

With virtualization, as long as I stick to a particular virtualization system (generally KVM), the guest OS doesn't know the difference. It sees the same virtualized hardware and life is good. That's what I want: to be able to move OSes, regardless of how easy or difficult it may be to migrate any particular OS, without the hassle.

Re:Some might argue (1)

b0bby (201198) | more than 2 years ago | (#37147272)

Sounds like your OS needs fixed. I have migrated linux boxes with dd.

But did you do that while there were live users on the system?

Re:Some might argue (0)

Anonymous Coward | more than 2 years ago | (#37147322)

Yep.

Re:Some might argue (1)

Anonymous Coward | more than 2 years ago | (#37147282)

You don't want to dd every system on a box just because you have to reboot. We just had a hardware error that came up on one of our ESX hosts and we needed to reboot to clear it. It took just one click to enter Maintenance Mode and then another to power down the host. All of the guests migrated without any other intervention. Powered the system back online, exited Maintenance Mode, and all the guests flocked back to the host. Very simple and elegant and no users were impacted when the host was restarted.

Re:Some might argue (1)

turbidostato (878842) | more than 2 years ago | (#37147398)

"Sounds like your OS needs fixed. I have migrated linux boxes with dd."

In 2 to 10 seconds? Live? Without people even losing their current open sessions?

I really don't think so.

Re:Some might argue (0)

Anonymous Coward | more than 2 years ago | (#37147454)

I think he means that he used dd to simply copy a disk to another computer and it worked without drivers or authentication from microsoft......

You're talking about live migration....He reeks of a 'kiddie and doesn't understand you at all.

Re:Some might argue (0)

h4rr4r (612664) | more than 2 years ago | (#37147522)

A kiddie? I haven't been that in probably longer than this AC has been alive. To me, migration means moving to other hardware permanently. VMotion is VMotion; I use that all the time. That still doesn't mean local hardware and dd don't have their place.

Re:Some might argue (1)

micsaund (12591) | more than 2 years ago | (#37147428)

He was 99.99% likely referring to migrating a live VM to another host, not imaging your desktop to another hard drive. In the enterprise, downtime is often not well tolerated (not something I agree 100% but whatever) and live migration (aka VMware VMotion for example) enables you to take a virtualization host and vacate all of the *still running* VMs off to another host when you need to do maintenance or whatever else to the physical hardware. In the case of VMotion, there is a loss of about one ping on the network while the VM execution is cut-over to the new host, but otherwise, all network connections/etc. remain persistent.

It's very handy stuff, but not something someone who's only worked with home machines or individually installed boxes may have encountered. It's definitely not just using 'dd'.

Re:Some might argue (1)

h4rr4r (612664) | more than 2 years ago | (#37147460)

Actually I run a vmware setup at work.

Re:Some might argue (0)

Anonymous Coward | more than 2 years ago | (#37147634)

well if you run a vmware setup at work and migrate boxes with dd, i'd like to talk to your manager.

Re:Some might argue (0)

h4rr4r (612664) | more than 2 years ago | (#37147688)

Not those boxes, you tard.

Some things, like workloads that are I/O limited, belong on real hardware.

Re:Some might argue (0)

Anonymous Coward | more than 2 years ago | (#37148084)

Wise man say: "when you find yourself in hole, stop digging."

Re:Some might argue (1)

bigredradio (631970) | more than 2 years ago | (#37147518)

The biggest problem with that is trying to migrate to dissimilar hardware. Changes to the storage adapter or disk size (smaller) make for real problems.

<shameless plug>Storix [storix.com] is good for migrating to different hardware.</shameless plug>

Re:Some might argue (1)

alphatel (1450715) | more than 2 years ago | (#37147140)

Universe: Replace hardware, rebuild server + much grunting
Multiverse: Replace Hardware, go to sleep.

Re:Some might argue (2)

sjames (1099) | more than 2 years ago | (#37147772)

Application level migration would be even cooler. Why should migratable resources be arbitrarily glued together into "system images"?

Linus is right (2, Insightful)

mfh (56) | more than 2 years ago | (#37147064)

The shift towards virtualization represents a further shift in control away from each person towards a reliance on the honesty of others.

Re:Linus is right (0)

Anonymous Coward | more than 2 years ago | (#37147130)

If the layer below is as transparent as the higher layers, how is that the case? Virtualization is just a layer of abstraction.

No. (3, Informative)

MrEricSir (398214) | more than 2 years ago | (#37147132)

Cloud computing != virtualization

Re:No. (0)

Anonymous Coward | more than 2 years ago | (#37147816)

Duh, everybody knows that the Cloud computing == (the World Wide Web + shameless marketing and pricing schemes)

Re:Linus is right (2)

0racle (667029) | more than 2 years ago | (#37147170)

If I run Linux as a host and FreeBSD and Windows in kvm or VirtualBox, to whom have I given up my control?

Re:Linus is right (1)

arth1 (260657) | more than 2 years ago | (#37147354)

In the case of VirtualBox, that would be to Oracle.

Can you say with certainty that they will support Windows 9 guests or Linux 3.4, or have you given up the control over upgrades to them?

Re:Linus is right (2)

Baloroth (2370816) | more than 2 years ago | (#37147468)

VirtualBox is GPL (extensions aren't, but those aren't needed for core functionality), so really not much control is handed over at all (if Oracle refuses to offer support, the project can be forked.)

Re:Linus is right (1)

0racle (667029) | more than 2 years ago | (#37147648)

Can you say for a certainty that your current hardware will support Windows 9 or Linux 3.4? Certainly now. Are you CERTAIN unreleased future OS's will support your current hardware?

No you aren't, you can't be. Oh they probably will, but then again, VirtualBox and kvm will probably support the next few releases of the major OS's as well. So again, what have I given up, other than 2 keyboards and at least one mouse of course? There are many tools out there to convert virtual disks from one format to another so moving data to another VM package (if you just had to keep it local to that VM for some reason) is easy, and much cheaper than buying adapters for hardware interfaces.

Re:Linus is right (2)

arth1 (260657) | more than 2 years ago | (#37147866)

Can you say for a certainty that your current hardware will support Windows 9 or Linux 3.4? Certainly now. Are you CERTAIN unreleased future OS's will support your current hardware?

No, but I am certain that I can buy hardware that is compatible.
With a VM, there are no such guarantees, because there's a lack of purveyors of virtual hardware. It's pretty much only the virtual machine vendors, so yeah, you have to trust them to upgrade whenever necessary.

There are many tools out there to convert virtual disks from one format to another so moving data to another VM package (if you just had to keep it local to that VM for some reason) is easy, and much cheaper than buying adapters for hardware interfaces.

Switching to another VM isn't always an option either. So far, none of the VM vendors have provided drivers compatible with Gnome Shell, for example. (Not that gnome shell is essential, but it proves the point by example, I should think.)

Re:Linus is right (1)

Jeremi (14640) | more than 2 years ago | (#37147434)

If I run Linux as a host and FreeBSD and Windows in kvm or VirtualBox, to whom have I given up my control?

The Chinese government, of course.

(ducks)

Re:Linus is right (3, Insightful)

MightyMartian (840721) | more than 2 years ago | (#37147270)

WTF? I've built three production Linux KVM servers now. Other than relying on the KVM team (backed mainly by Redhat), I'm not relying on anybody else. And if Redhat is a problem for you, then you've got bigger issues than the KVM virtualization modules in the kernel.

Re:Linus is right (2)

Anrego (830717) | more than 2 years ago | (#37147302)

If you are talking virtualization in it's relationship to cloud computing, then I agree.

Virtualization on your own hardware though... not much difference from the current state. It's just another (sometimes open source) piece of software we learn to trust (along with all the other software we use).

Re:Linus is right (0)

Anrego (830717) | more than 2 years ago | (#37147320)

* its

Please forgive, it's Friday :(

Re:Linus is right (1)

fnj (64210) | more than 2 years ago | (#37147740)

Forgiveness is the fragrance that the violet sheds on the heel that has crushed it.
- Mark Twain

You are forgiven. Now pick me and put me in some water.

Re:Linus is right (1)

jellomizer (103300) | more than 2 years ago | (#37147686)

I am not following you.

Virtualization is just taking your own hardware, sharing some resources and partitioning others, so you can run multiple OSes on it...

It is not much more of a jump than putting your faith in the OS that manages your hardware and gives you an interface to it. You can think of virtualization as an OS for your OS.

The main value in virtualization comes from an old computer-science principle: keep your hardware running at high utilization doing something productive. Over time, people started running separate servers at 5% utilization each, because they needed their apps to run separately from other apps, or groups of the same app with different configurations. So now we virtualize on the same hardware to take that 5% CPU utilization to, say, 50% running 5 VMs. Sure, there is some bloat and it isn't perfectly efficient, but it is more efficient than having 5 servers running at 5% CPU utilization wasting power.

Then you have the more enterprise-ready features of virtualization: if your VM is on SAN storage, you can swap hardware in real time, or move from one server to another easily, without having to reconfigure hardware settings for the new machine.
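A quick back-of-the-envelope check of the utilization argument above (the 5%/50% figures are the comment's own illustration, not measurements):

```shell
# Consolidation math: combined raw load of the separate servers,
# as a fraction of one host's CPU
hosts=5; util=5                          # five hosts, each ~5% busy
echo "combined load: $(( hosts * util ))% of one host"
# with virtualization overhead and headroom, one host at ~50%
# replaces all five, saving the idle power draw of four machines
```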

I hope one day Linus or RMS states they like the taste of poo, just to see how many nuts out there will just blindly go along and do it because these guys have an opinion and because they have done or are doing great things they must be always right.

Re:Linus is right (1)

element-o.p. (939033) | more than 2 years ago | (#37147952)

How do you figure? Maybe I'm just not the sharpest tool in the shed (nor have I ever claimed to be) but I really don't see much difference between trusting Intel/AMD/Motorola/etc. to be honest in what they put in the chipset and in trusting Citrix (Xen)/VMWare/KVM/whatever in what they put in the software. If anything, I probably have a better chance of detecting goofiness in FOSS software than I have in a chipset (not that I'm likely to detect anything in either one, but at least I *can* look at the source in a FOSS project if I want).

Re:Linus is right (1)

dissy (172727) | more than 2 years ago | (#37148060)

The shift towards virtualization represents a further shift in control away from each person towards a reliance on the honesty of others.

And who exactly are you to tell me what I can and can not do with my own hardware?

If I choose to run zero, one, or a hundred OSs on my virtualized hypervisor on the hardware I purchased, what right do you have to say it is a bad thing?

If anything, you dictating that I shouldn't virtualize my own hardware because it is evil, is what would be taking control away from me. Virtualization gives me control. MUCH more control than the hardware itself can.

And what are you going on about relying on the honesty of others?
How does other peoples honesty even come into play when I choose to use my hardware in one way versus another? Other people aren't even involved, let alone their honesty!

Screws are evil (5, Insightful)

Anonymous Coward | more than 2 years ago | (#37147078)

Because I'm used to working with a hammer.

Linus is not a god, just a guy, with his own prejudices.

Re:Screws are evil (1)

Anonymous Coward | more than 2 years ago | (#37147330)

Still, he's right. Probably not the best person to say it, though.

Why can't I suspend an entire app, TCP stack state, files, and all?
Why can't I, in that state, dump it, then restore it?
Why can't I migrate said dump, including the IP address (that I would have bound to the app instead of the machine), to another server? Then unsuspend it.
Why can't I do that seamlessly?

Virtualization is an ugly band-aid. But it's a band-aid for real shortcomings in current OSes.
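For what it's worth, the wish-list above maps closely onto what later became process checkpoint/restore tooling, which appeared after this discussion. A hedged sketch using CRIU (the PID and image directory are hypothetical; this needs root and the criu tool installed):

```shell
# Checkpoint a running process tree, including established TCP
# connections, then restore it elsewhere (hypothetical PID/paths):
criu dump -t 4321 -D /tmp/ckpt --tcp-established --shell-job
# ... copy /tmp/ckpt to the target host, and move the bound IP
#     via your network layer ...
criu restore -D /tmp/ckpt --tcp-established --shell-job
```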

Re:Screws are evil (1)

vux984 (928602) | more than 2 years ago | (#37147508)

Virtualization is an ugly band-aid. But it's a band-aid for real shortcomings in current OSes.

And solving those shortcomings would lead you to separating the OS into two layers akin to a hypervisor and OS... and you'd be right back where you started.

And someone would still need to run a *nix server alongside a Windows one, or find they need to run two different distros or two different kernel versions... and would still need traditional virtualization to do it.

Re:Screws are evil (1)

VGPowerlord (621254) | more than 2 years ago | (#37147544)

Why can't I suspend an entire app, TCP stack state, files, and all?
Why can't I, in that state, dump it, then restore it?
Why can't I migrate said dump, including the IP address (that I would have bound to the app instead of the machine), to another server? Then unsuspend it.
Why can't I do that seamlessly?

Because networking is bound by factors external to the app, not the least of which that IPs are machine bound, not app bound. Or did you forget that a router somewhere needs to know where that IP is routed so it can route packets to you?

Re:Screws are evil (0)

Anonymous Coward | more than 2 years ago | (#37147580)

wooosh much?

Virtualization is a *tool* to let you do a couple of things.

1) Hardware consolidation for persnickety apps, and scaling from low load to high.
2) Jacking up an install and putting it back to a 'good' state fairly quickly.

The first one lets people run '5' boxes in 1.
The second lets you fix things quickly without having to worry about that crap uninstall program someone didn't bother to test.

That's it. Your example is a shortcoming of the virtualization implementation...

Virtualization is expensive in compute resources (1, Interesting)

drolli (522659) | more than 2 years ago | (#37147080)

but it's cheap in human resources, since it is the ultimate reuse of code.

Re:Virtualization is expensive in compute ressourc (0)

Anonymous Coward | more than 2 years ago | (#37147152)

You mean ultimate copy paste pattern...

Which is why you end up applying the same patches N times as opposed to one (or two in the case of HA)...

Good for beginners (2, Interesting)

Anonymous Coward | more than 2 years ago | (#37147092)

Virtualization is good for new junior programmers learning how to program firmware, since any low-level calls cannot really destroy the real hardware; protection can be built right in.

It's a crutch, but since we have a generation of programmers who can't do "the hard stuff" because "Java does it for them", it's certainly good to have around.

"Hardware virtualization"? (1)

smoothnorman (1670542) | more than 2 years ago | (#37147100)

Isn't Hardware _realization_? and/or if the hardware is virtualized then isn't it done with software? not "real hardware"? ...ok, i admit it, i'm lost. someone smarter than me, -you there-, some examples please. (which need not necessarily involve automobiles)

Re:"Hardware virtualization"? (1)

Coren22 (1625475) | more than 2 years ago | (#37147606)

You go out and buy a Dell blade cluster: 16 identical blades, each with 2 sockets of 6 cores and 24 GB of RAM. You hook the cluster to an EqualLogic array for storage, install ESX on 15 blades and vCenter on 1, and now you can install whatever servers you need, entirely fault tolerant.

Oops, there is smoke coming from one of the ESX systems, and it seems to be unresponsive; vCenter detects the failure and moves the virtual machines without downtime. I don't know what crack Linus is smoking, but Linux cannot do this with very many services.

hypervisors are a necessary evil (1)

arkhan_jg (618674) | more than 2 years ago | (#37147110)

VMware makes a heterogeneous environment far, far easier to deal with -- we have some 70-odd servers running on 5 physical servers. It makes it much easier to single-task a given server/VM, to spread the load without investing far more in server hardware, and to keep backup/redundant servers so that patching and upgrading take much less effort.

In-OS virtualization is great if you only need a single OS to do everything; but if you have heterogeneous servers handling different tasks for different clients -- i.e. AD/Exchange for the Windows users, Linux for the webservers and network infrastructure, etc. -- then hypervisors are frankly essential for sysadmin these days.

Re:hypervisors are a necessary evil (1)

h4rr4r (612664) | more than 2 years ago | (#37147142)

Not at all. Put all the linux boxes in openvz or something like it and windows servers can go into their hypervisor until MS smartens up.

Then you just need the linux boxes all on one set of hardware and windows on another. Vmware has other attractions, this is not really one of them.

Re:hypervisors are a necessary evil (2)

BagOBones (574735) | more than 2 years ago | (#37147378)

Except now you need a Linux virtualization admin and a Windows virtualization admin. You have just doubled the number of platform-specific gotchas you need to learn; plus, unlike having a single VMware cluster, you now have two clusters, probably increasing your hardware cost per VM by needing both platforms to maintain a proper spare-resource overhead to handle failures.

Re:hypervisors are a necessary evil (0)

h4rr4r (612664) | more than 2 years ago | (#37147490)

Good point.

Any linux admin can probably be taught the "Monkey see, Monkey click" windows stuff though.

Re:hypervisors are a necessary evil (0)

Anonymous Coward | more than 2 years ago | (#37148066)

Which is why my workload is always double that of someone who does only Linux or only Windows. I can work with both. I get paid approximately the same amount of money with more work and more responsibility.

Re:hypervisors are a necessary evil (0)

Anonymous Coward | more than 2 years ago | (#37147588)

Try working in a large company these days.

Every god damned thing has to be a VM, even IO intensive applications such as network discovery tools that ping/ssh/wmi/etc 80,000 servers nonstop. Tell me how good that runs on VMs..... The company I work for is just retarded about this. VMs fail and trip up in all sorts of odd ways. We can't actually snapshot or use any VM features, it's purely for *their* use even though I'm a lead developer and understand the system better than the low-on-the-totem-pole admins.

Please stop giving large companies tools like this. They are far too incompetent to handle them properly. I've manually staged the same identical installation by hand over 40 times when I could have done one and made an image. Their policy is asinine..... Oh and I make six figures as a specialist who now wastes 70% of my time doing basic repeatable tasks on their VMs that I could automate in one day. Oh and the first digit of my salary is not a 1 either. Talk about wasteful.......

I agree (1)

MpVpRb (1423381) | more than 2 years ago | (#37147122)

It's hard enough to get stuff to run reliably when you are dealing with real hardware.

Adding another layer just increases the number of dark corners where bugs can hide.

Re:I agree (1)

Anonymous Coward | more than 2 years ago | (#37147224)

It's hard enough to get stuff to run reliably when you are dealing with real hardware.

Adding another layer just increases the number of dark corners where bugs can hide.

Conversely, a virtualization environment presents a somewhat "neutral" hardware profile to installed OSs. This makes it useful for installing legacy software on new hardware.

Re:I agree (2)

0123456 (636235) | more than 2 years ago | (#37147310)

Conversely, a virtualization environment presents a somewhat "neutral" hardware profile to installed OSs. This makes it useful for installing legacy software on new hardware.

And adds a new load of bugs in the process.

When I was writing PC emulators years ago there were a lot of obscure bugs in the emulated applications when the fake hardware we gave it didn't quite work the same way the real hardware did and there was no way to emulate it precisely.

For example, suppose you have Linux running with an ext4 filesystem that's emulated by a disk file on a real Linux system using an ext4 filesystem on RAID. Do filesystem barriers work?

Re:I agree (0)

Anonymous Coward | more than 2 years ago | (#37147492)

A worse example for me is the number of open-source operating systems being developed on qemu before ever touching bare hardware or a real BIOS. These systems are listed as supporting anything back to i586/i686 processors, but having never run on 'real' hardware that old, they have no workarounds for the gotchas of real errata/BIOS from that era, and thus will often hang or crash on real hardware from that era. The result is that I am unable to use open-source operating systems besides *BSD/Linux, because the software doesn't support the hardware it claims to (even if a real board uses the same chipset qemu emulates, it may not 'in reality' have been set up the same way).

Re:I agree (1)

sarhjinian (94086) | more than 2 years ago | (#37147562)

Yes, it works. Never mind the filesystem, you'd use a real hypervisor on real hardware with real operating systems designed for real virtualization, with real drivers that all work really, really well.

There's a big, big difference between, say, Xen, ESX or Hyper-V running your enterprise apps, and, say, getting Leisure Suit Larry running on Bochs. It's really, really nice to be able to move a running Linux guest with a few hundred users over to another server without a hiccup.

Heck, even at the workstation level it all works very well. I'm sure that crazy old DOS stuff that bangs quirks of a vintage AdLib won't work, but modern operating systems handle virtualization well. Heck, they're designed for it nowadays.

Yes, you can do this sort of thing on a "proper" OS and with applications that can be safely jailed and that support clustering, but sometimes, in the real world where apps and operating systems and people suck, it's easier, quicker and cheaper to use VMware or whatever.

Re:I agree (1)

0123456 (636235) | more than 2 years ago | (#37147738)

Yes, it works. Never mind the filesystem, you'd use a real hypervisor on real hardware with real operating systems designed for real virtualization, with real drivers that all work really, really well.

In other words, if you don't go to a great deal of trouble to ensure that you're running the right software on the right hardware, you risk corrupting your virtualized disk: the OS in the virtualized server thinks that filesystem barriers work, but the kernel it's running on is old enough that LVM doesn't support filesystem barriers, or the real ext4 filesystem is mounted with barriers disabled, so it doesn't even try to use them.

I disagree. (2)

g00mbasv (2424710) | more than 2 years ago | (#37147234)

I disagree. It's a layer that, *when properly done*, reduces complexity: the underlying hardware is totally masked, and you have to deal only with known virtual hardware.

Re:I disagree. (1)

afidel (530433) | more than 2 years ago | (#37147642)

Bingo. When we upgraded from Nehalem to Westmere CPUs, we did it without interrupting any running VM: we simply set the EVC mode for the cluster to Xeon Core i7, and we could move VMs back and forth between hosts with either generation of processor. We also move running VMs between storage arrays, and even between local and SAN storage. We don't have to mess with storage drivers, network drivers, multipathing software, or any of that junk at the VM level, so the VMs are set-it-and-forget-it, which all but eliminates QA time for moving to new hardware (we just have to verify the host is good).

Re:I agree (1)

MightyMartian (840721) | more than 2 years ago | (#37147338)

That's not been my experience. In many respects virtualization simplifies things. Unless you're dealing with paravirtualization, you're generally dealing with a much more "dumbed-down" environment: some limited number of emulated NICs, video cards, mass storage controllers, bus controllers and the like, almost inevitably modeled on widely-understood hardware. Yes, there could be bugs lying in wait within the virtualized drivers, but then again, it's not like real hardware has any lack of onboard controller bugs and driver bugs.

To put this another way: I've been running the same Server 2003 and Linux servers under the same KVM host server for two years now, and I have had no more downtime than when I had them on real hardware.

Re:I agree (1)

BagOBones (574735) | more than 2 years ago | (#37147418)

Hardware failures in our VMware environment tend to go almost completely unnoticed by server admins and end users. We have had network cards fail, host power loss, and RAM failures; VMware brings the VMs back online so fast that there is rarely an alarm on the service, just something we need to fix on the back end.

This goes double for storage, where we regularly shift VMs between SANs from different manufacturers using Storage VMotion, live.

wrong, OS level Implementation is the problem (5, Insightful)

g00mbasv (2424710) | more than 2 years ago | (#37147128)

The title is a bit on the FUD side. Linus is not criticizing PROPER virtualization, but improper implementation, namely cheap OS-level virtualization, which can lead to lazy shortcuts in implementing patches and features.

Evil is on the other foot (2)

Anonymous Coward | more than 2 years ago | (#37147204)

It's actually evil NOT to virtualize, because you waste electricity! Each physical server running a single OS requires additional power, plus the air-conditioning costs for all those servers. That means you're polluting the planet more by not virtualizing!

40+ years of experience (4, Insightful)

stox (131684) | more than 2 years ago | (#37147214)

If you want to see where virtualization is going, check out where VM370 was in 1977 or so. That is about as far as the current virtualization technology has gotten. Bare metal has its place, as does virtualization.

Re:40+ years of experience (0)

Anonymous Coward | more than 2 years ago | (#37147486)

If you want to see where virtualization is going, check out where VM370 was in 1977 or so. That is about as far as the current virtualization technology has gotten. Bare metal has its place, as does virtualization.

um
LPARs on Power
PR/SM system Z

z/VM is much different and still has many applications... z/linux anyone?

It's mostly true (3, Interesting)

Mad Merlin (837387) | more than 2 years ago | (#37147220)

Linus has never been diplomatic, but it's mostly true. A huge amount of virtualization done today involves the same host and guest OS, and in most of those cases, using something slimmer than full blown virtualization would make a whole lot more sense, even if only for the improved performance. One of the problems is familiarity, container type isolation isn't applicable to as many cases, so fewer people are familiar with it. One of the other problems is the perception that full virtualization is more secure (which is probably untrue).

There is however, a large swath of problems that aren't solved well by container type isolation that virtualization does solve well. If you need to simulate different physical systems (with separate IP addresses), that's much easier with virtualization. Likewise if you need very different guest and host OSes, that's not a strong point of container type isolation. Also, if your guest OS is sensitive to hardware changes, virtualization makes a lot of sense. There's more, but you get the idea.

Re:It's mostly true (0)

Anonymous Coward | more than 2 years ago | (#37147698)

I use virtualization for trying out operating systems that flat-out don't support my hardware. I don't feel like learning how to write graphics drivers and a wifi stack for every hobbyist OS I run into, so KVM is a blessing.

Get better informed (1)

The Famous Brett Wat (12688) | more than 2 years ago | (#37148008)

OpenVZ and Linux-VServer support separate IP addresses as very basic functionality. How do you suppose hosting providers create virtual private servers based on them if they don't? OpenVZ also supports private iptables per container, so that you can set up per-container firewalls. The main problem with containers is the staggering amount of ignorance about the subject.
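For the curious, the separate-IP part is a couple of vzctl one-liners. A hedged sketch only: the container ID, template name, and IP below are made up, and flag spellings vary between vzctl versions.

```shell
# Create a container from an OS template (template name is illustrative)
vzctl create 101 --ostemplate centos-6-x86_64

# Give the container its own IP address -- the basic functionality noted above
vzctl set 101 --ipadd 10.0.0.101 --save

# Start it and drop into its userland
vzctl start 101
vzctl enter 101

# Per-container firewall: run iptables *inside* the container,
# provided the host has granted it the needed iptables modules
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
```

These are administrative commands that need an OpenVZ kernel on the host, so treat them as a configuration sketch rather than something to paste blindly.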

You know what else is evil? (2, Insightful)

Anonymous Coward | more than 2 years ago | (#37147244)

Having to reboot to play video games.

Re:You know what else is evil? (1)

tepples (727027) | more than 2 years ago | (#37147430)

@Anonymous Coward
You don't need to reboot to play video games. You just need a Kazzo or a Retrode so that you can copy your 8- and 16-bit game cartridges into the computer and run them on an emulator.

What about multi-processing and virtual memory? (1)

Anonymous Coward | more than 2 years ago | (#37147264)

I had similar misgivings about virtualization until I realized it is simply the next step after true pre-emptive multiprocessing, where each process already gets its own view of virtual memory.
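That analogy is easy to demonstrate: under ordinary virtual memory, a fork()ed child already has a private view of every page (copy-on-write), which is the same isolation idea that virtualization extends to whole systems. A minimal sketch, Unix-only:

```python
import os

counter = 100  # lives in this process's private address space

r, w = os.pipe()
pid = os.fork()
if pid == 0:
    # Child: same virtual addresses, but copy-on-write means our
    # write never reaches the parent's copy of the page.
    counter += 1
    os.write(w, str(counter).encode())
    os._exit(0)

os.waitpid(pid, 0)
child_value = int(os.read(r, 16))
print(f"child saw {child_value}, parent still sees {counter}")
```

This prints `child saw 101, parent still sees 100`: each process has its own view of memory, exactly as each guest has its own view of the machine.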

FreeBSD vps "hot migration" (4, Interesting)

Anonymous Coward | more than 2 years ago | (#37147298)

For those of you that look at FreeBSD jails, Linux OpenVZ, etc etc and say "but I want to migrate between servers!!!" there is an example of this being a possibility.

http://www.7he.at/freebsd/vps/

This guy did it with FreeBSD, but the real problem is that he needs funding to continue polishing it before it can ever be implemented into a FreeBSD release. I wish more people knew about this as we'd love to have it at work.

Idiotic, that's what OS's do (3, Interesting)

BlueCoder (223005) | more than 2 years ago | (#37147316)

The whole point of a modern OS is to virtualize the hardware so that each software application can play nice with each other.

The hypervisor is the new ring 0, and it's going to evolve into a microkernel with user-mode drivers. It's the new operating system, and that's what he should be working on if he likes hardware bits. The "Operating Systems" of old are evolving into plug-in Operating Environments. It's the future, the revolution; get over it.

Re:Idiotic, that's what OS's do (1)

JamesTRexx (675890) | more than 2 years ago | (#37147558)

That's what I'm waiting for: one core OS at boot (a specialized BIOS?) on top of which I can install several other OSes and switch between them with a keyboard shortcut.
It would require a layer on top of the graphics hardware first, though.

Re:Idiotic, that's what OS's do (0)

evilviper (135110) | more than 2 years ago | (#37147674)

You're way off. Yes, the kernel of any OS is basically a hypervisor, but no, there's NO REASON for that to change. VMware's approach was never a good one; it was strictly useful because of brain-dead OSes out there which aren't stable and don't allow you to have your own entire OS running at user level, while any Unix system can do exactly that. Paravirtualization only takes this a step further, so you have your own entirely separate kernel as well, but again, except in a few narrow circumstances, this isn't needed. One kernel can run as many separate userlands as you like, with entirely different programs, libs, etc.

And Paravirtualization isn't the only thing blurring the line of virtualization. Try KVM... The kernel is the hypervisor again.

Frankly, for all Unix virtualization, we need a library that makes people THINK they're running as root within their userland on a multiuser system. Call it para-paravirtualization...
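That "pretend you're root" library more or less exists today as Linux user namespaces, the mechanism tools like LXC and unshare(1) build on. A sketch, assuming a kernel with unprivileged user namespaces enabled:

```shell
# Map the current unprivileged user to uid 0 inside a new user namespace
unshare --user --map-root-user sh -c 'id -u'
# Prints 0: "root" within this userland, still unprivileged on the host
```

Whether this works depends on the kernel and distro policy (some disable unprivileged user namespaces), so take it as an illustration of the idea rather than a guaranteed recipe.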

Re:Idiotic, that's what OS's do (4, Insightful)

Jon Stone (1961380) | more than 2 years ago | (#37147820)

Virtualisation is, in many ways, trying to do what the OS should already be doing: isolating processes (through protected memory), providing an abstraction layer for the hardware (through drivers), and allocating resources (through the CPU/IO schedulers).

Unfortunately, a certain OS has historically been so bad at doing this that people turn to virtualisation, and you end up with a form of inner-platform effect [wikipedia.org]. We have Linux implementing the virtio drivers to interface with the hypervisor, which implements real drivers to talk to the real hardware. We have the guest's scheduler trying to manage "virtual CPUs" without any real information about what resources are actually available. We have hypervisors trying [vmware.com] to re-implement copy-on-write [wikipedia.org] for memory pages, which the OS already does out of the box.

Virtualisation is used as a "one size fits all" sledgehammer, often where it isn't the appropriate solution.

Isn't that the difference between KVM and XEN? (1)

Britz (170620) | more than 2 years ago | (#37148024)

I thought that was exactly the difference between Xen and KVM. KVM uses the Linux kernel as ring 0, whereas Xen creates its own 'sort of' kernel as ring 0.

And I don't think that approach is the best, because Linux and Unix still outperform any other approach by a long shot and have a lot of stability. So I prefer OpenVZ, Linux-Vserver.org and, since it is now the officially sanctioned solution, LXC. On the server side everything is Linux anyway. So why should I virtualize hardware, when I can use the perfectly good Linux kernel, which is very fast and very stable, and just virtualize the userland? I get more performance AND more stability.

Theo de Raadt agrees (0)

Anonymous Coward | more than 2 years ago | (#37147336)

"x86 virtualization is about basically placing another nearly full kernel, full of new bugs, on top of a nasty x86 architecture which barely has correct page protection. Then running your operating system on the other side of this brand new pile of shit. You are absolutely deluded, if not stupid, if you think that a worldwide collection of software engineers who can't write operating systems or applications without security holes, can then turn around and suddenly write virtualization layers without security holes."

http://kerneltrap.org/OpenBSD/Virtualization_Security

Re:Theo de Raadt agrees (1)

JamesTRexx (675890) | more than 2 years ago | (#37147694)

He's right if you're looking only at security. Most virtualization is about consolidating separate servers that waste energy and space onto one machine, though.

Linus is wrong (1)

Anonymous Coward | more than 2 years ago | (#37147340)

Linus is wrong. Have a good day.

So is he going to buy me a room full of computers? (2)

Kenja (541830) | more than 2 years ago | (#37147350)

If not, then I'm going to stick with my virtual machines.

Linus Torvalds is... (1, Interesting)

vranash (594439) | more than 2 years ago | (#37147384)

... the John Carmack of open source *nix kernels. Seriously, what has he personally done in the past 5 years other than fsck us over first with BitKeeper and then Git, a decade-long string of incompatible 2.6.x releases, and finally, in order to 'me too' the bad judgements of other open source companies, releasing a half-baked kernel as 3.0 that might as well have been called 2.7 or 2.8 for all the new features it provides? (That is to say... none?)

Re:Linus Torvalds is... (0)

StuartHankins (1020819) | more than 2 years ago | (#37147512)

Armchair quarterback: I bet if I added up your accomplishments against those of Linus Torvalds, you would be found wanting.

Complaining doesn't solve anything. Get off your duff and get involved in whatever way you can contribute.

linus is gay (0)

Anonymous Coward | more than 2 years ago | (#37147474)

Who the fuck cares what the Linux guy thinks about virtualization, I mean really. So the nerdy-looking fuck can't code against the raw hardware; cry me a fucking river. The benefits outweigh the gay

Finally (0)

Anonymous Coward | more than 2 years ago | (#37147668)

I have Linux God on my side.

What is the point of running a VM and 100 copies of the same O/S on top of it, running little (and some big) daemons inside these guest O/S's, and claiming that you are saving electricity or some such nonsense, when you could run all of those servers on the bare metal + original O/S? Bad programming may make it tempting to run stuff in VMs, but they won't suddenly become good programs no matter what.

And now on future slashdot... (1)

jellomizer (103300) | more than 2 years ago | (#37147718)

So now on future Slashdot, instead of a bunch of people praising virtualization, we are going to get a bunch of mindless sheep condemning it.
Much like how RMS got Slashdot to lose its love for cloud computing.

He's taking the piss. (1)

Jimbookis (517778) | more than 2 years ago | (#37147762)

It sounds to me like you 'Merkins are taking what's said literally, as usual, and not seeing the humorous subtext in what he's saying. It seems to me virtualisation holds no real fascination for Linus, but he's not against it either. I think he likes to throw some flamebait around for fun to get the slobbering masses frothing on forums like Slashdot. And you all fell for it like the Nazis you are.

Well, I think... (0)

Anonymous Coward | more than 2 years ago | (#37147902)

Well, I think virtualization is "awesome."

Grandpa Simpson (1)

Capt.DrumkenBum (1173011) | more than 2 years ago | (#37147970)

Did anyone else hear Grandpa Simpson saying "Virtualization is evil I tells ya. EVIL!!!!" in their head while reading that?

Perhaps I really am strange then.

A Hypervisor IS AN OS (1)

TheBrutalTruth (890948) | more than 2 years ago | (#37148004)

What a tool. And VMing is where it's at. Get over it, or go play with your P1 RHEL box that is slower than my grandpa pooping. _ Disclaimer: I make a living, and a damn good one, implementing storage/virtualization. I won't back down from advocating more efficient use of computing power/resources either. I do it at work, I do it at home (VM templates work great with haphazard kids & family). Let's VM Linus!

have the best of both with kvm (0)

Anonymous Coward | more than 2 years ago | (#37148040)

where Linux IS the virtualization hypervisor, not some dubious other kernel (Xen).

KVM is where it's at, baby.
