
Godfather of Xen On Why Virtualization Means Everything

samzenpus posted more than 2 years ago | from the more-real-than-real dept.

Open Source 150

coondoggie writes "While conventional wisdom says virtualized environments and public clouds create massive security headaches, the godfather of Xen, Simon Crosb, says virtualization actually holds a key to better security. Isolation — the ability to restrict what computing goes on in a given context — is a fundamental characteristic of virtualization that can be exploited to improve trustworthiness of processes on a physical system even if other processes have been compromised, he says."

150 comments

OS design fail (5, Insightful)

Animats (122034) | more than 2 years ago | (#37942316)

If OSs hadn't failed so bad on isolation, we wouldn't need so much virtualization. "Virtual machine monitors" are just operating systems with a rather simple application API. Microkernels, if you will.

Re:OS design fail (-1)

Anonymous Coward | more than 2 years ago | (#37942370)

Wow. Thanks for pointing that out. I bet you think you're insightful and shit too? Go fuck yourself, Bruce.

Re:OS design fail (4, Interesting)

bolthole (122186) | more than 2 years ago | (#37942410)

True.

Plus, the minute you start sharing things within a virtual machine (i.e. Apache, CGI-type middleware, and the database all on the same machine), you've just lost all the "extra" security from virtualization. You may keep the top-level OS "protected", but who cares; you've lost private data from your database through a hole in Apache (or whatever). Ooops....

The problem of security is slightly improved if you run each thing on a separate virtual machine on the same hardware. You should in theory get relatively fast interconnects, if your VM is any good. But you're still losing efficiency, unless you're doing "zones" or something like that.
And it's 3x the headache to manage 3 separate instances of OSes, for what is in effect just one top-level system anyway.

Re:OS design fail (1)

Eil (82413) | more than 2 years ago | (#37943868)

The problem of security is slightly improved if you run each thing on a separate virtual machine on the same hardware. You should in theory get relatively fast interconnects, if your VM is any good. But you're still losing efficiency, unless you're doing "zones" or something like that.
And it's 3x the headache to manage 3 separate instances of OSes, for what is in effect just one top-level system anyway.

Well, nobody (or at least, nobody sane) does it like that. There is no non-trivial datacenter that virtualizes the different components of their server stack on the same physical machine just because they think it's going to buy them any extra security. They're going to have a web server farm over here, some application server blades over there, a database cluster on the other side of the room, and perhaps a row dedicated to SAN, document storage, backups, and so on.

Virtualization is more typically used as a key part of a larger system to rapidly deploy new hosts on demand and take better advantage of the incredible power of today's hardware by partitioning it down into smaller chunks. The only time "security" enters into it is the fact that you always separate hosts based on what they do and who should have access to them, which you would do with physical machines anyway.

About the Article (ATA) (0)

Anonymous Coward | more than 2 years ago | (#37943938)

The article is a whole load of marketing BS; he obscures the real truth by telling half-truths to feed the marketing machine.

It's all BS.

Re:About the Article (ATA) (1)

TheRaven64 (641858) | more than 2 years ago | (#37945762)

Not surprising. Simon Crosby was the CTO of XenSource before it was purchased. He was in charge of marketing. Keir Fraser and Ian Pratt were the two who were in charge of the technical side.

Re:OS design fail (1)

Nikker (749551) | more than 2 years ago | (#37944584)

Maybe if you are used to setting up development systems, but in the enterprise these are all different machines. You might have both an IIS server and an Apache server feeding off a common database; you don't want a fault in one client to take down more than one system, and the list goes on. If you only plan to have 3 clients you might not want to hire a specialist for such a small VM setup, but as long as you are taking nightly snapshots there is not much that can go wrong that you can't fix with a basic knowledge of your hosting VM. Maybe it's worth sending your sysop out for a training session? It might not be right for every business, but it does have its purpose.

Re:OS design fail (3, Insightful)

White Flame (1074973) | more than 2 years ago | (#37942420)

OSes haven't failed as a whole. The current desktop/server ones just haven't caught up to and rediscovered the proper design principles of the old mainframes.

Re:OS design fail (3, Informative)

betterunixthanunix (980855) | more than 2 years ago | (#37942450)

Funny how virtualization was started on mainframes...

Plenty does (2)

Sycraft-fu (314770) | more than 2 years ago | (#37942976)

Reason is that money isn't a concern there, reliability is. So you can throw tons of technology at making something work well. There's plenty of stuff that mainframes do that we'd love to see on normal computers. The problem is being able to implement it at an acceptable level of performance and at an acceptable cost.

Re:OS design fail (4, Interesting)

TheRaven64 (641858) | more than 2 years ago | (#37945796)

The difference is, mainframes did it properly. The first system to support virtualisation was VM/360. It didn't just support virtualisation, it supported recursive virtualisation. This meant that any VM could contain other VMs, so you could use the same abstraction for isolation at any level. Operating systems provide a very limited form of virtualisation: processes. A userspace process is basically a VM for a paravirtualised architecture. Any time it wants to talk to the hardware, it has to go via the kernel. The problem is, it stops there. A process can't contain other processes which can only contact the kernel via the parent process, so programs end up adding their own ad-hoc isolation mechanisms. Things like the JVM, web browsers, even office apps all need to run untrusted code but have to isolate it without any help from the hardware. Fortunately, modern systems are providing things like capsicum, sandbox, and systrace, so it is now possible to create a child process with very restricted privileges.
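
For concreteness, here is a minimal sketch of that last point using FreeBSD's Capsicum: a child process enters capability mode, keeps using a descriptor it already holds, but can no longer open new files. The file paths are just examples.

    /* Minimal sketch (FreeBSD Capsicum): the child enters capability mode,
     * after which it can keep using descriptors it already holds but can
     * no longer open new files or sockets. */
    #include <sys/capsicum.h>
    #include <sys/wait.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/etc/motd", O_RDONLY);    /* opened before sandboxing */
        pid_t pid = fork();
        if (pid == 0) {                           /* untrusted child */
            if (cap_enter() != 0) {               /* enter capability mode */
                perror("cap_enter");
                _exit(1);
            }
            char buf[64];
            ssize_t n = read(fd, buf, sizeof buf);    /* still allowed */
            printf("read %zd bytes from the pre-opened fd\n", n);
            if (open("/etc/passwd", O_RDONLY) == -1 && errno == ECAPMODE)
                printf("new open() correctly refused in capability mode\n");
            _exit(0);
        }
        waitpid(pid, NULL, 0);
        return 0;
    }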

Re:OS design fail (1)

Hatta (162192) | more than 2 years ago | (#37942438)

Indeed. And operating systems are moving in that direction with more and more emphasis on sandboxing. Full virtualization is really overkill for privilege separation.

overkill...but necessary (1)

Radical Moderate (563286) | more than 2 years ago | (#37942902)

Among other things, I'm responsible for a cluster of windows terminal servers, which users never fail to find creative ways of breaking. Yes, Windows sucks, but it's necessary to run the software my customers use, so there is no alternative. Virtualization may be overkill in theory, but in reality it may be the only way to keep users from hosing our systems. Would be different if MS knew how to properly design an OS, but if wishes were ponies......

Re:overkill...but necessary (1)

wmbetts (1306001) | more than 2 years ago | (#37943022)

Yes, Windows sucks, but it's necessary to run the software my customers use, so there is no alternative.

VOTE WITH YOUR WALLET! Refuse to use that software and stand strong! /s

Re:overkill...but necessary (2)

bigstrat2003 (1058574) | more than 2 years ago | (#37943068)

It's software his customers use, so it's not his decision. If he refuses to support it, his customers will indeed vote with their wallets, but it won't be Microsoft that loses in that bargain.

Re:overkill...but necessary (0)

Anonymous Coward | more than 2 years ago | (#37943186)

It's software his customers use, so it's not his decision. If he refuses to support it, his customers will indeed vote with their wallets, but it won't be Microsoft that loses in that bargain.

I hope the MS fanboys understand one thing: THIS is why Microsoft sucks so bad. All you Softies who talk about features just don't fucking get it but you want to pretend you get it. They are all about the vendorlock because they are terrified of open competition on a level playing field. That should tell you how much faith they have in the merits of their products. Why you brainwashed fanboy apologists have more faith than they do is quite a mystery. Maybe you see them as "your team" so now they are an extension of your own ego? If so you are a useful idiot to their marketing machine.

If the discussion were about any sort of Unix or Unix-like system, you could just about substitute any one of them for another as a drop-in replacement. It is Microsoft that hates the kind of standards compliance that would mean having such choices. They don't want you to have a choice. That is not how they do business. They want to corner you. It's exploitative. It only happens because most end users and PHBs who make these purchasing decisions have no clue about how different OSes and technologies work. It is the product of ignorance. People who are well informed might make different choices, might see vendor lock-in for the trap that it is (and the customer-hostile practice that it is) and not get locked in in the first place. Microsoft thrives on ignorance and lack of choice.

Re:overkill...but necessary (1)

shentino (1139071) | more than 2 years ago | (#37944428)

Not to mention hardware vendors that get away with making shitty circuitry and then hiding the problems behind windows only drivers.

Some info 4U (on Citrix/TS problems) (-1)

Anonymous Coward | more than 2 years ago | (#37943600)

I've been running & setting up TS since it was Citrix, as far back as 1996 for Bell South's workforce, on laptops no less (Windows NT 3.51 over 56k modems). It was said to be impossible, but between NT & Ascend gateways, we did it!

So, that said?

Well - IF you have problem apps that are "trashing you", or rather your TS setups? Then, though this MAY BE A "BANDAID ON A BULLETWOUND", there are settings in Citrix/TS for "bad apps" that LIMIT them from overrunning everything else, & they've been present since the late 1990's - look into it!

IIRC, I did a post on Ars Technica about it circa 2000/2001, here -> http://arstechnica.com/civis/viewtopic.php?f=17&t=1011866&start=40 [arstechnica.com]

SPECIFICS:

1. Run Regedt32.exe and locate the following key:
   HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Terminal Server\Compatibility\Applications

NOTE: The above registry key is one path; it has been wrapped for readability.

2. Double-click the Applications subkey to reveal several pre-defined settings. Select SETUP under the Applications subkey. The following values are displayed on the right side of the Registry Editor window:

   FirstCountMsgQPeeksSleepBadApp:REG_DWORD:0xf
   Flags:REG_DWORD:0x8
   MsgQBadAppSleepTimeInMillisec:REG_DWORD:0
   NthCountMsgQPeeksSleepBadApp:REG_DWORD:0x5
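
For anyone who would rather script it than click through Regedt32, here is a rough Win32 C sketch of setting one of the values above programmatically. It targets the SETUP entry from step 2; the sleep interval is just an example value, and it needs to run elevated and link against advapi32.

    /* Rough sketch (Win32, link with advapi32): set the
     * MsgQBadAppSleepTimeInMillisec value described above for the SETUP
     * entry. Run elevated; adjust the subkey for your own "bad app". */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HKEY hKey;
        const char *path = "SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\"
                           "Terminal Server\\Compatibility\\Applications\\SETUP";
        DWORD sleepMs = 100;   /* example value: make the bad app yield 100 ms */

        if (RegOpenKeyExA(HKEY_LOCAL_MACHINE, path, 0, KEY_SET_VALUE, &hKey) != ERROR_SUCCESS) {
            fprintf(stderr, "could not open %s\n", path);
            return 1;
        }
        if (RegSetValueExA(hKey, "MsgQBadAppSleepTimeInMillisec", 0, REG_DWORD,
                           (const BYTE *)&sleepMs, sizeof sleepMs) != ERROR_SUCCESS)
            fprintf(stderr, "could not set value\n");
        RegCloseKey(hKey);
        return 0;
    }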

This was VERY KEY to my research in it...

(Except I kept the attack on the problem in code, where it SHOULD be done, saving this fix for programs where the source is not available... the OS was not at fault, but the middleware, client load, Citrix sessions, and grids we used WERE! Proof was in the pudding & the success of going from 100% CPU use & freeze-ups to 2%, with client loads of 500-1000 folks on it at once, on a WAN, with Citrix sessions galore out to the warehouses!)

On the registry hack?

No guarantee that this OS-level reg hack is the RIGHT thing to do on an already taxed-out server, either! It probably would work, but only as a last resort in my book at least. Here's why:

Correct the code first if you can! Not the OS... it's NOT at fault usually!

Hacking the registry for this fix is Bad Business, Bad Logic, unless YOU HAVE NO CHOICE!

You correct the app according to MS/Citrix constraints on TS/WinFrame/MetaFrame design considerations & save this "BadApp" list for apps you CANNOT correct!

* There you go, hope that helps...

See - last time I ran into this was 2000/2001, & what REALLY helped later, was even better/simpler/smarter:

Later, I was a coder on a team building a client/server program for multiple campuses using VB over TS... we had an app flooring us. It never happened on the local network, though - only through Citrix/TS, to remote campuses miles away.

It ended up being because of the Oracle middleware we used (OO4O, faster than ADO for writes back to the DB engine on SunOS)...

That middleware driver, like most, was designed to MAX OUT CPU usage for speed - no time slicing/ceding back CPU time to other running processes!

(Which is FINE on normal workstations over a network, but not on Citrix/TS - because that's really, as I am sure you know, a SINGLE server shared out to many "slave" TS sessions.)

What cured it? Using SLEEP API calls in the data-processing loops that populated the shop floor's tracking-system grid controls... every so many loops, we would "sleep" for 30 ms or so in each shared TS session... problem went away!
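
The same idea in a minimal C sketch (the original fix was in VB; the loop bound and sleep interval here are made-up example numbers):

    /* Illustrative sketch: in a tight data-processing loop, cede the CPU
     * every N rows so one Terminal Server session can't starve the others. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        int total_rows = 100000;            /* pretend result set */
        for (int row = 1; row <= total_rows; row++) {
            /* ... fetch the row and add it to the grid control here ... */
            if (row % 200 == 0)
                Sleep(30);                  /* yield ~30 ms back to other sessions */
        }
        printf("grid populated without pegging the shared CPU\n");
        return 0;
    }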

Curing at the PROBLEM APP's LEVEL - the BETTER smarter way, absolutely

(However, as you know, you need the source code of said app to do it - & I programmed that one with 4 other guys over a year: roughly 1 million lines of VB code, & tons more in stored procs & SQL statements on Oracle too.)

I.E. -> CPU usage on TS sessions was what was flooring it... & that was because we initially designed it normally, like you do ANY client/server app, & we were NOT aware of how the middleware would behave under TS!

(We found out, it almost killed the project, & I got LUCKY finding that the answer was Sleep calls in the VB data-processing fill loops... especially when std. DoEvents calls would NOT work - &, oddly, that method's BASED on the Sleep API call too!)

This may seem "dumb" also, but... you can always attempt to throw more "hardware" @ it (faster CPUs, SSDs, more RAM) as well. It's costly, but it can help, as I am sure you are aware, provided your machines are not @ their "limits" already or your budget's exhausted! Upgrades, especially hardware ones, help... good luck & again, I hope that helps you.

* Yes, I KNOW what it's like on Citrix/TS @ times, especially with apps that behave excellently on normal LAN/WAN setups but not over Citrix... up there above are some things that DO & CAN help.

APK

P.S.=> As to the rest of your post? Windows HAS forms of virtualization, even without hypervisors (they have those too) - look @ Windows 7's Task Manager & check off the "UAC Virtualization" column as visible - you can set ANY type of running process into a secluded registry area PER USER, rather than system-wide... for starters.

There are also apps like Sandboxie that literally create analogs of *NIX 'sandboxing' (sort of like a chroot jail), & of course there is the MS hypervisor too (which only works on MS OSes, last I knew)...

... apk

Re:overkill...but necessary (1)

gweihir (88907) | more than 2 years ago | (#37943806)

For Windows, I actually agree.

And I have seen one other important application on Windows: assume you have some MS server software that can only handle 200 users or so (there are a few). If you have, say, 20,000 users but only ever 100 active concurrently, with traditional Unix server software you would just use it directly. With Windows, you can put 100 virtualized installations on the same hardware.

I am halfway convinced that this example is the real reason virtualization is so successful, namely lack of scalability in the MS world.

Re:OS design fail (0)

Anonymous Coward | more than 2 years ago | (#37943260)

Emphasis on sandboxing? Well, really, what there is is a set of OSes (UNIX-like ones, VMS, etc.) that had separate users and groups for stuff that should be separated (on a modern Linux distro, Apache & MySQL have their own users, for instance), all of which are separated from each other, can only read each other's files based on permissions, and can not modify the kernel and such, or access memory outside their memory space. I.e. sandboxed. Chroot jails can also be used to ensure things stay separated. But, since OSes like Windows do not use these capabilities (NT *does* have the permissions, they just don't seem to use them properly...), now it's like "Oh, apps have to be sandboxed!!!(one)1..." and this whole separation is reimplemented in Java, or .NET, or app-container-of-the-week. Then THAT wasn't trusted, and now it's "Oh, completely separate virtual machines are needed for security!!"

I won't speak ill of virtual machines; I think they are nice and useful. But I don't think they are a panacea (as Simon Crosb contends), and conversely they certainly don't create any more security headaches than having a similar number of physical machines.
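
To make the point above concrete, a minimal sketch of that kind of per-service separation on a Unix-like system: confine the service to its own directory and unprivileged user before it touches untrusted input. The path and uid/gid here are made-up examples.

    /* Minimal sketch: confine a service to its own directory and
     * unprivileged user before it handles untrusted input.
     * The jail path and uid/gid are made-up examples; must start as root. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        const char *jail = "/var/empty/example-service";   /* hypothetical dir */
        uid_t svc_uid = 5001;                               /* hypothetical uid */
        gid_t svc_gid = 5001;

        if (chroot(jail) != 0 || chdir("/") != 0) {
            perror("chroot");
            return 1;
        }
        /* Drop group first, then user; once this succeeds there is no way back. */
        if (setgid(svc_gid) != 0 || setuid(svc_uid) != 0) {
            perror("drop privileges");
            return 1;
        }
        printf("now confined: cwd is the jail, uid=%d\n", (int)getuid());
        /* ... service loop handling untrusted input goes here ... */
        return 0;
    }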

Re:OS design fail (1)

gweihir (88907) | more than 2 years ago | (#37943784)

Actually privilege separation done right is far superior, as you can do application-integrated intrusion detection rather easily at the internal interfaces on the separation lines. Virtualization does not give you anything like that or the fine-grained access control model either.

Re:OS design fail (3, Informative)

jd (1658) | more than 2 years ago | (#37942536)

You're correct. A security kernel that is provably (and proven) correct is hard to design, but has been doable for a long time. Any "Trusted" (as opposed to "Trustable" - which means "you can't actually trust it at all") OS is built around a verifiable level of isolation. (For example, if prior to the Common Criteria, you'd wanted Linux to be an A1-class OS, you could have done it even though Linux wasn't specified out from the start. A1 was perfectly achievable if the security kernel alone was specified from the start and the rest of the OS was merely audited to prove everything went through it.)

Even that is unnecessary, though. GRSecurity went belly-up because there were not enough developers interested in it and no funding for it at all. Problems any of the commercial distros could have fixed in a heartbeat and any of the major vendors (IBM, you listening?) SHOULD have fixed in a heartbeat. That wasn't perfect isolation but it was vastly superior to what we currently have which is too limited in scope and too limited in usage.

Remember, though, this last bit only applies to Linux. Some of the BSDs have MAC of some sort, but not all, though all of them could have it tomorrow if they wanted.

Windows - the only relationship it has with MAC is the British image of a dirty old man in a raincoat. But even there, where was the necessity? It has a built-in hardware abstraction layer and a few other key areas that could, quite easily, have all linked up with a proper security kernel. Instead, we've got BS and I don't mean it earned a degree.

Re:OS design fail (1)

causality (777677) | more than 2 years ago | (#37943224)

Even that is unnecessary, though. GRSecurity went belly-up because there were not enough developers interested in it and no funding for it at all.

Do you refer there to a company that was also called GRSecurity? Because I'm running a Gentoo Hardened system right now with both PaX and GrSecurity integrated into the kernel (coupled with a hardened toolchain and various userspace features). That is one reason it was worthwhile to me to build from source -- well that and USE flags but this would be another discussion.

If the company going under was what caused the work of the same name to become GPL software, this may have actually increased its availability and usage.

Re:OS design fail (1)

jd (1658) | more than 2 years ago | (#37944200)

No, it was the GPL patch folk. If you look through the old news, you'll see the announcement that they lost their sponsor. They later announced - I think on LWN - that they were indeed stopping all work. Well, obviously they got the money they needed so fortunately I'm wrong in thinking that this had continued into the present day. Nonetheless, for a while they were zombified.

Re:OS design fail (2, Insightful)

Anonymous Coward | more than 2 years ago | (#37943434)

The higher security certifications start to have WEIRD consequences for a general-purpose system; we went over these a bit in computer science.

For instance, under the (apparently now obsolete) Orange Book ratings, C2 is pretty normal; NT4 (not on a network) was certified to this level, and certified versions of HP-UX, Irix, VMS, etc. were sold back in the day at level C1.

To get a B1 rating? Well, for one example, "covert communications" channels are banned -- so, no pipes, no SysV shared memory... but ALSO no conventional UNIX signals. A B1 OS cannot even tell you a load average, CPU usage, or other types of info "top" shows, because a process could modulate its CPU usage or renice/unrenice itself to pass information covertly.

Re:OS design fail (2)

jd (1658) | more than 2 years ago | (#37944174)

In theory, there are exceptions. In practice, you're so close to 100% right that I'd need extended floats to find the exceptions.

By the time you get into the Bs and As, which is where MAC gets involved, MAC is considered to encompass ALL communications, ALL memory management as well as ALL program access, except where otherwise noted. (Orange Book doesn't cover all the uses of MAC, so the Orange Book definitions alone aren't enough.)

Thus, it is possible to have SYSV shared memory in B1, but all processes sharing memory have to have MAC labels such that you can't violate MAC through shared memory AND the memory being shared has itself to be in a region of memory that is authorized for access by that MAC label. In fact, you're supposed to have to security label even TCP/IP packets and not permit access control violations via regular networking. (This is in part why the anonymous coward's reference to MIC doesn't cover Orange Book-rated MAC.) Because nobody in their right minds implements Unix pipes or SYSV SHM with security labeling, only those in their left minds have non-covert forms of these. The rest of the population, as you correctly say, can't use top, kill, or almost anything else that forms the lifeblood of Unix use.

Because Orange Book MAC is bi-directional (unlike MIC, which is uni-directional), you cannot access material that is on either side of the MAC classification. This is good, in some respects. You can't transfer a virus from an unknown source to a destination that has a different MAC classification, for example. The net result is that anyone going for B-rated OS' under Orange Book is likely to go with the weaker-end of the spectrum. The higher-end is simply too restrictive or, because there's a hell of a lot of overhead involved in de-coverting all the Unix communications channels, too expensive on the CPU and on the wallet.

In consequence, even though solutions to the covert communications channel problem are permitted by the Rainbow series, almost nobody uses them.
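
A toy sketch of the distinction drawn above, with labels reduced to plain integers: a bi-directional MAC check (as described here) only allows access at the same classification, while a uni-directional MIC-style check only blocks writes from lower-integrity subjects to higher-integrity objects.

    /* Toy illustration of bi-directional MAC vs uni-directional MIC,
     * as described in the comment above. Labels are just integers here. */
    #include <stdbool.h>
    #include <stdio.h>

    /* Bi-directional: no access across the classification in either direction. */
    static bool mac_allows(int subject_label, int object_label)
    {
        return subject_label == object_label;
    }

    /* Uni-directional (MIC-like): writes only sideways or down. */
    static bool mic_allows_write(int subject_integrity, int object_integrity)
    {
        return subject_integrity >= object_integrity;
    }

    int main(void)
    {
        printf("MAC: label 2 accessing label 3 object: %s\n",
               mac_allows(2, 3) ? "allowed" : "denied");
        printf("MIC: integrity 2 writing integrity 3 object: %s\n",
               mic_allows_write(2, 3) ? "allowed" : "denied");
        printf("MIC: integrity 3 writing integrity 2 object: %s\n",
               mic_allows_write(3, 2) ? "allowed" : "denied");
        return 0;
    }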

Re:OS design fail (0)

Anonymous Coward | more than 2 years ago | (#37945194)

Whatever happened to the seL4 kernel?

Re:OS design fail (1)

jd (1658) | more than 2 years ago | (#37945204)

Good question. You've got me curious now, I'll look that up. It was one of the L4-based Linux microkernels?

Re:OS design fail (1)

NeoMorphy (576507) | more than 2 years ago | (#37942630)

I agree. Most people don't realize that a proper OS shouldn't need virtualization for security. They're basically saying that it's impossible to make an OS secure, and then they create a solution that is really an OS that can run other OSes. Except this OS is "different".

I can understand virtualization being used to consolidate multiple servers onto larger servers; you can use fewer network adapters and even aggregate them, and decrease the network cabling/switch infrastructure. You can have multiple megaservers and move virtual servers to balance workloads, to recover from hardware failures, or to migrate from old hardware to new. Essentially, you're replacing bulky infrastructure with chips.

But to use for security? That's as lame as installing anti-virus software because you know your OS can't handle security. And since McAfee is in favor of this, I'm sure it's a scam to get companies to pay for yet another layer that can cause system problems that nobody can figure out.

Re:OS design fail (2)

Grishnakh (216268) | more than 2 years ago | (#37943188)

I can understand virtualization being used to consolidate multiple servers onto larger servers

Except that, in theory, you should never need to do this: if you have a bunch of servers running various processes, and want to consolidate them onto a single, larger server, you should be able to run all those processes at once on the big server. You shouldn't need to run separate OS instances for each one. The whole reason the timesharing multiuser system was invented was so that one computer could be used by lots of different people for different things all at the same time, without any of them affecting each other (except for resource constraints--the disk and CPU are shared).

The fact that we're turning to virtualization means that OSes have failed in their mission.

Re:OS design fail (2)

NeoMorphy (576507) | more than 2 years ago | (#37943950)

It's not the OS that failed, it's the applications. Different applications want the system settings changed to what they think is best, and you can't make them all happy. Granted, it should be possible, but today's application developers can be total idiots who have an egocentric view of the OS. I have Oracle support telling us we should increase the maxuproc to 16384, when it's obvious that the system will die long before that many oracle processes are running, which is defeating the purpose of maxuproc. "It's good practice", no it's not you checklist jockey. Networking settings are hard to set globally for everyone. You would think that any decent application would use setsockopt, but not too many do.
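
The kind of per-application tuning being described might look like the following sketch: the app sets options on its own socket instead of relying on changed system-wide defaults. The buffer size is an illustrative value, not a recommendation.

    /* Sketch of per-application socket tuning via setsockopt. */
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int s = socket(AF_INET, SOCK_STREAM, 0);
        int rcvbuf = 256 * 1024;   /* application-chosen receive buffer */
        int nodelay = 1;           /* disable Nagle for this socket only */

        if (setsockopt(s, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof rcvbuf) != 0)
            perror("SO_RCVBUF");
        if (setsockopt(s, IPPROTO_TCP, TCP_NODELAY, &nodelay, sizeof nodelay) != 0)
            perror("TCP_NODELAY");

        printf("socket tuned per application, not via global settings\n");
        close(s);
        return 0;
    }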

Another problem is applying patches. If you have too many applications running in the same virtual machine, forget about finding a common window to apply them. A lot of vendors aren't that quick about supporting the OS at the latest patch level, you can even have multiple applications that can't run on the same version of Linux or AIX because one requires the latest version and another hasn't been certified on the latest so they still only support a much older version.

Re:OS design fail (1)

dbIII (701233) | more than 2 years ago | (#37946056)

It's not the OS that failed, it's the applications. Different applications want the system settings changed to what they think is best

That's the sort of thing Solaris zones are for. Whether they deliver or not is a question for somebody else who has used them extensively.

Re:OS design fail (0)

Anonymous Coward | more than 2 years ago | (#37943958)

Only if your secure megaserver is the same OS as the former servers.

The biggest use we have right now at my work is consolidating multiple servers running different OS's under different names and configurations into beefier, healthier hardware while we gradually reduce, upgrade, or merge software together.

Admittedly there are some things that can be done, but what do you do when you have a legacy app that needs SQL 2000 and Office 2000 for its reporting system, while you also need Exchange 2010 for the mail server, Office 2007 for the primary business app, and terminal services for 4 or 5 remote connections? No matter the level of security of the underlying OS, conflicting requirements make virtualization a good choice for condensing hardware at times.

Re:OS design fail (1)

Grishnakh (216268) | more than 2 years ago | (#37944008)

Right, but again this is an OS failure. It shouldn't (in theory) matter what version of an OS you have, as long as it's not too old; there should be no such thing as a "legacy app" that only runs on a legacy OS, it should be possible to run the old app on a new OS version without any issues whatsoever. The fact that this isn't the case shows that there's a giant failure in OSes.

Re:OS design fail (2)

drsmithy (35869) | more than 2 years ago | (#37944358)

Right, but again this is an OS failure. It shouldn't (in theory) matter what version of an OS you have, as long as it's not too old; there should be no such thing as a "legacy app" that only runs on a legacy OS, it should be possible to run the old app on a new OS version without any issues whatsoever. The fact that this isn't the case shows that there's a giant failure in OSes.

If someone writes their application to use deprecated (or, worse, undocumented) APIs and features, then its failure to run in more recent versions where said APIs and features no longer exist, or no longer have the same quirks, is not a failure of the OS.

The use of hardcoded paths is another major screwup applications developers seem to love making.

Re:OS design fail (1)

Grishnakh (216268) | more than 2 years ago | (#37944576)

Well first, if there's undocumented APIs, that's absolutely an OS failure. There shouldn't be any undocumented APIs, period. There's no good technical reason for such a thing.

Anyway, APIs shouldn't be deprecated. Programs written for the standard C library on a Unix system back in the 80s will probably still compile and run fine on a modern Linux system now. And if there's "quirks" in the APIs, that again is an OS failure; the behavior of every API should be documented and well-defined.

As for hardcoded paths, what applications have that problem? I've honestly never heard of that. That's definitely an application problem.

Re:OS design fail (1)

drsmithy (35869) | more than 2 years ago | (#37944720)

Well first, if there's undocumented APIs, that's absolutely an OS failure. There shouldn't be any undocumented APIs, period. There's no good technical reason for such a thing.

Of course there is. Functionality and features only meant to be used within the OS by other OS components and not by third party applications.

Anyway, APIs shouldn't be deprecated.

Why not? Why should new capabilities always be tacked onto existing ones, building up an ever more fragile and complex environment? Why should redundant or unnecessary functionality not be removed?

You're essentially arguing APIs have to be done once, perfectly, and never changed thence. Hardly reasonable.

Programs written for the standard C library on a Unix system back in the 80s will probably still compile and run fine on a modern Linux system now.

I doubt that's true for any code doing anything particularly complicated.

As for hardcoded paths, what applications have that problem? I've honestly never heard of that. That's definitely an application problem.

Hardcoded paths are rife in consumer software. Heck, Microsoft had to go out and build a whole emulation/redirection layer for Vista so they could make the shift to a least-privileged user by default without breaking applications trying to write to either system directories, or hardcoded user paths (eg: C:\Documents and Settings\$USER vs C:\Users\$USER).
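
The alternative to hardcoding those paths is simply to ask the OS where per-user data lives. A minimal Win32 sketch (the "MyApp" subfolder is a made-up name; link with shell32):

    /* Sketch: ask Windows where the current user's application data lives
     * instead of assuming "C:\Documents and Settings\...". */
    #include <windows.h>
    #include <shlobj.h>
    #include <stdio.h>

    int main(void)
    {
        char appdata[MAX_PATH];
        if (SHGetFolderPathA(NULL, CSIDL_APPDATA, NULL, SHGFP_TYPE_CURRENT,
                             appdata) == S_OK)
            printf("write settings under: %s\\MyApp\n", appdata);
        else
            fprintf(stderr, "could not resolve the per-user data folder\n");
        return 0;
    }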

Re:OS design fail (2)

causality (777677) | more than 2 years ago | (#37943306)

But to use for security? That's as lame as installing anti-virus software because you know your OS can't handle security.

I've said for some time that anti-virus is not security. It is damage control, at best. The way it is currently marketed and commonly used, it really is a terrible substitute for the inability of an OS to maintain security. As damage control it isn't even very useful because the only correct response to a successful intrusion is to reformat and reinstall from (read-only) media that is reasonably known to be good. It is only in the Windows world of ignorant users and routine infections that anyone desires to doubt this, and even then only as an excuse to avoid many more reinstalls than already occur (which includes licensing/activation hassles and then the joy of separately reinstalling each application with no central package manager). Yet the truth is, it is a general principle and Windows is not a special exception.

The real question is, when will the general public wake up to this fact? Given enough time I consider it inevitable. So, it's just a matter of when. I wonder how McAfee and Norton and others will respond then?

Re:OS design fail (1)

drsmithy (35869) | more than 2 years ago | (#37944488)

I've said for some time that anti-virus is not security. It is damage control, at best. The way it is currently marketed and commonly used, it really is a terrible substitute for the inability of an OS to maintain security.

They are two completely different aspects of security.

OS security is the fences, the gates and the locks. It's there to stop the bad guys getting in at all.

AV security is the motion detectors, the dogs and the security guards. It's there to try and minimise the damage once the bad guys are in.

Re:OS design fail (1)

goddidit (988396) | more than 2 years ago | (#37945782)

But to use for security? That's as lame as installing anti-virus software because you know your OS can't handle security.

I've said for some time that anti-virus is not security. It is damage control, at best.

Damage control is security at its finest. We do not aim for the theoretically secure and perfect locked-down-restricted-with-airgap situation if implementing that security would be more costly than the damages in case of a compromise.

Re:OS design fail (1)

OzPeter (195038) | more than 2 years ago | (#37942682)

If OSs hadn't failed so bad on isolation, we wouldn't need so much virtualization. "Virtual machine monitors" are just operating systems with a rather simple application API. Microkernels, if you will.

Sounds like the solution might be enforcing some sort of (hmm what would you call it?? Dirt box? Dust box?? ahh that's it!!) Sandbox on applications in order to achieve the isolation you desire.

I bet if I'm quick, then I might be able to patent the iAmSparticus sandbox technique.

Re:OS design fail (1)

causality (777677) | more than 2 years ago | (#37943352)

If OSs hadn't failed so bad on isolation, we wouldn't need so much virtualization. "Virtual machine monitors" are just operating systems with a rather simple application API. Microkernels, if you will.

Sounds like the solution might be enforcing some sort of (hmm what would you call it?? Dirt box? Dust box?? ahh that's it!!) Sandbox on applications in order to achieve the isolation you desire. I bet if I'm quick, then I might be able to patent the iAmSparticus sandbox technique.

Does Windows provide no functional equivalent to a *nix chroot? That would be a good place to start, especially if you can harden it against known methods of circumvention like you can with Linux and Grsecurity. Or would a chroot be as important when you're using an OS in which not everything is a file?

If Windows has no such function out-of-the-box, are there generic third-party sandboxes that can be used with any application? For example, I understand that the Chrome browser runs in a sandbox but I don't believe you could use this same sandbox to apply appropriate (different) restrictions to something like MS Office.

Re:OS design fail (2)

JamesTRexx (675890) | more than 2 years ago | (#37945246)

Try Sandboxie [sandboxie.com] .
I've had good success with running apps and games in a sandbox with it. The only thing it lacks (although it's better security wise) is being able to pipe files between the boxes so you'll have to install programs multiple times if it's needed in more than one box (think PDF reader, zip stuff, etc.).

Re:OS design fail (2)

causality (777677) | more than 2 years ago | (#37946006)

Try Sandboxie [sandboxie.com] . I've had good success with running apps and games in a sandbox with it. The only thing it lacks (although it's better security wise) is being able to pipe files between the boxes so you'll have to install programs multiple times if it's needed in more than one box (think PDF reader, zip stuff, etc.).

Thanks for the link. You can probably tell I don't use Windows myself and haven't for some time now (back in the day I used to dual-boot with Win98 until months went by without ever using the Windows system, so I reformatted it ext2 because ext3 didn't exist at the time). So, I'm not terribly informed about specific software available for that platform.

Still, am I the only one who thinks it's terrible, borderline irresponsible that Windows doesn't come with something like this out of the box? Configured to work with major browsers and other widely-used programs? I mean compared to writing the OS, how much more effort would that have taken on the part of Microsoft? In this age of widespread malware? It's a shame that Microsoft Security Essentials doesn't provide something like this that can recognize common programs and correctly sandbox them. At least for software that is also written by Microsoft like Office.

Re:OS design fail (1)

bmo (77928) | more than 2 years ago | (#37943122)

Uh...

Virtual machines started on Big Iron. You know, the places where "real operating systems" started.

VMs have nothing to do with the failings of operating systems and security is a /side effect/.

--
BMO

Re:OS design fail (2)

chentiangemalc (1710624) | more than 2 years ago | (#37943544)

But this just moves the same issue - now I just hack the hypervisor or the host of the virtual machines and I can gain control of all the machines...

Re:OS design fail (1)

gweihir (88907) | more than 2 years ago | (#37943774)

I completely agree. Not only OS failure, also application development failure on top of that. Even today most academic programs producing people that will architect/design/write software do not include mandatory software security lectures. There are also whole important areas of operational security where virtualization does exactly nothing. One is preventing applications from being hacked and used as SPAM-relays or to hack other systems. For this you do not need a root-compromise, just hacking an application that is allowed to open network connections is quite enough.

And, as I found, sometimes rather worse stability. I have run into network problems with QEMU, UML and KVM (I have not tried Xen so far). The same application does fine when run natively with the same kernels, so I can only assume flaws in the virtualized network hardware. I know of people having similar issues with VMware.

My personal bottom line is that as with most other things, virtualized (i.e. faked) hardware is always inferior to real hardware.

Re:OS design fail (2)

Bengie (1121981) | more than 2 years ago | (#37943998)

It's not just about isolation; it's also about fail-over, live migration, etc. for any program, without requiring the programs themselves to be aware of it.

Some of the biggest things virtualization can give you are live migration and fail-over with no configuration.

Re:OS design fail (0)

Anonymous Coward | more than 2 years ago | (#37946112)

Lawl.. repeated myself. This is what happens at 11pm.

Re:OS design fail (0)

Anonymous Coward | more than 2 years ago | (#37946216)

Stating that a security problem can be resolved by an extra layer of abstraction (embeding its own critical vulnerabilities) is a joke.

Or, if it isn't a joke, that's a scam.

Re:OS design fail (1)

zmooc (33175) | more than 2 years ago | (#37946270)

OSs don't fail that bad at all. They are simply aimed at another task, namely making processes cooperate. A system designed for that task will never be the best solution for another task that aims to achieve the opposite, namely to make processes completely invisible to each other. Virtualization has nothing to do with OSs failing bad, they're just not designed to make a single piece of hardware look like 20 pieces of hardware you can rent out to 20 different customers.

VMMs therefore are not just operating systems with a rather simple application API; the simplicity of that API is one of their main features.

I doubt it... (2)

Yaa 101 (664725) | more than 2 years ago | (#37942368)

"While conventional wisdom says virtualized environments and public clouds create massive security headaches, the godfather of Xen, Simon Crosb, says virtualization actually holds a key to better security. Isolation — the ability to restrict what computing goes on in a given context — is a fundamental characteristic of virtualization that can be exploited to improve trustworthiness of processes on a physical system even if other processes have been compromised, he says"

Given the track record of the companies in IT, I really doubt his words.
It will probably become mass breaches of security made easy.

Re:I doubt it... (1)

Jailbrekr (73837) | more than 2 years ago | (#37942500)

I rolled my own RHEL5 desktop cloud. If an engineer does something stupid, the VM he has reserved dies and he reserves a new one. He doesn't impact the other virtual desktops and the VM that he crashed gets rebuilt from a single master image. This is the benefit of isolation, and it can be extended to security if you plan it right. It all boils down to the competency of the admins.

Re:I doubt it... (1)

Yaa 101 (664725) | more than 2 years ago | (#37942560)

Most people will not have the luxury of deploying their own cloud but are stuck in some IT company's cloud.

Re:I doubt it... (0)

Anonymous Coward | more than 2 years ago | (#37943676)

Sure they have that luxury, if they want it. I'm sure there's plenty of cases where various groups all want root, but none of them actually have enough load to give an individual server a serious workout. In this case "the (so-called) cloud" could just be a single server with lots of RAM after all (with "lots of RAM" for strictly Linux VMs being what could be a normal amount of RAM for just a single 2008 Server instance...) Preferably with a spare box of course.

Re:I doubt it... (1)

gl4ss (559668) | more than 2 years ago | (#37945140)

The broken security model probably comes from the fact that now all the security is in one place: it's just the cloud management that has to be broken, and then they've got everything.

Virtualisation is neat for development, though. But mostly it's still the old timeshare shit in new form.

Re:I doubt it... (0)

Alex Belits (437) | more than 2 years ago | (#37945816)

Congratulations, you are a VMWare jockey, a Windows admin that pretends he can manage Linux servers!

Please kill yourself.

Re:I doubt it... (1)

jd (1658) | more than 2 years ago | (#37942564)

His words are fine. You CAN use virtualization as a way to strengthen security, just as you can use concrete to make really strong structures. The problem is that concrete, on its own or poorly-utilized, is worthless for making much of anything.

what. (0)

Anonymous Coward | more than 2 years ago | (#37942468)

>trustworthiness of processes on a physical system even if other processes have been compromised

What.

You can't improve that.

It's zero.

Re:what. (2)

bws111 (1216812) | more than 2 years ago | (#37942624)

Zero? Based on what? IBM has EAL5 on their mainframe LPARs, which would seem to be more than zero trustworthiness.

Re:what. (1)

TheInternetGuy (2006682) | more than 2 years ago | (#37943016)

Does not a trustworthiness of zero, imply that there is infinite room for improvement?

Re:what. (0)

Anonymous Coward | more than 2 years ago | (#37945726)

That is self-evident, not because of any relation to zero, but because of the infinite untrustworthiness of human end users and their ability to stand on the shoulders of their predecessors in developing new ways to circumvent security measures. Any form of security (especially related to computers) is inherently reactive, and can be nothing more.

Hacker gains access using a new exploit, admin recovers and patches said exploit, hacker devises another new exploit (with time the only variable), admin swears, rinses and repeats ad infinitum. Hackers (real ones) make the best admins when it comes to security, but it's still a hopelessly lost battle of one man versus millions.

The overwhelming odds are that if you connect a computer to the internet, given sufficient time it will be hacked regardless of what measures you take, particularly if there is value for a hacker in gaining unauthorised access.

Hmm... (2)

fuzzyfuzzyfungus (1223518) | more than 2 years ago | (#37942488)

Is the "Godfather of Xen" the guy I need to talk to if I need the Buddha 'removed from this cycle of suffering and reincarnation', so to speak?

Re:Hmm... (0)

Anonymous Coward | more than 2 years ago | (#37942826)

No, I'm pretty sure this is the giant alien fetus you have to fight at the end of Half-Life. He sure wasn't thinking about how "virtualization means everything" when I split his head open and shot his brain.

Re:Hmm... (0)

Anonymous Coward | more than 2 years ago | (#37943580)

I'm pretty sure you were in god mode when you did that.

Re:Hmm... (2)

BitZtream (692029) | more than 2 years ago | (#37943072)

You do know the difference between Xen and Zen ... right?

Re:Hmm... (0)

Anonymous Coward | more than 2 years ago | (#37943602)

You do know the difference between Xen and Zen ... right?

The morbidly obese, end-user, group-thinking, lowest-common-denominator, soft-minded, passive, functionally illiterate world is still struggling with such fine and advanced distinctions as loose/lose, there/their/they're, and legal vs. moral/ethical. They have yet to master these simple, basic, everyday things.

I think they have several more steps to take before they are ready to tackle Xen/Zen. Or any other branch of Mahayana Buddhism. Or any other branch of computer science. Baby steps, man. Baby steps.

Re:Hmm... (0)

Anonymous Coward | more than 2 years ago | (#37945398)

You might be thinking of Xen DomU, or XenU, for short.

ad infinitum (3, Insightful)

More Trouble (211162) | more than 2 years ago | (#37942656)

And if the current level of virtualization isn't secure enough, adding another virtual layer will certainly improve security even more.

Re:ad infinitum (1)

gweihir (88907) | more than 2 years ago | (#37943820)

And if the current level of virtualization isn't secure enough, adding another virtual layer will certainly improve security even more.

No problem, qemu supports this. And you can get really low execution speeds too!

Partioning and utilization (2)

_LORAX_ (4790) | more than 2 years ago | (#37942684)

To me the biggest security win with VM's is the ability to properly size a system for what it is actually doing. No more adding "just one more" service to a box because it's got more horsepower than it needs. By doing more logical partitioning of the software you limit the commingling of data, administration, and crash risk between different services.

Re:Partioning and utilization (1)

BitZtream (692029) | more than 2 years ago | (#37944672)

No more adding "just one more" service to a box because it's got more horsepower than it needs

Yet, with virtualization, that is EXACTLY what you're doing. The only difference is that you're not just adding an Apache instance to the machine as that 'one more service' you're also adding an entire OS as well.

By doing more logical partitioning of the software you limit the commingling of data, administration, and crash risk between different services.

Isn't that what your OS is supposed to be doing? Why do you think another layer can do something that the one you're already using is incapable of?

If you're the hypervisor (0)

Anonymous Coward | more than 2 years ago | (#37942808)

who will hypervise the hypervisors?

Re:If you're the hypervisor (1)

Alex Belits (437) | more than 2 years ago | (#37945822)

It's hypervisors all the way down.

And one Windows desktop has to manage all of them.

I know why without reading anything! (2)

BitZtream (692029) | more than 2 years ago | (#37943048)

Godfather of Xen On Why Virtualization Means Everything

Well, HE thinks it means everything because without it meaning everything he is irrelevant.

He also seems to think his OS is different than every other OS that came before it.

Virtualization is just another layer of software to exploit. The real problem is that it allows idiots who may have separated services onto physically separate devices, due to incompatibilities between various bits of installed software on those machines, to put them once again back on the same hardware with shared memory ...

Virtual machines are useful for utilizing under-utilized hardware for doing trivial things you wouldn't want to waste full hardware on and that are unimportant. ISPs are a great place for virtualization, as it lets the ISP 'sell a machine' with a lot less effort than would traditionally be required. Using the current 'virtualization' tech for security purposes just shows you're ignorant.

Adding more software and bugs does not add security, especially since you're just doing the exact same thing the original OS was supposed to do. So your argument becomes 'I'm better at it than you', and whenever that happens I run the other direction as fast as possible. If you have to tell me you're important, you aren't.

Re:I know why without reading anything! (2)

Bengie (1121981) | more than 2 years ago | (#37943736)

"Virtualization is just another layer of software to exploit, the real problem is that it allows idiots who may have separated services onto physically separate devices due to incompatibilities with various bits of installed software on the machines, now they are once again back on the same hardware with shared memory ..."

There are many real-world scenarios that are currently only supported by virtualization. If all these people think virtualization is such a crutch, then they can solve the problem. Currently they have no answers and they only QQ.

Re:I know why without reading anything! (2)

gweihir (88907) | more than 2 years ago | (#37943852)

I completely agree. And from what I have seen so far, the available virtualization systems are all actually less reliable than the same OS run on bare hardware (at least if the OS is Linux ;-). That would also imply they are less secure. For that reason, I don't think you can regard virtualization that runs as root as much of a security/isolation gain. It may even represent a net loss, except that the attackers have to invest a bit more into research. But they may gain portable attacks as a benefit.

The two I know of that are different are QEMU (which runs as a user and should be really, really hard to break out of, though slow as it is full software emulation) and UML (basically a kernel that runs as a user process). Neither runs as root, and UML even has decent speed. I use both for isolation purposes.

Re:I know why without reading anything! (1)

GPLHost-Thomas (1330431) | more than 2 years ago | (#37945602)

I completely agree. And from what I have seen so far, the available virtualization systems are all actually less reliable than the same OS run on bare hardware (at least if the OS is Linux ;-). That would also imply they are less secure.

Citation needed. Please show everyone the grave security exploits in Xen (as far as I know, there are none that are very serious if you don't use PCI passthrough, just a few hard-to-exploit DoS issues).

For that reason, I don't think you can regard virtualization that runs as root as much of a security/isolation gain. It may even represent a net loss, except that the attackers have to invest a bit more into research. But they may gain portable attacks as a benefit.

One thing you seem to fail to understand, is that to get this type of exploit, you'd need to 1/ get root (hard already) then 2/ get to the hypervisor level. So please explain to me how this is LESS safe, when you have 2 layers of exploits to find instead of just one.

Re:I know why without reading anything! (1)

GPLHost-Thomas (1330431) | more than 2 years ago | (#37945560)

Virtualization is just another layer of software to exploit

But it's a WAY smaller than the kernels you may run. On my laptop, Xen is a bit over 650KB, but the initrd image for my kernel is about 11MB. That is, 16 times smaller. I believe that Xen is more than 16 times safer than the kernel, since absolutely zero "root" exploits have been found (if you don't use PCI passthrough, which historically, has been quite worrisome).

Adding more software and bugs does not add security, especially since you're just doing the exact same thing the original OS was supposed to do.

The point is, Xen doesn't. It does only virtualization, not drivers, where most of the security exploits have been found.

So your argument becomes 'I'm better at it than you'

It's not that, I'm afraid. The Linux kernel is full of various useless, yet exploitable, (often old) device and (often old) protocol drivers, written by various people from various vendors who maybe didn't care to do much more than quick and dirty work, as fast as possible. Just look at the exploit history, and you'll see what I'm talking about. The Xen hypervisor doesn't have to suffer from this at all: the set of contributors is much smaller, and the code base is smaller too.

But that's only one side of it, and not the main one. Often, you'd get root on a server because one of the services that is running has flaws, not because of a kernel root exploit. In this case, having things isolated means you'd get root only on a server running a single service. For example, you wouldn't have root access to the MySQL database files if it was installed on another server. And that's the main point everyone is making about security: exploits have limited consequences, since not everything is running on the same server.

Re:I know why without reading anything! (2)

TheRaven64 (641858) | more than 2 years ago | (#37945850)

On my laptop, Xen is a bit over 650KB, but the initrd image for my kernel is about 11MB.

Two things here. First, the initrd image is a RAM disk containing a recovery filesystem. If you want to compare it to Xen, you need to compare it to the Xen admin tools as well as the kernel - and they are written in Python so come with 5MB of Python dependencies before you even get them to start. Secondly, 90% of the size of any modern kernel is device drivers. Xen does not contain any device drivers - it delegates all of that to the domain 0 guest (or to multiple driver domains).

I'm actually quite surprised that Xen is 650KB. That seems a lot bigger than it should be for what it actually does: schedule VMs to run on the physical CPUs and allocate pages of physical memory to them.

Often, you'd get root on a server because one of the services that is running has flaws, not because of a kernel root exploit. In this case, having things isolated means you'd get root only on a server running a single service

This makes no sense. If you compromise, say, Apache, then you can control Apache, but nothing else. Apache runs in a chroot, so you can't even see the rest of the filesystem. You only get root if there is also a local privilege escalation vulnerability in the kernel, or if the user was stupid enough to run Apache as root (and if they're going to misconfigure services in an OS, what makes you think they won't misconfigure VMs?).

BSD Jails (0)

Anonymous Coward | more than 2 years ago | (#37943220)

I can't believe that nobody here has mentioned BSD Jails. A spectacular display of the tripe incompetence that slashdot has become.

You guys better go bitch and "Occupy Wallstreet" instead of working and reading.

purpose of VMs (0)

Anonymous Coward | more than 2 years ago | (#37943388)

Aren't VMs more about reliability (availability) and protecting the user from themselves?

It's already been alluded to in other comments, but if something stuffs up on a pyewta, sometimes it requires a restart, or if a pyewta is infected by a virus, sometimes it requires a rebuild. Since the whole "cloud" phenomenon is about taking server resources out of small businesses and the like and putting them into big datacenters where they belong, it's pretty hard for a user to fix their pyewta if it's 1000 miles away or on a different continent. Using VM tools to blow away the broken VM and start a new one seems like a pretty useful feature, since it reduces the amount of work for the datacenter and hence man-hour costs can be reduced.

Also, from what I've learned, VM failover can be (virtually) instantaneous with no loss of sessions or program state (using live migration), which is pretty important given the lack of patience of most computer users nowadays.

It's also pretty hard to make a perfectly secure OS, because everybody knows the user is the most untrustworthy element of the system, but they can also be the most ingenious, clever buggers, and trying to predict how a user might exploit your server is a futile crystal-ball exercise.

Creator of X says that x is the best thing (2)

tokul (682258) | more than 2 years ago | (#37943884)

In other conferences, Microsoft says that Windows Advanced Server is the best tool for the job, drug dealers show the benefits of increased cocaine use, and Hitler says that the final solution to the Jewish question improves the German ecosystem.

Virtualization also leads to resource overbooking. If I run on two physical X5355 Xeons, I know that I have two physical X5355s at my disposal. If I run on two virtual X5355s, I can't tell whether the provider is using the same X5355s for other clients.

Re:Creator of X says that x is the best thing (1)

bWareiWare.co.uk (660144) | more than 2 years ago | (#37945286)

If you can't tell, then why does it matter?
If you can tell, but your provider is lying, then you have bigger problems anyway.

Re:Creator of X says that x is the best thing (1)

TheRaven64 (641858) | more than 2 years ago | (#37945858)

Simon Crosby is not the creator of Xen. It was created by Keir Fraser while he was doing his PhD, under supervision by Ian Pratt (it was actually created as the result of a drunken bet between Keir and Ian). They then went on to found XenSource, which was bought by Citrix. Simon Crosby (yes, his name does have a y on the end - well edited Slashdot) was brought in to do marketing for XenSource. He had very little to do with the technical side.

Bad day for Unix (0)

Anonymous Coward | more than 2 years ago | (#37944016)

First Fedora guys said Unix is bad -- you bad, bad, bad girl, your file systems are all bad, bad, bad.

Next VM (not the real VM, the Virtual Memory VM, the virtual VM) guy says -- you bad, bad, bad girl your protections are bad, bad, bad (I don't care how I write my programs, though).

Unix is going to cry all day and all night.

Who Says That? (1)

bill_mcgonigle (4333) | more than 2 years ago | (#37944862)

"While conventional wisdom says virtualized environments and public clouds create massive security headaches

Huh? Nobody I know understands this to be 'conventional wisdom'. What are they smoking?

the godfather of Xen, Simon Crosb, says virtualization actually holds a key to better security. Isolation

Yeah, we all knew that a decade ago. My simple SOHO office server is in the process of migrating from two linux boxes to one VM server with 8 VM's for role isolation. I'm no visionary or security genius - I did this for clients 3-4 years ago (I had to wait for hardware prices to fall for in-house stuff) when the technology became commodity and performant.

Still an open research question (1)

Kanel (1105463) | more than 2 years ago | (#37945338)

You want to virtualize a computer, run the program, and then check that:
* the computations have not been tampered with
* nobody has been snooping on your computations
This goal is currently out of reach. It is an open problem in computer science whether it's even possible!

The exact term is "encrypted computation". Imagine if you could not only encrypt a file, but run it after it's been encrypted! You could send the file to some cloud and run it there, without revealing _what_ is being computed or what data you use. You get the result back and safely decrypt it on your own PC.
Now if someone in the cloud tried to attack your computer program, with a buffer overflow say, or the hardware it ran on was faulty, the encrypted result would be garbage and you wouldn't be able to decrypt it. That's actually great, because it gives you a way to check if the program ran correctly or not. Just like how checksums assure you that a file has been transmitted correctly. If we had this capability, we could run any program on fast, cheap, but error-prone hardware. We could run anything on graphic cards, which make a mistake now and then, overclock CPUs far more than today, or maybe even run faster and cheaper hardware that nobody has yet built, because it would be too error prone.
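
To make "computing on encrypted data" a little more concrete: textbook RSA happens to be multiplicatively homomorphic, so multiplying two ciphertexts yields the encryption of the product. The toy key below (n=3233, e=17, d=2753) is the standard textbook example and is not remotely secure; general-purpose encrypted computation, as described above, remains the open problem.

    /* Toy illustration: textbook RSA is multiplicatively homomorphic, so
     * E(a) * E(b) mod n decrypts to a * b. Tiny key for illustration only. */
    #include <stdio.h>

    static unsigned long long modpow(unsigned long long b, unsigned long long e,
                                     unsigned long long m)
    {
        unsigned long long r = 1;
        b %= m;
        while (e > 0) {
            if (e & 1) r = (r * b) % m;
            b = (b * b) % m;
            e >>= 1;
        }
        return r;
    }

    int main(void)
    {
        const unsigned long long n = 3233, e = 17, d = 2753;
        unsigned long long a = 7, b = 11;

        unsigned long long ca = modpow(a, e, n);         /* encrypt a */
        unsigned long long cb = modpow(b, e, n);         /* encrypt b */
        unsigned long long cprod = (ca * cb) % n;        /* "cloud" multiplies ciphertexts */
        unsigned long long result = modpow(cprod, d, n); /* decrypt locally */

        printf("decrypted product = %llu (expected %llu)\n", result, a * b);
        return 0;
    }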

You come here on my daughter's wedding day... (0)

Anonymous Coward | more than 2 years ago | (#37945472)

So what you're saying is that the Godfather of Xen has an offer we can't refuse?

Attack surface (1)

Ramin_HAL9001 (1677134) | more than 2 years ago | (#37945690)

The OS+hypervisor has a larger attack surface than the OS alone, period. Unless you can prove your hypervisor is un-hackable (don't make me laugh), a virtualized system is less secure.

Even Windows, at the kernel level, is quite secure, and should be more secure than using it with a hypervisor; even a hypervisor made by Microsoft for Windows (or should I say "especially a hypervisor made by Microsoft") will be less secure than the OS alone.

Face it, most modern operating systems are secure enough to run on metal without ever allowing unauthorized access to hardware. The real hacks to worry about are at the application level and the human level, and virtualization has nothing to say about isolation there.

If Crosby were making the case that virtualization makes it easier to manage operating system instances and thus reduce human error in cloud-computing services, I would agree with him. But isolation provided by a hypervisor will never be more secure than a properly designed and tested OS running on metal.