
Researcher Releases Hardened OS "Qubes"; Xen Hits 4.0

timothy posted about 4 years ago | from the past-the-rotating-knives-yes dept.


Trailrunner7 writes "Joanna Rutkowska, a security researcher known for her work on virtualization security and low-level rootkits, has released a new open-source operating system meant to isolate the OS's components for better security. The OS, called Qubes, is based on Xen, X, and Linux, and is currently at an early alpha stage. Qubes relies on virtualization to separate applications running on the OS and also places many of the system-level components in sandboxes to prevent them from affecting each other. 'Qubes lets the user define many security domains implemented as lightweight virtual machines (VMs), or 'AppVMs.' E.g. users can have 'personal,' 'work,' 'shopping,' 'bank,' and 'random' AppVMs and can use the applications from within those VMs just as if they were executing on the local machine, but at the same time they are well isolated from each other. Qubes supports secure copy-and-paste and file sharing between the AppVMs, of course.'" Xen has also just reached 4.0; some details below.

Dominik Holling writes "With a small announcement on its mailing list, the open-source community hypervisor Xen has reached the official release of version 4.0.0 today. The new features are: 'blktap2 (VHD support, snapshot discs, ...), Remus live checkpointing and fault tolerance, page sharing and page-to-disc for HVM guests, Transcendent memory (http://oss.oracle.com/projects/tmem/).' A complete list of all changes can be found on the Xen wiki, and the source can be found on the official website and in the Xen Mercurial repositories."
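The AppVM model in the summary boils down to named domains plus an explicit policy governing what may cross between them. Here is a toy Python sketch of that concept only; it is not Qubes code, and the domain names and policy structure are invented for illustration:

```python
# Toy model of the Qubes idea: apps run inside named security domains,
# and data crosses domains only through an explicit allow-list policy.
# Illustrative sketch, NOT Qubes code or its real API.

class Domain:
    def __init__(self, name):
        self.name = name
        self.files = {}

class PolicyError(Exception):
    pass

# Explicit allow-list: (source, destination) pairs permitted to share files.
POLICY = {("personal", "work")}

def copy_file(src, dst, filename):
    """Copy a file between domains only if the policy allows it."""
    if (src.name, dst.name) not in POLICY:
        raise PolicyError(f"{src.name} -> {dst.name} denied")
    dst.files[filename] = src.files[filename]

personal = Domain("personal")
work = Domain("work")
bank = Domain("bank")

personal.files["notes.txt"] = "meeting notes"
copy_file(personal, work, "notes.txt")      # allowed by POLICY

try:
    copy_file(personal, bank, "notes.txt")  # not in POLICY: blocked
except PolicyError as e:
    print("blocked:", e)
```

The point of the sketch is the shape of the design: isolation is the default, and every inter-domain channel is a deliberate, auditable exception.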

129 comments

sounds promising (1, Redundant)

Dayofswords (1548243) | about 4 years ago | (#31764350)

I wonder how it will be when it hits stable, and what device support it will have.

Re:sounds promising (-1, Troll)

Anonymous Coward | about 4 years ago | (#31764408)

I'm writing a driver for it to support the anal dildos i'm going to jam into your asshole.

Re:sounds promising (5, Funny)

metamechanical (545566) | about 4 years ago | (#31764438)

And that, ladies and gentlemen, is the TRUE spirit of free software.

Re:sounds promising (-1, Offtopic)

Anonymous Coward | about 4 years ago | (#31764464)

But why use a dildo? It's redundant imo, fingers work just as well.

Re:sounds promising (-1, Offtopic)

Lunix Nutcase (1092239) | about 4 years ago | (#31764512)

Sanitary reasons? Would you really want to stick your fingers up some random person's ass?

Re:sounds promising (-1, Offtopic)

Anonymous Coward | about 4 years ago | (#31764574)

I'm going to AC this, but why not? A lot of people have anal sex and might sometimes put a finger up their girlfriend's ass too. I have, at least. When I fingered my own ass I used one of those rubber gloves; they work too. Or a condom. Just use a little bit of imagination.

Re:sounds promising (-1, Offtopic)

Lunix Nutcase (1092239) | about 4 years ago | (#31764646)

Did you read my post?

Would you really want to stick your fingers up some random person's ass?

I'm pretty sure that your girlfriend isn't just some random person.

mnb Re:sounds promising (-1, Flamebait)

Anonymous Coward | about 4 years ago | (#31764678)

Would you really want to stick your fingers up some random person's ass?

I'm pretty sure that your girlfriend isn't just some random person.

She is to all the Johns at the corner.

Re:mnb Re:sounds promising (-1, Offtopic)

Lunix Nutcase (1092239) | about 4 years ago | (#31764716)

Well then you should be even more worried about sticking something in her orifices.

Re:mnb Re:sounds promising (-1, Offtopic)

Anonymous Coward | about 4 years ago | (#31764934)

The risk makes it more exciting.

Re:sounds promising (0)

Anonymous Coward | about 4 years ago | (#31764728)

Tell me where I can get an 8 inch, ribbed vibrating finger.

Re:sounds promising (0)

Anonymous Coward | about 4 years ago | (#31764788)

A dildo?

Re:sounds promising (0)

Anonymous Coward | about 4 years ago | (#31764474)

that was a simple accurate response to mr dayofswords and his lack of knowledge and/or first hand experience... xen works well.

Re:sounds promising (0, Offtopic)

metamechanical (545566) | about 4 years ago | (#31764500)

That may be, but it's clearly lacking driver support for certain models of anal dildos. But not for long!

Re:sounds promising (-1, Offtopic)

Anonymous Coward | about 4 years ago | (#31764590)

That is actually a common misconception in the open source community. All dildos work well in an anus as well as they do in a vagina. Although there are no vaginas in the open source community.

Re:sounds promising (-1, Offtopic)

sopssa (1498795) | about 4 years ago | (#31764694)

That is actually a common misconception in the open source community. All dildos work well in an anus as well as they do in a vagina. Although there are no vaginas in the open source community.

Uh, I wouldn't be so sure about that. What about those rabbit dildos [blacklabelsextoys.com] that have an external part for pleasuring the clitoris? Even though men have a sensitive area called the grundle (between the balls and the asshole), it doesn't work for that and is only in the way. Then there are also dildos shaped to pleasure a woman's g-spot, which will only hurt in the ass. So no, they don't all work. That's why there are specific dildos designed for the ass, either men's or women's.

Re:sounds promising (0)

Anonymous Coward | about 4 years ago | (#31764790)

So no, they don't all work. That's why there are specific dildos designed for the ass, either men's or women's.

Most... off... topic... post.... EVER!!!!

Re:sounds promising (0)

Anonymous Coward | about 4 years ago | (#31764906)

Yunno, you really don't need to show off ALL your knowledge.

Re:sounds promising (0)

Anonymous Coward | about 4 years ago | (#31765304)

that isnt even remotely all his knowledge. his many years of being the butt slut of ballmer have taught him a ton about anal play and toys

What a FLARE is for.... (1)

llamafirst (666868) | about 4 years ago | (#31766206)

That is actually a common misconception in the open source community. All dildos work well in an anus as well as they do in a vagina.

This is misleading. Ideally all butt tools have a FLARE (getting wider on the outside part). That's because objects are, ummmm, how do you say this, more likely to be sucked up and LOST back there.

That's the most important design difference with such equipment.

Stay safe and out of the emergency rooms, everyone!

Re:What a FLARE is for.... (0)

Anonymous Coward | about 4 years ago | (#31766622)

You know it's rather disturbing that despite the high incidence of perpetual virginity among slashdotters that so many of you know way too much about anal play. Are you all trying to emulate your hero goatse?

From a pretty lady too (0)

Anonymous Coward | about 4 years ago | (#31768724)

Check that out boys!

Re:sounds promising (1)

ArcherB (796902) | about 4 years ago | (#31764794)

I'm writing a driver for it to support the anal dildos i'm going to jam into your asshole.

What port does that plug into....?

OH wait, never mind.

Would you really need a driver for that? Wait, what kind of "driver" are we talking about here? Would a mallet work? Kinda gives "RAM-drive" a new meaning, eh?

(I better stop)

Re:sounds promising (0)

Anonymous Coward | about 4 years ago | (#31765162)

THAT's why I always read the EULA...

Virtualization doesn't work vs. file macrovirus (3, Interesting)

rsborg (111459) | about 4 years ago | (#31764430)

A document that's infected would still need to be opened, and thus presents a vector that needs to be scanned against. Given the recent PDF exploit issues, I think this is still a large attack surface, still necessitating virus scanners (and app firewalls).

Still, this is a great advancement... will be interesting to see what performance impact this has.

Re:Virtualization doesn't work vs. file macrovirus (1, Interesting)

sopssa (1498795) | about 4 years ago | (#31764528)

Sure, but nothing works against every threat; you will never find a single perfect defense. That doesn't mean it's useless to harden the individual parts, in this case the OS components, to keep rootkits away.

It's dead easy to write a rootkit for the existing operating systems. On Windows you need to hook the system APIs. On Linux it's even easier: just replace the system executables. You even have the source code for them, making it really easy to add a simple check that hides any file/process whose name contains a marker like "~abc~".
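For balance: the trojaned-binary trick described above is exactly what file-integrity checkers (tripwire, AIDE, `rpm -V`) are built to catch. Below is a minimal stdlib sketch of the idea, record a known-good hash of each binary and flag anything that later differs; the paths and in-memory baseline here are invented for illustration:

```python
# Minimal file-integrity check: hash binaries once, then detect any
# later replacement. A sketch of the tripwire/AIDE idea, not their code.

import hashlib
import os
import tempfile

def sha256_of(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def take_baseline(paths):
    """Record a known-good hash for each binary."""
    return {p: sha256_of(p) for p in paths}

def verify(baseline):
    """Return the paths whose contents no longer match the baseline."""
    return [p for p, h in baseline.items() if sha256_of(p) != h]

# Demo with a throwaway file standing in for a system binary.
tmpdir = tempfile.mkdtemp()
fake_ls = os.path.join(tmpdir, "ls")
with open(fake_ls, "w") as f:
    f.write("original binary")

baseline = take_baseline([fake_ls])
assert verify(baseline) == []        # untouched: clean

with open(fake_ls, "w") as f:        # "rootkit" replaces the binary
    f.write("trojaned binary")

print(verify(baseline))              # the tampered file is flagged
```

Of course, a rootkit that hooks the kernel can lie to the checker too, which is why such baselines are normally stored and verified from outside the running system.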

Re:Virtualization doesn't work vs. file macrovirus (0)

Anonymous Coward | about 4 years ago | (#31764760)

It's dead easy to make a rootkit for the existing operating systems. In Windows you need to hook the system API's. In Linux it's even easier - just replace the system executables.

Clearly you know nothing whatsoever about Linux.

You even have a source code for them, making it really easy to add a simple code that checks if file/process/etc has certain text like "~abc~" and then just ignore that file.

LOL

Re:Virtualization doesn't work vs. file macrovirus (1)

sopssa (1498795) | about 4 years ago | (#31764852)

Care to tell me why that doesn't work? It works for me. Sure, you also need to hook some calls, but even something that simple tricks most people, including good admins.

Re:Virtualization doesn't work vs. file macrovirus (2, Informative)

BitZtream (692029) | about 4 years ago | (#31766350)

It's pretty easy to make a rootkit for any PC-based OS... the real problem is getting it loaded before the main OS. Contrary to popular belief, even with the advent of hardware virtualization helpers, boot viruses that hide themselves from the main OS are nothing new and have probably been around longer than you've owned your own computer.

The rootkit simply has to load first; after that there's nothing anyone can do.

Re:Virtualization doesn't work vs. file macrovirus (2, Interesting)

0ptix (649734) | about 4 years ago | (#31764600)

I guess the idea is to run Adobe Reader in its own VM with minimal permissions. Then when it is compromised, the virus can't access anything the VM isn't allowed to. (grsec lets you do something like that with ACLs, roles, and users.) For example, if you do updates manually, you can deny the Adobe Reader VM network access, so the virus couldn't contact the outside world either. It's all a question of how much you're willing to compartmentalize your system: the more hassle you are willing to deal with, the more secure you can get it (up to a point). This also addresses another comment in this discussion claiming that shoddy coding is not addressed by this fix. The idea of this and similar OSs is to limit the damage after said shoddy coding has been exploited, not to prevent the exploitation itself.

What I don't get is how this system is really different from the ACL system of other hardened OSs such as grsec... I mean, at a high level it seems to be about the same thing: each is a different solution to the same problem of how to compartmentalize your system. In that light, what advantage does the new VM-based approach have over grsec? Compartmentalization at a lower level, maybe, i.e. even internal to the kernel? I don't know enough about how grsec is implemented to know if that is really a difference...

Re:Virtualization doesn't work vs. file macrovirus (3, Informative)

Jahava (946858) | about 4 years ago | (#31764882)

I think the idea is that you'd run different domains to protect different sets of files. You'd run your tax software in a "tax" domain, and if any PDF software got infected, it wouldn't be able to touch the "tax" domain information.

Versus locked-down operating systems, you have a valid point (and my personal issue with this approach). However, it's not without its advantages. In a standard Linux system, every userspace process has access to around 330 system calls. Each one of these is an interface into the kernel, and a bug in even one of them is enough to take over the kernel. Furthermore, any application that can load kernel modules can potentially dominate the kernel.

In the Qubes system, each domain is protected by a virtualization layer. It does have domain-to-hypervisor interfaces (similar to system calls) to allow I/O, graphics, and the copy-paste subsystem to run, but there are far fewer of them. They are oriented around a finite set of functionality (the aforementioned I/O, graphics, etc.), while system calls must exist for all userspace functionality. Therefore, as userspace applications get more complex and system calls (per-domain) increase in number and complexity, the domain-to-hypervisor interface will remain more or less static. This hopefully makes it easier to secure and lock down.

Re:Virtualization doesn't work vs. file macrovirus (1)

Zerth (26112) | about 4 years ago | (#31764730)

It does if you revert the VM after you are done. Nothing gets saved unless the infectious agent can break out of the VM. At worst, it'll send some spam if you allow the document reader VM net access.

Re:Virtualization doesn't work vs. file macrovirus (3, Insightful)

99BottlesOfBeerInMyF (813746) | about 4 years ago | (#31764798)

A document that's infected would still need to be opened, and thus presents a vector that needs to be scanned against.

If the PDF viewer is running in a separate VM container, however, what exactly do you think the malware is going to accomplish? Read your other PDFs? Sure. Delete them, even? Okay. But since you probably did not give that VM access to your network, it's unlikely to be able to do anything actually beneficial to a malware writer.

...still necessitating virus scanners (and app firewalls).

Well, virus scanners are a bonus, although not a lot of use on Linux given how little malware is out there for it. VM configuration takes over much of the same job as application-level firewalls here, although the overhead tradeoffs of each approach should be examined.

Re:Virtualization doesn't work vs. file macrovirus (0, Troll)

turbidostato (878842) | about 4 years ago | (#31765738)

"If the PDF viewer is running in a separate VM container, however, what exactly do you think it's going to accomplish?"

Well, provided that "...Qubes supports secure copy-and-paste and file sharing between the AppVMs," I'll leave it open to your own imagination.

Re:Virtualization doesn't work vs. file macrovirus (1)

Jah-Wren Ryel (80510) | about 4 years ago | (#31766806)

Well, provided that "...Qubes supports secure copy-and-paste and file sharing between the AppVMs," I'll leave it open to your own imagination.

One narrowly defined point of access to the other VMs is orders of magnitude easier to secure than the way it works now. It's the security equivalent of putting all your eggs in one basket and then watching that basket really, really closely.

Re:Virtualization doesn't work vs. file macrovirus (1)

istartedi (132515) | about 4 years ago | (#31767748)

I'd rather have my machine be a spambot for an hour than lose my PDFs.

Worst case scenario for being a spambot is that I get taken off the network for a few minutes. My PDFs? Maybe it's an archive of patents I need to review, the instructions for the fromulator, or my master's thesis. Certainly, they should all be backed up, and we should all test our *restore* from backup. That's the only real security, but I don't want deletion to happen lightly.

In general, do apps really need delete permission anyway? How about just giving them change permission. In other words, something like a local svn commit. Then, if something funny happens you can just roll back. I think Macs come with something like that built in... the name escapes me.
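The "change permission, then roll back" idea described above is essentially copy-on-write versioning: an overwrite appends a new revision rather than destroying the old one. A toy in-memory sketch of that policy (not a real filesystem or svn API; all names here are invented):

```python
# Toy "change, never destroy" store: every write keeps prior revisions,
# so an app (or malware) that scribbles over a file can be rolled back.

class VersionedStore:
    def __init__(self):
        self._history = {}          # name -> list of revisions

    def write(self, name, data):
        """Overwrite is really an append; old revisions survive."""
        self._history.setdefault(name, []).append(data)

    def read(self, name, revision=-1):
        return self._history[name][revision]

    def rollback(self, name):
        """Discard the newest revision, restoring the previous one."""
        if len(self._history[name]) > 1:
            self._history[name].pop()

store = VersionedStore()
store.write("thesis.pdf", "draft 1")
store.write("thesis.pdf", "malware scribbled over it")
store.rollback("thesis.pdf")
print(store.read("thesis.pdf"))     # back to "draft 1"
```

The cost of the design is storage growth, which is why real systems (snapshots, svn, Time Machine-style backups) prune or deduplicate old revisions.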

Re:Virtualization doesn't work vs. file macrovirus (1)

99BottlesOfBeerInMyF (813746) | about 4 years ago | (#31768790)

In general, do apps really need delete permission anyway? How about just giving them change permission. In other words, something like a local svn commit.

Agreed. And Qubes does provide the ability to revert an entire VM, so even if your PDFs were all deleted or corrupted, it should not be permanent if you catch it in time.

I think Macs come with something like that built in... the name escapes me.

Macs do have a nice built-in versioning system called "Time Machine," but most users do not buy the external drive required to make use of it.

Re:Virtualization doesn't work vs. file macrovirus (0)

Anonymous Coward | about 4 years ago | (#31767782)

Hypervisors like Xen have never claimed to scan for or detect malware. The hypervisor simply provides VM isolation guarantees, so the malware will be contained and the integrity of the other VMs on the platform will remain intact.

Re:Virtualization doesn't work vs. file macrovirus (2, Insightful)

Fred_A (10934) | about 4 years ago | (#31768450)

>Still, this is a great advancement... will be interesting to see what performance impact this has.

Current machines (with the possible exception of so-called "netbooks") are so insanely fast that the performance impact of a virtualised environment doesn't matter much, save for a few very specific applications: games, graphics processing, etc., which is not what typical users require. And there are ways to lower the impact when running a demanding application. It will require a bit more RAM (if even that), but current machines are certainly adequate CPU-wise.

This is IMO one potential direction that OS architectures may have to follow in order to become more resilient in the face of a growing number of threats. I think it would be much more manageable for the average user than something like SELinux. The old permission system isn't in itself sufficient because users cannot be trusted and may "voluntarily" allow malicious applications. So sandboxing everything is reasonable.

Re:Virtualization doesn't work vs. file macrovirus (1)

BikeHelmet (1437881) | about 4 years ago | (#31770214)

will be interesting to see what performance impact this has.

Performance impact?... performance improvement!

Now virus scanners can target specific scanning methods to specific VMs! Oh sure, there's some VM overhead, but think of the efficiency other software (like firewalls and virus scanners) gains by having everything segmented like this.

Hardware sandboxing (2, Interesting)

EdZ (755139) | about 4 years ago | (#31764450)

With the proliferation of multi-core CPUs and GPU clustering, I wonder how long until VMs simply become entirely separate physical systems sitting on your motherboard.

Re:Hardware sandboxing (1)

Junior J. Junior III (192702) | about 4 years ago | (#31764726)

With the proliferation of multi-core CPUs and GPU clustering, I wonder how long until VMs simply become entirely separate physical systems sitting on your motherboard.

Yeah, I bet we could really accelerate the performance of these virtual systems if we could run them on dedicated hardware.

Re:Hardware sandboxing (3, Funny)

kgo (1741558) | about 4 years ago | (#31764864)

Then they'd just be M's... ;-)

Re:Hardware sandboxing (1)

MBGMorden (803437) | about 4 years ago | (#31766144)

Indeed. Circular logic is just surprising sometimes.

"Hey guys - wouldn't it be awesome if we setup the VM's so that each one of them had their own dedicated hardware!"

About as bright as the time one of our web guys decided to use DNS to assign all his servers a name based on their serial number - and then started asking if there was any way to assign a name to each one that was easier to remember.

Re:Hardware sandboxing (1)

tepples (727027) | about 4 years ago | (#31767496)

"Hey guys - wouldn't it be awesome if we setup the VM's so that each one of them had their own dedicated hardware!"

A blade server can do that.

About as bright as the time one of our web guys decided to use DNS to assign all his servers a name based on their serial number - and then started asking if there was any way to assign a name to each one that was easier to remember.

I believe that's called a CNAME.

Re:Hardware sandboxing (2, Funny)

meatpan (931043) | about 4 years ago | (#31767922)

This will probably happen sometime in the 1970s with IBM's z-series. Also, in the 1990s Sun might introduce their own hardware isolation through a product called Dynamic System Domains. It's hard to tell, though. I think the future is going to be rough for Sun.

Sandboxes and Jails (2, Interesting)

bhima (46039) | about 4 years ago | (#31764460)

I'd like to read a serious comparison between this and jails in FreeBSD and sandboxes in Mac OS.

I think a lot of these ideas have been around for a very long time but they are such a pain in the ass, very few people actually use them.

Re:Sandboxes and Jails (1)

vlm (69642) | about 4 years ago | (#31765224)

I think a lot of these ideas have been around for a very long time

Ya think so? At least since 1972? And VM is still in active use and under development?

http://en.wikipedia.org/wiki/VM_(operating_system) [wikipedia.org]

Of course you have to violate 104 IBM patents to run it on an emulator, but still...

Unlike copyrights and trademarks, patents expire (1)

tepples (727027) | about 4 years ago | (#31767534)

At least since 1972? [...] Of course you have to violate 104 IBM patents to run it on an emulator

Unlike copyrights and trademarks, patents expire. So anything invented before 1990 is no longer patented. Or was this supposed to be a joke?

Re:Sandboxes and Jails (1)

Just Some Guy (3352) | about 4 years ago | (#31765718)

I think a lot of these ideas have been around for a very long time but they are such a pain in the ass, very few people actually use them.

I use FreeBSD jails all the time. Want a fresh, new environment for testing? ezjail-admin create testenvironment.example.com 10.20.1.5, ssh in, and start working on it. My understanding is that you're limited in practice to several tens of thousands of jails per machine, but I haven't bumped up against that yet.

Won't work (1, Insightful)

jmorris42 (1458) | about 4 years ago | (#31764470)

This idea is an example of failing to understand the problem.

The problem with security comes from several primary sources:

1. Complexity. Too many layers with poorly understood security implications. This lady might actually understand the monster she spawned, but no admin trying to implement it will understand all of the corner cases. See SELinux.

2. Shoddy coding. So this gets tossed over the wall and will (assuming it is to matter) be completed by people who don't really understand it. Unless this one proves an exception, it won't ever get a proper top-to-bottom security audit of the codebase. So it will have all the bugs in Linux and Xen, plus the hardware bugs in the virtualization layer, and then it will add a whole new set of bugs to exploit.

And this one adds the fact that it doesn't even try to secure the apps; it tries to stop misbehaving apps (like SELinux does) from accessing things they shouldn't. If history shows anything, giving an attacker any ability to run code locally gives them all they need to eventually leverage it into root.

Re:Won't work (4, Insightful)

Archangel Michael (180766) | about 4 years ago | (#31764796)

1) Any system simple enough that anyone can use it, is either a toaster, or won't be useful in any customized way.

2) Coding doesn't need to be "shoddy" to be a security risk. It simply needs to miss the edge cases nobody thought of when writing the code. If you make the code complicated enough and run enough checks, it becomes a complicated mess that nobody wants to use.

The problem with security is one of balancing risk against the amount of protection built into the system. Back in the DOS days, I'm sure DOS was insecure on many, many levels; however, because it was standalone, the security of "networking" wasn't even considered.

However, the #1 security risk with computers isn't "code" or "programs" or hackers or whatever; the BIGGEST problem is social engineering, for which there is no fix other than "Stupid should hurt".

When a web dialog box can mimic a system dialog box saying "Your Computer is Infected CLICK HERE to fix it", which downloads and installs Antivirus 2010 crapware, the problem isn't Firefox, Windows or anything any programmer can fix. PEBKAC, PICNIC and 1D10T errors aren't fixable by programmers.

And if you had to fix these problems you'd realize that hackers are spending more time on social engineering attacks to get their viruses, trojans, and other malware onto computers than on traditional methods.

Re:Won't work (1)

jmorris42 (1458) | about 4 years ago | (#31765376)

> When a web dialog box can mimic a system dialog box saying "Your Computer is Infected CLICK HERE to fix it",
> which downloads and installs Antivirus 2010 crapware, the problem isn't Firefox, Windows or anything any
> programmer can fix. PEBKAC, PICNIC and 1D10T errors aren't fixable by programmers.

Yes it is. Firefox should never set the executable bit on a download. (On Windows I guess it could neuter executables with an extra extension?) I don't care how 'convenient' it might be; just don't allow it. Second, a properly configured Linux machine isn't subject to that sort of attack because we use signed packages. If the scammer can get a user to click past enough dialog boxes to install a bogus repo or accept an unsigned package, there really isn't much to be done to help that user. None of those protections exist for Windows or Mac users, which is why they can get 0wned with one bad click. We get 90+% of the security of Apple's closed i* DRM with 0% of the evil.

> "Stupid should hurt"

I agree in principle, but submit that if almost two decades of Windows haven't hurt enough to inspire change, it is hard to imagine what would.
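The executable-bit point a few comments up is easy to demonstrate with a few lines of stdlib Python: a file written through ordinary file APIs never comes out executable, so something has to flip the x bit deliberately. The filename here is a throwaway, not anything from the thread:

```python
# A file created with ordinary write APIs gets at most mode 0666 (minus
# umask): the execute bits are never set unless someone sets them.

import os
import stat
import tempfile

tmpdir = tempfile.mkdtemp()
download = os.path.join(tmpdir, "totally-safe.sh")
with open(download, "w") as f:
    f.write("#!/bin/sh\necho pwned\n")

mode = os.stat(download).st_mode
print("executable?", bool(mode & stat.S_IXUSR))   # False: can't be run directly

os.chmod(download, mode | stat.S_IXUSR)           # the step the scammer needs
print("executable?", os.access(download, os.X_OK))
```

That is why a browser refusing to ever perform the chmod step keeps a "double-click the download" attack one deliberate user action further away.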

Re:Won't work (1)

Lunix Nutcase (1092239) | about 4 years ago | (#31766086)

Second, a properly configured Linux machine isn't subject to that sort of attack because we use signed packages.

Oh really? From here [slashdot.org]:

Pwnie for Mass 0wnage

Awarded to the person who discovered the bug that resulted in the most widespread exploitation or affected the most users. Also known as ‘Pwnie for Breaking the Internet.’

Red Hat Networks Backdoored OpenSSH Packages (CVE-2008-3844)
Credit: unknown

Shortly after Black Hat and Defcon last year, Red Hat noticed that not only had someone backdoored the OpenSSH packages that some of their mirrors were distributing, but managed to sign the packages with Red Hat's own private key. Instead of revoking the key and releasing all new packages, they instead just updated the backdoored packages with clean copies, still signed by the same key, and released a shell script to scan for the MD5 checksums of the affected packages. What makes this eligible for the "mass0wnage" award is that nobody is quite sure how many systems were compromised or what other keys and packages the attackers were able to access. With very little public information available, the real casualty was the public's trust in the integrity of Red Hat's packages.

I suggest you make sure to read the bolded part a few times.

Re:Won't work (1)

jmorris42 (1458) | about 4 years ago | (#31766524)

You might want to read the details a bit more than the overly sensationalized version at pwnie-awards. None of the bogus packages have been seen in the wild. That incident was about as close of a near miss as you can get without a kaboom! but there was in fact no kaboom!. Will something like it happen eventually? Probably. Lt. Commander Susan Ivanova said it best, "Sooner or later, BOOM!" But it doesn't happen on a daily basis.

Re:Won't work (0)

Anonymous Coward | about 4 years ago | (#31767592)

BWA HAHAHAHA! That's the best you can come up with, troll? One distro got its repository owned once. Not a single tampered package was found in the wild. Good thing there are literally hundreds of different versions of Linux, virtually assuring that an exploit like this will only affect a very small number of the total systems out there. Contrast that with the monoculture that is Windows: Conficker, Blaster, Code Red; the list goes on and on.

Now crawl back under your bridge, little troll.

Re:Won't work (1)

tepples (727027) | about 4 years ago | (#31767842)

Second, a properly configured Linux machine isn't subject to that sort of attack because we use signed packages.

Signed by whom? Allow self-signed packages and malware authors can self-sign. Reject self-signed packages and you can't run anything that hasn't been packaged for your distro, such as new software written by a user of a different distro or proprietary commercial software.

Re:Won't work (0)

Anonymous Coward | about 4 years ago | (#31764844)

Your comment is an example of failing to understand the solution.

Re:Won't work (1)

Red Flayer (890720) | about 4 years ago | (#31764904)

And this one adds the fact it doesn't even try to secure the apps, it tries to stop misbehaving apps (like SELinux) from accessing things it shouldn't

Well, that's the point. If an app is sandboxed, it doesn't matter if that app is insecure... your OS won't get hosed by that app.

If history shows anything, giving an attacker any access to run code locally gives them all they need to leverage it into root eventually.

This is an implementation issue, not a theoretical issue. If the virtualized locations are truly sandboxed, it's not an issue at all. I think what history shows is that attempts to stratify permissions don't work well, because there are too many workarounds in the interest of operability. It's these workarounds that get abused (aside from the occasional chunk of shoddy code design that accidentally permits privilege escalation).

But I still don't think this is completely relevant to the concept of sandboxing via virtualization. Who cares if they escalate to root in one of the virtual machines? Nothing of consequence is done in that machine, so no important data is vulnerable. Do all your banking, for example, from a different virtual machine, and aside from the usual user-stupidity issues (hello, phishing site! I'll gladly give you my login details and passwords!), the banking information is secure from any exploits that may exist in a different virtual machine.

Maybe I'm completely mistaken here... if so, please help me understand... but how would root access in one of the virtual machines allow the attacker to have root access on any of the other machines?

*Please note I'm referring to virtual machines when likely that isn't the right term. I'm not an expert on this, and I'm hoping I learn something from this exchange.

Re:Won't work (1)

jmorris42 (1458) | about 4 years ago | (#31765702)

> Well, that's the point. If an app is sandboxed, it doesn't matter if that app is insecure... your OS won't get hosed by that app.

Except history tells us it never works that way in the real world. Java? Nope. BSD Jails? Nope. Virtualization? Nope. Containers? Nope. All promised to build a perfect isolated subsystem that apps couldn't escape from. All have had exploits. It helps, but it can also harm by leading to a false sense of security that causes people to do things they never would have done without that belief that it was safe. It was faith in sandboxes that made the idea of everything becoming a carrier of executable content possible. We would have never accepted the notion of every random webpage we visit being chockablock full of potentially evil executable content (Javascript, Flash, JAVA, .NET, PDF) served up by random ad networks that loads and runs on our side of the link if it weren't for the false promise that it could be safely isolated.

> This is an implementation issue, not a theoretical issue.

Here in reality we only deal with implementations, not theory. EVERY sandboxing scheme attempted to date has failed, some more messily than others, some more publicly than others, but ALL have failed. A 100% failure rate after dozens of independent implementations says either the idea is flawed or it requires tools/skills we do not currently possess. Either way, trusting sandboxing is asking for trouble. I remember when an email that could infect your computer was an April Fools' joke. Microsoft and Netscape Communications made the joke reality.

Re:Won't work (1)

Red Flayer (890720) | about 4 years ago | (#31770170)

Thanks for the fairly level-headed response, they seem to be few and far between on slashdot nowadays.

EVERY sandboxing scheme attempted to date has failed, some more messy than others, some more publicly than others, but ALL have failed.

Hmmm... I wasn't aware that virtualization was a security failure, and that every instance of VM implementation failed to maintain security of the host OS. Do you know where I might get some good reading on the subject?

Re:Won't work (1)

Lord Ender (156273) | about 4 years ago | (#31764962)

Your post is an example of failing to understand information security.

Security practitioners have accepted the fact that it is infeasible to ever expect that all applications be free of security holes. It is also unwise to insist on the fantastically-higher expense of using dedicated hardware rather than virtualization for most applications. Because of this, we have adopted a strategy of "defense in depth" whereby we layer multiple countermeasures to reduce the probability of successful exploitation.

A security control which stops 99% of malware from taking hold, yet allows 1% to do so, is not considered "flawed," it is considered very successful. Layer such things together and the chance of a given attack working becomes 1% of 1% of 1% and so-on.

There will always be bugs. Accept reality and roll with the punches.
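The compounding above is easy to sanity-check in a few lines. This is a toy model with one big assumption baked in: the layers must fail independently, which real defenses rarely do perfectly, so treat the result as an upper bound on the benefit.

```python
# If each independent layer lets an attack through with probability p,
# then n stacked layers let it through with probability p**n.
# Correlated layers (e.g. two sandboxes sharing a kernel bug) compound
# far less nicely than this.

def breach_probability(p_per_layer: float, layers: int) -> float:
    return p_per_layer ** layers

for n in (1, 2, 3):
    print(f"{n} layer(s): {breach_probability(0.01, n):.8f}")
```

Three 99%-effective layers drop the success rate to roughly one in a million attacks, which is the "1% of 1% of 1%" figure above.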

Re:Won't work (1)

moderatorrater (1095745) | about 4 years ago | (#31765844)

This adds another layer of security on top of Linux which makes it more secure than it would otherwise be.

If history shows anything, giving an attacker any access to run code locally gives them all they need to leverage it into root eventually.

Perhaps, but this is another layer they'll have to get through before they can root the machine. So now they have to exploit the program, gain access to the VM that it's running on, and then jump from that VM to the host OS. It's not a perfect solution and it's almost certainly not impenetrable, but that doesn't keep it from being a useful tool.

Singularity (2, Interesting)

organgtool (966989) | about 4 years ago | (#31764564)

This approach seems similar to the one taken by Microsoft in their Singularity OS [wikipedia.org] . I wonder if the issue of context switching will become an issue and if it does, how will it be addressed.

Re:Singularity (0)

Anonymous Coward | about 4 years ago | (#31764714)

... the issue of context switching will become an issue

Well, is an issue and issue? That is the question, it seems.

Hardly a victory... (4, Interesting)

Jahava (946858) | about 4 years ago | (#31764634)

Let me begin by saying that this sounds like a truly interesting approach to security. Virtualization technology defines very clear hardware-enforced boundaries between software domains. In the standard case, those domains contain different operating systems, each of which is provided privilege-level-based sub-domains. In this particular case, each domain is dedicated to running sets of user-space applications, and the hardware boundary is used for userspace isolation, as opposed to virtual-machine OS isolation.

So my "home" domain is infected, but my intellectual property is in my "work" domain. The virtualization boundary means that a virus can get Ring 0 access and still not be able to touch those IP files. Hurray ... except wait. There must be an interface between the "home" domain and the hypervisor, else things like copy-and-paste and hardware resource arbitration can't work. Here's what some infection paths would look like:

  • Standard XP Install: (Firefox)
  • Standard Vista / Linux Install: Firefox -> (Kernel)
  • Qubes Install: Firefox -> Home Kernel -> (Hypervisor)

Maybe the paths can be locked down better, but a vulnerability is a vulnerability. It's clearly harder for a virus to get full control, but that's just throwing another obstacle in the way. If one is bad, and two is better, maybe three is even better, but nothing's perfect. Why is the domain-to-hypervisor path considered any more secure than the userspace-to-kernel path? If it's not, you're just adding more complexity, which could mean more potential for vulnerabilities! If you're locking down privilege boundaries, kernels like FreeBSD (jails) and even userspace execution environments like Java (JVM) have been working on that for years.

It's cool, but I doubt it will be game-changing.

Re:Hardly a victory... (0)

Anonymous Coward | about 4 years ago | (#31764854)

I think the reason it exists is that while the other solutions are less complex and undoubtedly better technically, they are conceptually harder for end users to understand and implement. Virtualization, on the other hand, is easier to understand, test, and implement. You know that it is working and can understand how it works easily. People forget the human component and the laziness component in computer security. It is like password security: a long password is better than a short one, but if you make a user change it every 30 days, then chances are it is on a sticky note posted to the monitor. What good is that? No security at all.

Re:Hardly a victory... (1)

moderatorrater (1095745) | about 4 years ago | (#31765954)

Why is the domain-to-hypervisor path considered any more secure than the userspace-to-kernel path? If it's not, you're just adding more complexity, which could mean more potential for vulnerabilities!

It's considered more secure in the same way that it's more secure to have a firewall instead of just trying to secure the applications. As long as they can secure the interfaces between the host OS and the VMs, they have security. If they don't secure that interface, then they're left with no more vulnerabilities than they had before.

Re:Hardly a victory... (1, Insightful)

Anonymous Coward | about 4 years ago | (#31767732)

The hypervisor is simpler and therefore easier to audit for issues (or even prove correct).

With KVM in the kernel (1)

h4rr4r (612664) | about 4 years ago | (#31764782)

Since KVM is in the mainline and Red Hat is now supporting it 100%, this seems like a bad time to start anything on Xen.

To which I say good riddance, virtualization is just another app and kvm gets this right.

Re:With KVM in the kernel (1)

KagatoLNX (141673) | about 4 years ago | (#31764966)

With paravirtualization, Xen works the same way. The difference is that Xen is the OS and Linux is the app.

You may not like Xen as an OS, but it does have some very nice qualities that are hard to deliver on Linux alone.

Re:With KVM in the kernel (1)

diegocg (1680514) | about 4 years ago | (#31765004)

SUSE will also support KVM in SLES11 SP1 and expects [slideshare.net] that long term it will "become equivalent to Xen". Ubuntu and Fedora also support KVM. Xen doesn't care about what distros do (they don't care about getting all their code merged in mainline either); they seem to think that they can ignore what mainstream OSes do, just like VMware. I suppose they will die some day; I'm not using third-party software if I can get the same functionality with the OS.

Re:With KVM in the kernel (0)

Anonymous Coward | about 4 years ago | (#31765242)

Xen kernel developers DO care about mainline Linux!

After the 'original' Xenlinux patches for 2.6.18 were rejected from upstream Linux (they were considered too intrusive because they touched too much generic x86 non-Xen code), there was a lot of discussion about how to get the Xen code merged into Linux. VMware at that time wanted a common framework for running paravirtualized Linux guests on hypervisors, including their own, so development of the common paravirt_ops (pvops) framework started. It took some time, but the pvops framework finally got included in upstream Linux. Xen pvops domU (guest) support was merged into upstream Linux in 2.6.24, and it has been improved in every Linux release since; the latest releases actually have pretty stable support for Xen PV guests. For example, Fedora distros provide Xen PV guest/domU support using the mainline Linux pvops feature.

Xen kernel developers have been busy rewriting and cleaning up the dom0 support using the pvops framework. That work is now nearly ready, and the Xen pvops dom0 patches will be sent to upstream Linux shortly.

Xen 4.0.0 released today actually uses this pvops dom0 kernel as a default - based on Linux 2.6.31 or 2.6.32 (the long-term maintained kernel). There's pvops dom0 also for 2.6.33 and 2.6.34.

See these wiki pages for more information: http://wiki.xensource.com/xenwiki/XenParavirtOps and http://wiki.xensource.com/xenwiki/XenDom0Kernels

Re:With KVM in the kernel (1)

styrotech (136124) | about 4 years ago | (#31768408)

Since KVM is in the mainline and redhat is now supporting that 100% this seems like a bad time to start anything on Xen.

To which I say good riddance, virtualization is just another app and kvm gets this right.

They explain that in their FAQ and architecture PDF. One of the reasons is that KVM doesn't allow moving (e.g.) networking and I/O drivers into separate unprivileged guests.

Remus (2, Informative)

TheRaven64 (641858) | about 4 years ago | (#31764800)

The Remus stuff in Xen is very cool. A couple of days ago there were some posts about HP's NonStop stuff as an example of something you could do with Itanium but not with commodity x86 crap. Remus means that you can. It builds on top of the live migration in Xen to keep two virtual machines exactly in sync.

Computers are deterministic, so in theory you ought to be able to just start two VMs at the same time, give them the same input, and then see no difference between their state later on. It turns out that there are a few issues with this. The most obvious is ensuring that they really do get the same input. This means that they must handle the same interrupts, get the same packets from the network, and so on. Anything that is used as a source of entropy (e.g. the CPU's time stamp counter, jitter on interrupts, and so on) must be mirrored between the two VMs exactly. This was already possible with Marathon Technology's proprietary hypervisor on x86, but is now possible with Xen.

As with the live migration, you can kill one of the VMs (and the physical machine it's running on) and not even drop network connections. This leads to some very shiny demos.
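The trick that makes this safe can be sketched in toy form (class names invented here; this is nothing like the real Xen implementation): the primary buffers all outbound packets and releases them only after the backup acknowledges the checkpoint that produced them, so the outside world never observes state the backup doesn't already have.

```python
# Toy model of Remus-style checkpoint replication: guest execution
# mutates state and produces output, but output stays buffered until
# the matching checkpoint has reached the backup.

class Backup:
    def __init__(self):
        self.state = {}

    def receive(self, snapshot):
        self.state = snapshot            # atomically adopt the checkpoint

class Primary:
    def __init__(self, backup):
        self.state = {}
        self.out_buffer = []             # packets held until checkpoint is acked
        self.backup = backup

    def run(self, key, value, packet):
        self.state[key] = value          # guest execution mutates state...
        self.out_buffer.append(packet)   # ...and its output is buffered, not sent

    def checkpoint(self):
        # Ship a copy of the current state; once acked, release buffered output.
        self.backup.receive(dict(self.state))
        released, self.out_buffer = self.out_buffer, []
        return released                  # packets now visible to the world

backup = Backup()
primary = Primary(backup)
primary.run("balance", 100, "pkt-1")
sent = primary.checkpoint()              # "pkt-1" released, backup in sync
primary.run("balance", 50, "pkt-2")      # not yet checkpointed
# If the primary dies here, "pkt-2" was never sent, so the backup's state
# is consistent with everything the outside world has seen.
```

The cost, of course, is that output latency is bounded below by the checkpoint interval, which is why Remus checkpoints tens of times per second.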

Oh, and I should probably end this post with a gratuitous plug for my Xen internals book [amazon.co.uk]

Functionality-based Application Confinement (1)

z.cliffe.schreuders (1698064) | about 4 years ago | (#31764878)

Looks like a nice approach to program isolation. A system which was in some ways similar to Qubes was developed for Windows, known as WindowBox.

My research takes another approach: program restriction. Systems such as SELinux and AppArmor allow precise policies to define the types of actions and resources which are made available to each application. However, the finer the granularity of privilege assigned, the more detailed and complex policies become.

The system I created for my PhD research, FBAC-LSM, restricts applications based on the functionalities they perform, e.g. Web Browser, Email Client, Image Viewer, etc. The programs then cannot act beyond the things they need to do, and the damage which can be caused by vulnerabilities and malware is severely limited. Basing policy on functionalities means that policy is easier to construct (since it is based on high-level abstractions) than in other systems based on fine-grained restrictions.

The advantage compared to isolation systems such as Qubes is that normal workflows (where a user creates, views, edits and shares the same files with many different apps) can be used while each application is restricted to the privileges it needs. FBAC-LSM is in development and is available as free open source software: http://schreuders.org/FBAC-LSM [schreuders.org]
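The functionality-based idea can be sketched in a few lines (the functionality names and privilege strings below are invented for illustration; FBAC-LSM's actual policy language differs): each functionality expands to a bundle of privileges, and an application is granted only the union of the bundles assigned to it.

```python
# Invented, minimal model: map high-level functionalities to privilege
# bundles, then assign functionalities (not raw privileges) to apps.

FUNCTIONALITIES = {
    "web_browser":  {"net_connect", "read_downloads", "write_downloads"},
    "image_viewer": {"read_pictures"},
    "email_client": {"net_connect", "read_mail", "write_mail"},
}

POLICY = {
    "firefox": ["web_browser"],
    "eog":     ["image_viewer"],
}

def allowed(app: str, privilege: str) -> bool:
    """True if any functionality granted to `app` carries `privilege`."""
    granted = set().union(*(FUNCTIONALITIES[f] for f in POLICY.get(app, [])))
    return privilege in granted
```

So a compromised image viewer simply has no network privilege to abuse: `allowed("eog", "net_connect")` is False, while `allowed("firefox", "net_connect")` is True, and the policy author never had to enumerate individual files or syscalls.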

Re:Functionality-based Application Confinement (1)

oakgrove (845019) | about 4 years ago | (#31767926)

Systems such as SELinux and AppArmor allow precise policies to define the types of actions and resources which are made available to each application. However, the finer the granularity of privilege assigned, the more detailed and complex policies become.

I don't see how this is really a problem though, particularly with AppArmor. When you want to create a profile, you just start the app in profile mode, run it through its usual paces, then hit save. The profile runs automatically when AppArmor is restarted. That seems like an almost point-and-shoot level of ease for enabling very robust security, which anybody who's comfortable with the command line can manage. SUSE even has a GUI for it. With Ubuntu, a simple apt-get install apparmor-profiles installs ready-to-go profiles for several commonly used programs, including Firefox.

Sorry for the tortured nature of this post. I banged it out on my phone.

Xen and EC2 (1)

Aggrajag (716041) | about 4 years ago | (#31765180)

I've had a lot of problems with EC2 that upgrading to at least 3.3 would fix. I just hope Amazon would start thinking about upgrading EC2 or migrating it off Xen, as they seem to have gotten stuck with 3.1 and I've seen nothing that indicates there's going to be an upgrade.

Re:Xen and EC2 (0)

Anonymous Coward | about 4 years ago | (#31765436)

We're switching to kvm. Development has ceased on the old xen version.

Interesting but the problem is the end user. (3, Insightful)

spacepimp (664856) | about 4 years ago | (#31765460)

The real issue still resides: the end users (PEBKAC). Take my father, for example. Sure, you have a Qube for banking, a Qube for work, and a Qube for home use. Now the home-use one, where he does his "magic" or whatever he does to infect/taint/destroy any PC I put in front of the man, gets entirely infected, spywared/malwared/chunked and muddled. So he can't get to his phishing emails about how to make millions on the internets by getting the diamonds out of Namibia. He can't do that from the infected Qube. He'll then go up the chain to his private banking Qube to install his makingmillions.exe so it will work again. Long story short: some people cannot help being victims. I'd give the man Linux, but he always finds a reason it's keeping him from being successful... I know that by keeping these Qubes sandboxed it will be harder for the taint to get in, but it will find a new way to find my father.

Re:Interesting but the problem is the end user. (0)

Anonymous Coward | about 4 years ago | (#31766610)

His home Qube won't get unusably gunked up if it resets itself every time he turns it off.

Properly done virtualization is the equivalent of wiping your computer and restoring from a clean backup every time you start a program. Except it doesn't take any time.

OpenSolaris Immutable Service Containers (0)

Anonymous Coward | about 4 years ago | (#31765518)

Nothing new. This is basically the Immutable Service Containers architectural pattern, i.e. a secure execution container for each service or application: everything denied by default, open up only what's needed. We do this with OpenSolaris today.

Joanna Rutkowska actually a then-man (0)

Anonymous Coward | about 4 years ago | (#31765630)

just some trivia information, it is a _guy_ who had his gender changed some time ago. actualy he/she looks quite cute :) some more info here: http://www.rutkowska.yoyo.pl/ [rutkowska.yoyo.pl] (to get rid of banner there is arrow in top right corner)

FreeBSD 8.0 w/ ZFS + Jails + VirtualBox (0)

Anonymous Coward | about 4 years ago | (#31765882)

I have been experimenting with virtualization for a couple years now. The best solution I have found is to run a FreeBSD 8.0 host. All the desired BSD compatible services run within a jail. Services that require Windows or Linux run in Headless VirtualBox instances.

Oh good, more useless abstraction ... (1)

BitZtream (692029) | about 4 years ago | (#31766126)

Great ... another 'OS' that has its own new set of problems ... plus before it's actually useful in the real world you'll have to come up with ways to give it all the speed, power, and flexibility of the OSes we use now, which it doesn't have ... and by the time you add that back you'll end up right where you are now.

Apps and data are useless on an island. When you're on an island you're safe from attackers.

To actually do something useful however, data needs to move on and off the island, at which point, you're right back to square one more or less. The only difference is now you've got an OS ... that loads another OS ... that loads (in a virtual process space) an application.

I realize that abstraction can be a good thing, but considering that all that's being done here is literally adding another layer of abstraction with no additional benefits ... it would seem the right thing to do was just make a few tiny mods to the existing OSes rather than create a whole new one which is basically a copy of existing ones with an ever-so-slightly different core.

Let's face it, 'hardware virtualization support' is nothing more than a newer/slightly different implementation of what we've already had in processors for years. We've already had 'hardware virtualization' for years; this is just one more layer on top of the already existing support that does the same thing.

Why the hell would you assume it's going to be different now? Just because we've added more abstraction and code, it's going to be safer? My experience shows the exact inverse to be true when you add code and complexity :/

Stop calling them 'hypervisors'; it's a freaking OS. Except, unlike what we normally think of as an OS with a kernel and a software-defined API to get something done, you have an OS with a kernel and software that emulates a hardware interface so it can run software with a kernel and a software-defined API to get something done. Perhaps just fix the software-defined API to isolate properly, because by the time you make your hypervisor OS as useful as the traditional OS, it's not going to be so isolated.

Re:Oh good, more useless abstraction ... (1)

david_thornley (598059) | about 4 years ago | (#31767474)

What this virtualization will do, if it works, is prevent applications from making any unauthorized changes to the OS. On my box at home, if I run a program, I'm gambling that it can't somehow get root access and mess with my system files. If it's running in a virtual machine, and can't get out of it, it can trash the virtual machine all it likes; I'll just discard the VM when I've run the program and spin up a new one for the next program.

It isn't perfect, of course, even if it can prevent the app from affecting anything outside the VM sandbox. I do make changes to my own system when I like, after all, and malware can piggyback on that. However, in today's environment, being able to run an application and not allow it to change the system unless I let it is valuable.

IBM's MVS followed these same ideas (1)

TheLoneGundam (615596) | about 4 years ago | (#31766698)

IBM's virtual storage OSes (OS/VS1 through the current z/OS all share a lot of common componentry), in conjunction with the hardware architecture, have similar ideas. Each system service runs in its own address space. You have to be an authorized system service to communicate between address spaces; if you try otherwise, your program fails. If you try to store outside of your virtual address range, your program fails.

To become an authorized program requires permission in one way or another from system administrators: you can create a program marked "authorized," but if it isn't loaded into storage from an authorized file (controlled by admins), it will not execute in an authorized state.

In addition, each hardware page has a storage protect key. Application programs run in key 8; important parts of other system services run in different keys; "the system" runs in key 0. To change keys requires that you be an authorized program. The astute have already figured out where this is going: try to use storage that's not in your key, and your program fails.

The architecture not only protects the system from applications that behave poorly; the storage key mechanism also protects authorized parts of the system from clobbering other parts of the system. This setup has been providing high, and increasing, levels of availability since the advent of System/360 in the 1960s.

Could VMs guard each other? (0)

Anonymous Coward | about 4 years ago | (#31767040)

If all those AppVMs are instances of the same system, couldn't they also check each other's integrity? Even if a virus manages to infect the hypervisor, it will be really hard to infect all AppVMs at once.

I guess they don't watch the Mythbusters (1)

ahazelwood (210387) | about 4 years ago | (#31767380)

From the article:

  "These mechanisms finally move the bull (untrusted data) from the china shop (your data) to the outside where it belongs (a sandbox)."

The Mythbusters showed that bulls will try to miss china if it's at all possible. http://mythbustersresults.com/episode85

Dirty bedsheets please! (1)

t0p (1154575) | about 4 years ago | (#31768162)

"Qubes" is a frikken terrible name. Something like "stained bedsheets" would be much better (Linen-XXX, geddit?). But I suppose a sense of humour is too much to expect nowadays.