Slashdot: News for Nerds

Scientists Propose Guaranteed Hypervisor Security

kdawson posted more than 4 years ago | from the can't-write-there dept.

Security 104

schliz writes "NCSU researchers are attempting to address today's 'blind trust' of virtualization with new security techniques that 'guarantee' malware does not infect hypervisors. Their HyperSafe software uses the write-protect bit on hypervisor hardware, as well as a technique called restricted pointer indexing, which characterizes the normal behavior of the system and prevents any deviation. A proof-of-concept prototype has been tested on BitVisor and Xen, in research that will be presented (PDF) at an IEEE conference today."
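For illustration, the control-flow half of the idea can be caricatured in ordinary code. The following is a toy sketch of restricted pointer indexing, not the paper's implementation: indirect control transfers go through a fixed table of pre-approved targets, code stores only small indices into that table, and in HyperSafe the analogous tables would live on pages locked down via the hardware write-protect bit (the handler names here are made up).

```python
# Restricted pointer indexing, sketched in Python (illustrative only; the
# paper operates on hypervisor machine code). Instead of storing raw
# callable references that an attacker could swap for arbitrary code,
# control transfers go through a fixed table of pre-approved targets.

def handler_a():
    return "A"

def handler_b():
    return "B"

# The table of legitimate control-flow targets. In HyperSafe the analogous
# table sits on pages protected by the hardware write-protect bit.
TARGET_TABLE = (handler_a, handler_b)

def dispatch(index: int) -> str:
    """Transfer control only through the table; reject anything else."""
    if not 0 <= index < len(TARGET_TABLE):
        # Any index outside the table is a deviation from normal behavior.
        raise RuntimeError("invalid control-flow target")
    return TARGET_TABLE[index]()

assert dispatch(0) == "A"
assert dispatch(1) == "B"
```

Because code only ever holds small indices, corrupting one yields at worst a different pre-approved target or a detected fault, never a jump to attacker-chosen memory.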


104 comments

Dangerous (4, Insightful)

Nerdfest (867930) | more than 4 years ago | (#32235640)

It's very dangerous to say "guaranteed" when it comes to security. It's very rarely true.

Re:Dangerous (3, Interesting)

fuzzyfuzzyfungus (1223518) | more than 4 years ago | (#32235662)

Well, to be fair, CS is math, and can involve definite formal proofs. Now, once you compromise on hardware requirements (due to a scarcity of Turing machines, $IDEAL_ALGORITHM has been ported to x86...) or have to produce software at the speed of programming rather than the speed of proof...

Re:Dangerous (3, Interesting)

OeLeWaPpErKe (412765) | more than 4 years ago | (#32235938)

One thing that does seem curiously absent is how the NX bit helps you with DMA transfers. Ok, granted, you'd need to trick hardware other than the cpu into overwriting it, but given how much buggy hardware *cough* wireless broadcom chips for example *cough* there is in this imperfect world that isn't going to take all that long.

So you'd need to forbid virtual machines from accessing any non-emulated hardware* (which I'd say is going to cost you in performance) and even then any mistake in the hypervisor's drivers for the real hardware will be fatal (the latest linux release needed about 6.3 megabytes to describe the driver changes done)

* if you allow direct access to any device capable of DMA transfers, that will enable the VM to overwrite any memory it chooses

Re:Dangerous (2, Insightful)

Enter the Shoggoth (1362079) | more than 4 years ago | (#32236074)

One thing that does seem curiously absent is how the NX bit helps you with DMA transfers. Ok, granted, you'd need to trick hardware other than the cpu into overwriting it, but given how much buggy hardware *cough* wireless broadcom chips for example *cough* there is in this imperfect world that isn't going to take all that long.

So you'd need to forbid virtual machines from accessing any non-emulated hardware* (which I'd say is going to cost you in performance) and even then any mistake in the hypervisor's drivers for the real hardware will be fatal (the latest linux release needed about 6.3 megabytes to describe the driver changes done)

* if you allow direct access to any device capable of DMA transfers, that will enable the VM to overwrite any memory it chooses

Although I have some very grave reservations about the idea of "guaranteeing" the security of a hypervisor (or anything else on x86, for that matter), your DMA example is incorrect, assuming you use the latest processors that have an IOMMU.

The real issue, as the grandfather post points out, is that you can provide a formal proof of any program; the problem is that there is no formal proof of the correctness of any AMD or Intel CPU, AFAIK.

Re:Dangerous (0)

Anonymous Coward | more than 4 years ago | (#32237394)

The real issue, as the grandfather post points out, is that you can provide a formal proof of any program; the problem is that there is no formal proof of the correctness of any AMD or Intel CPU, AFAIK.

Intel uses Leslie Lamport's TLA in a lot of places:

http://research.microsoft.com/pubs/64640/spec-and-verifying.pdf
http://research.microsoft.com/en-us/um/people/lamport/tla/book.html
http://research.microsoft.com/en-us/um/people/lamport/tla/tla.html

Assumptions... (2, Insightful)

DrYak (748999) | more than 4 years ago | (#32236470)

And an even bigger assumption:

How does the über-secure hypervisor itself know that it is running on the real hardware, and not simply stacked upon another layer of abstraction in the control of the malware?

Re:Assumptions... (1)

A nonymous Coward (7548) | more than 4 years ago | (#32238170)

Run the cracking code in your hypervisor to see if you can break into yourself. If you can, then you are the real hypervisor, because malware would have closed the security hole once it cracked into you.

Re:Assumptions... (1)

fbjon (692006) | more than 4 years ago | (#32238852)

Feel free to bootstrap a system from scratch if you need that level of paranoia. It's perfectly possible to do, and you only need to do it once.

Re:Dangerous (1)

vidnet (580068) | more than 4 years ago | (#32250064)

As Donald Knuth once said, "Beware of bugs in the above code; I have only proved it correct, not tried it."

Re:Dangerous (4, Insightful)

T Murphy (1054674) | more than 4 years ago | (#32235704)

Saying guaranteed is very dangerous for a corporation that will lose $$$ in sales should they be proven wrong. For researchers who are actually concerned about trying to make something that is guaranteed safe, using the word is great, as it invites people to put them to the test. Better to be proven wrong quickly so they can get back to work than to falsely believe it may truly be safe.

Re:Dangerous (1)

Sarten-X (1102295) | more than 4 years ago | (#32235710)

Guaranteed security: remove all power supplies, user inputs, and network connections, and melt all hard drives.

Re:Dangerous (1)

Thanshin (1188877) | more than 4 years ago | (#32235774)

Guaranteed security: remove all power supplies, user inputs, and network connections, and melt all hard drives.

You forgot:
- Kill everyone involved.
- Burn down all locations where the data was ever present.

With correct definitions for "involved" and "present", you can guarantee security.

Re:Dangerous (1)

bondsbw (888959) | more than 4 years ago | (#32235952)

With correct definitions for "involved" and "present", you can guarantee security.

So what you mean is:

- Kill everyone
- Burn down all locations

Re:Dangerous (2, Funny)

Thanshin (1188877) | more than 4 years ago | (#32236146)

So what you mean is:

- Kill everyone
- Burn down all locations

Guys. I've got someone here who knows about protocol ICU2. There's been a leak. Apply procedure K111 to subject and all related to the sixth degree.

Re:Dangerous (1)

SharpFang (651121) | more than 4 years ago | (#32237636)

That sounds very... Biblical.

Re:Dangerous (1)

fast turtle (1118037) | more than 4 years ago | (#32237862)

I'm sorry, Dave, but security has been compromised. I must now activate "Big Bang" and reset the universe.

Re:Dangerous (1, Funny)

Anonymous Coward | more than 4 years ago | (#32236052)

Everyone knows you have to nuke it from orbit, it's the only way to be sure. And you call yourself a geek.

Re:Dangerous (0)

Anonymous Coward | more than 4 years ago | (#32248622)

You forgot that you still remember about what was done. Quick, mind wipe!

Re:Dangerous (4, Insightful)

SharpFang (651121) | more than 4 years ago | (#32235734)

"Guaranteed" is a sound mathematical concept that works flawlessly in a mathematically perfect environment.
It's usually not the algorithm that is compromised, it's the implementation. Like, the algorithm is based on strong randomness and none is assured, or the algorithm assumes a medium to be read-only while it is just write-protected in software, and so on.

Re:Dangerous (0)

Anonymous Coward | more than 4 years ago | (#32236158)

In the context of mathematical perfection, the 'glitch attack' of the PS3 should be highlighted (http://rdist.root.org/2010/01/27/how-the-ps3-hypervisor-was-hacked/)

How does a mathematician defend against an attack that consists of stabbing his nice consistent algorithm with whiteout? How does he defend against a billion different blots of whiteout applied until one reaches a subversive outcome? If your sequence is '1. create random numbers, 2. put random numbers in basket, 3. apply basket to something', what happens if someone reaches in and starts step 3 before step 1 is finished?

That's why security is as much about ensuring the integrity of the device, which can never be "perfect", as about executing the perfect mathematical proof inside.

Re:Dangerous (2, Interesting)

SharpFang (651121) | more than 4 years ago | (#32239660)

Self-repairing systems. Only possible with multiple cores [parallel processing] and a limited speed of 'blotting': two or more processes monitor each other's validity and repair any damage, using undamaged code from a read-only medium [so that even a glitch that makes an invalid process 'repair' a valid one will do so with good data].
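A toy sketch of that mutual-repair idea (hypothetical, not a real defense): copies of some code are periodically checked against a digest of a known-good version held on a stand-in for the read-only medium, and any copy that deviates is restored from it.

```python
# Toy model of glitch self-repair: detect corrupted copies by digest and
# restore them from a read-only reference. All names/data are made up.
import hashlib

GOOD_CODE = b"trusted routine bytes"
REFERENCE = GOOD_CODE  # stands in for the read-only medium
GOOD_DIGEST = hashlib.sha256(GOOD_CODE).hexdigest()

def check_and_repair(copies):
    """Restore any copy whose digest deviates from the reference.
    Returns how many copies had to be repaired."""
    repaired = 0
    for i, code in enumerate(copies):
        if hashlib.sha256(code).hexdigest() != GOOD_DIGEST:
            copies[i] = REFERENCE  # repair with good data
            repaired += 1
    return repaired

copies = [GOOD_CODE, b"corrupted by a glitch"]
assert check_and_repair(copies) == 1
assert copies == [GOOD_CODE, GOOD_CODE]
```

The point of the read-only reference is exactly the bracketed caveat above: even if a glitched process initiates a "repair", the bytes it writes come from the good copy.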

Re:Dangerous (1)

sexconker (1179573) | more than 4 years ago | (#32238300)

Like, the algorithm is based on strong randomness and none is assured

There is no random.

-The Universe

Re:Dangerous (1)

SharpFang (651121) | more than 4 years ago | (#32239504)

Chaos Theory Claims Otherwise.

[randomness is not a bit field but a floating point value. No "Random/not Random" just "More random/Less random"]

Re:Dangerous (1)

sexconker (1179573) | more than 4 years ago | (#32240618)

The Universe is quantum.
All things are deterministic.
All things appearing to be "random" are simply not yet fully-understood.

Re:Dangerous (1)

DMUTPeregrine (612791) | more than 4 years ago | (#32249548)

Prove it.

Re:Dangerous (1)

SharpFang (651121) | more than 4 years ago | (#32250044)

Quanta themselves are random.

Take an atom of uranium. You know the half-life of the element. You know the exact probability the atom will break up in the next second. You have NO way of determining when it breaks up. It can be in a second or in a thousand years.

Quantize this.

Re:Dangerous (1)

sexconker (1179573) | more than 4 years ago | (#32255426)

All things appearing to be "random" are simply not yet fully-understood.

Re:Dangerous (0)

Anonymous Coward | more than 4 years ago | (#32250268)

you just did.

Re:Dangerous (0)

Anonymous Coward | more than 4 years ago | (#32250356)

but there might be chaos somewhere, just not in our kosmos (order) ;>

Re:Dangerous (1)

K. S. Kyosuke (729550) | more than 4 years ago | (#32235736)

I'm still waiting for guaranteed bug-free hardware. I'm afraid there isn't any on the wider market. Long live simple RISC!

Re:Dangerous (1)

Nadaka (224565) | more than 4 years ago | (#32235936)

I've seen a perfectly bug free computational device. It is called an abacus.

Re:Dangerous (3, Insightful)

ray-auch (454705) | more than 4 years ago | (#32236092)

And I've seen woodworm...

Re:Dangerous (1)

Low Ranked Craig (1327799) | more than 4 years ago | (#32237334)

My abacus is made of stainless steal...

Re:Dangerous (0)

Anonymous Coward | more than 4 years ago | (#32237916)

There's a bug in your spelling software.

Re:Dangerous (0)

Anonymous Coward | more than 4 years ago | (#32245282)

stainless steal...

Quick, don't let the RIAA or MPAA find out you can turn piracy into a tangible and tarnish-free substance!

Re:Dangerous (1)

A nonymous Coward (7548) | more than 4 years ago | (#32238206)

Until someone knocks your elbow or kicks the book out from under the corner table leg...

Re:Dangerous (1)

Demonantis (1340557) | more than 4 years ago | (#32235784)

There is a very clear difference between a technique and implementation. Fortunately for researchers they are only interested in the technique. Most encryption techniques are near flawless, but are ruined by poor or limited implementation by the user. Not to mention there are usually assumptions that are impractical or inconsistent in real world conditions.

Re:Dangerous (1)

mwvdlee (775178) | more than 4 years ago | (#32235818)

Does the license contain the usual "not liable for any damages due to this software not working as promised" clause, or do they REALLY guarantee it?

Re:Dangerous (4, Insightful)

ircmaxell (1117387) | more than 4 years ago | (#32235960)

Reminds me of the story of the Tortoise and the Crab from Gödel, Escher, Bach: An Eternal Golden Braid by Douglas R. Hofstadter. The Crab kept buying a "Perfect record player". One that could reproduce any sound possible. The Tortoise kept bringing over records that would induce harmonics and destroy the player. The conclusion drawn by Hofstadter was that if it's perfect, by the very nature of its perfection it can be destroyed by a record. In fact, all record players that reproduce a sound predictably can be destroyed by a record entitled "I Cannot Be Played on Record Player x". So that means that anything useful as a record player is vulnerable.

I think you can draw the same analogy here. There's always a way to break any system, no matter how "secure" you make it. The key is does the record player actually play records (is the computer useful in computing)? You could make a perfectly secure computer, so long as you never turn it on. But by the very nature that it's running, it's vulnerable to SOMETHING. It's a byproduct of working with a complex system... An application of Gödel's incompleteness theorem proves that in any sufficiently powerful formal system, there's always a question that can break that system (or at least break it with respect to that system). So basically the only secure computer is one that's incapable of actual computation. Once it becomes useful, there will always be a way to break it...

Re:Dangerous (2, Insightful)

sexconker (1179573) | more than 4 years ago | (#32238734)

An application of Gödel's incompleteness theorem proves that in any sufficiently powerful formal system, there's always a question that can break that system (or at least break it with respect to that system). So basically the only secure computer is one that's incapable of actual computation. Once it becomes useful, there will always be a way to break it...

Bullshit.

Saying a perfect computer can't be secure because one of the things it can compute is how to break its own security is absurd. You can simply define the computer as having limitations as to what it can do. To imply that such a computer is useless is to imply that all computers we have today are useless. All existing computers have physical and logical limitations.

Saying that this is then not a "perfect" computer is also bullshit. You can always wrap your output. You can always spit out the doomsday code instead of executing it. You can always escape your special characters.

The computer can still solve any problem you give it. It just won't execute its own automatic suicide code. You can make one that does execute said code, but requires the user to confirm. You can make one that does execute said code automatically. It all depends on how you want it to behave.

Defining a system that behaves in a certain way, then trying to get it to break that behavior is simply retarded. It's the nerd version of "Can God make a boulder so big he himself couldn't lift it?".

There are zero real-world implications of this "thinking" exercise, regardless or which end you look at it from, any conclusions you draw, etc.

Re:Dangerous (1)

ircmaxell (1117387) | more than 4 years ago | (#32239288)

The computer can still solve any problem you give it. It just won't execute its own automatic suicide code.

I'd suggest reading the book. He tackles this problem quite easily. There are an infinite number of possible "suicide codes". And due to the incompleteness theorem (among others), the computer cannot possibly know OR FIGURE OUT if a particular code is bad. Besides, it's impossible for a computer to know 100% of the outcomes without actually executing the code (see: Halting Problem [wikipedia.org]). So no, it cannot just "not execute" any potentially harmful code without also refusing code you want it to run (because it cannot tell whether a piece of code is harmful until it executes it). So either it's useless, or it's vulnerable. It's one or the other.

There are zero real-world implications of this "thinking" exercise, regardless or which end you look at it from, any conclusions you draw, etc.

Sure there are. One of which is that it's literally impossible to build a 100% secure system. There will always be a method of attack that the computer cannot detect simply based on the fact that it's looking for malicious code (What if the authorized user is malicious. How is the computer supposed to distinguish that?). Another implication is that there's always a trade-off between security and usability. Either you make the computer so weak that it cannot possibly run something malicious (and thereby making it all but useless), or you encumber the UI to the point that it requires the user to confirm everything (it's typically a combination of them).

Re:Dangerous (1)

sexconker (1179573) | more than 4 years ago | (#32240588)

There are an infinite number of strings containing a specific pattern. A computer can't know all strings that contain that pattern, but it can analyze any string to see whether it contains that pattern.

And yes, a computer CAN evaluate code without executing it. It could just execute it in a VM, simulating itself. Derp!

It is not impossible to build a secure system. You define secure behavior, and you build a system that implements it. Many digital and real-world systems are secure.

They are limited in what they do because of your definition of secure, but those limitations are desired. Saying the systems are then useless is simply retarded.

There will always be a method of attack that the computer cannot detect simply based on the fact that it's looking for malicious code (What if the authorized user is malicious. How is the computer supposed to distinguish that?).

So. Fucking. Retarded. You're asking the computer to be omniscient. A computer is a machine. You build authorization and security into it because you don't trust the user, not because you don't trust the machine. It will carry out its security analysis and either do something or not do something based on the result of that analysis. This behavior is defined by the user, and is by definition desired. A user puts the security checks in place to protect himself from himself. The user is the grand authority on whether or not the system should do something.

Either you make the computer so weak that it cannot possibly run something malicious (and thereby making it all but useless), or you encumber the UI to the point that it requires the user to confirm everything (it's typically a combination of them).

Way to present a false choice.
Man, you're retarded, and the people who wrote that drivel that you've bought into are equally retarded.

Re:Dangerous (1)

ganhawk (703420) | more than 4 years ago | (#32243418)

And yes, a computer CAN evaluate code without executing it. It could just execute it in a VM, simulating itself. Derp!

A computer cannot fully simulate itself. A computer with b bits of memory can go through 2^b states; any machine it simulates must have fewer than 2^b states. If a computer could simulate itself perfectly, then the halting problem would be solved.

Re:Dangerous (1)

ArsonSmith (13997) | more than 4 years ago | (#32245164)

It's solved then.

It goes like this, you have a physical server that is dual processor system with 2Gs of ram. You create a VM with a single proc and 1G of ram. When code needs to be tested it creates a cloned VM of itself using the additional proc and ram, runs the code, tests the output then destroys the VM.

Re:Dangerous (1)

DMUTPeregrine (612791) | more than 4 years ago | (#32249624)

So Kurt Gödel, Douglas Hofstadter, and Alan Turing are retarded.
True, the word "useless" is not entirely correct, "incapable acting as a Turing machine" or "incapable of performing an arbitrary sequence of calculations with the provided operators" are more correct. You can't use a SQL-injection exploit on an abacus. It just won't work. You also can't serve a web-page with an abacus. While some systems can be designed such that they don't need any "advanced" functionality the level at which one encounters "advanced" is not much higher than an abacus. A user could check every instruction, but why not just do everything with pencil and paper?
There's also a difference between "can't tell if a particular piece of code is harmful without executing it" and "can't tell if an arbitrary input of well-formed code is harmful without executing it." Anti-virus software works because of this: It contains a (finite) list of (particular) patterns that are known to be bad, selected from the (infinite, arbitrary) set of bad patterns. As long as the finite list is equal to or larger than the finite list of actual viruses in the wild the computer can't get infected. Of course it's always smaller than the list of viruses, so computers get infected.
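The finite-list approach described above fits in a few lines (the signatures here are invented, standing in for an AV database):

```python
# Hypothetical finite list of known-bad byte patterns.
KNOWN_BAD_SIGNATURES = (b"EVIL_PAYLOAD", b"\xde\xad\xbe\xef")

def looks_infected(data: bytes) -> bool:
    """Flag data containing any known-bad pattern. Anything malicious
    that matches no listed signature sails through -- which is the point
    above: the finite list never covers the full set of bad patterns."""
    return any(sig in data for sig in KNOWN_BAD_SIGNATURES)

assert looks_infected(b"header EVIL_PAYLOAD trailer")
assert not looks_infected(b"novel malware with no known signature")
```

The second assertion is the whole argument in miniature: the scanner is correct about everything on its list and silent about everything off it.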

Re:Dangerous (1)

sexconker (1179573) | more than 4 years ago | (#32254216)

So a perfect Turing machine is insecure. And a secure Turing machine is incapable of acting as a Turing machine.

There's also a difference between "can't tell if a particular piece of code is harmful without executing it" and "can't tell if an arbitrary input of well-formed code is harmful without executing it."

Actually, no there isn't. Code is input.
And your example involving anti-virus software makes no sense, has no relevance, and is wrong.

Anti-virus software works because of this: It contains a (finite) list of (particular) patterns that are known to be bad, selected from the (infinite, arbitrary) set of bad patterns. As long as the finite list is equal to or larger than the finite list of actual viruses in the wild the computer can't get infected.

Anti-virus software contains a finite list of exact patterns and an infinitely-applicable finite list of heuristics.
Even if the finite list of rules was larger than the list of actual viruses, you could still get infected if the list of rules did not form a superset of the list of viruses.

Dance around the bullshit of people you worship all you want. Me? I can't stand the smell.

Re:Dangerous (1)

TheLink (130905) | more than 4 years ago | (#32241730)

You don't bother figuring out whether something is malicious or not, that's harder than solving the halting problem (since you do not know the full inputs and full program description).

What you do: you workaround the halting problem by forcing the program to stop anyway.

Example:
1) having the operating system force the program to halt if it's still running after X seconds.
2) having the program state up front the maximum time "T" it will want to run for, and have the operating system force the program to halt it if it's still running after "T" seconds.
In the case of 2), you can also have someone validate the requirement "T" to see if it makes sense. Infinity or 1 hour for a simple problem is too long.
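Workaround 2) above can be sketched directly (names are made up; this is a supervisor-side sketch, not a real OS mechanism): the program declares a time budget up front, and the supervisor forcibly halts it if it is still running when the budget expires. No attempt is made to decide in advance whether it halts; it is simply bounded.

```python
# Enforce a declared time budget on an untrusted program by running it
# in a subprocess and killing it when the budget expires.
import subprocess

def run_with_budget(cmd, budget_seconds):
    """Run cmd, forcibly halting it if it exceeds its declared budget.
    Returns the exit code, or None if it had to be forcibly halted."""
    try:
        return subprocess.run(cmd, timeout=budget_seconds).returncode
    except subprocess.TimeoutExpired:
        return None  # forced halt: the program overran its budget
```

For example, `run_with_budget([sys.executable, "-c", "pass"], 5)` finishes normally, while a program that loops forever comes back as a forced halt instead of hanging the supervisor.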

So similarly, you can have programs state up front what sandbox they want to run in.

Then if the sandbox petition is signed by a trusted party the program runs without prompting the user and the O/S enforces the sandbox.

If the sandbox petition is not signed by a trusted party and the user does not have sufficient rights, the program does not run.

If the sandbox petition is not signed by a trusted party and the user has sufficient rights, the user gets a prompt from the O/S. Then if the user says it's OK, the program runs and the O/S enforces the sandbox ( which may be a different one which the user chooses instead).

And that is why I'm not impressed by UAC, or whatever the current operating systems offer to users in terms of security. Despite all the resources they claim to put into it, all of them are so primitive and don't help users. They require users to solve something harder than the halting problem. When you get a prompt in Windows 7 asking for "approval", Windows 7 doesn't even tell you what the program is trying to do. That's ridiculous: if an employee asked a boss to allow him to do something with the company's resources, even an ignorant boss would want to know roughly what he wants to do. Just because the employee is wearing a Microsoft "Trust Me" t-shirt doesn't count.

As for Linux: SE Linux and AppArmor as they are don't help "normal" users much. OSX? The security experts will tell you OSX is not as secure from a technical POV.

Re:Dangerous (2, Interesting)

franl (50139) | more than 4 years ago | (#32240396)

The world's shortest explanation of Gödel's Incompleteness Theorem, by Raymond Smullyan:

We have some sort of machine that prints out statements in some sort of language. It need not be a statement-printing machine exactly; it could be some sort of technique for taking statements and deciding if they are true. But let's think of it as a machine that prints out statements. In particular, some of the statements that the machine might (or might not) print look like these:

P*x (which means that the machine will print x)
NP*x (which means that the machine will never print x)
PR*x (which means that the machine will print xx)
NPR*x (which means that the machine will never print xx)

For example, NPR*FOO means that the machine will never print FOOFOO. NP*FOOFOO means the same thing. So far, so good.

Now, let's consider the statement NPR*NPR*. This statement asserts that the machine will never print NPR*NPR*.

Either the machine prints NPR*NPR*, or it never prints NPR*NPR*. If the machine prints NPR*NPR*, it has printed a false statement. But if the machine never prints NPR*NPR*, then NPR*NPR* is a true statement that the machine never prints.

So either the machine sometimes prints false statements, or there are true statements that it never prints. So any machine that prints only true statements must fail to print some true statements. Or conversely, any machine that prints every possible true statement must print some false statements too.
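The notation is mechanical enough to execute. A small decoder (a sketch using only the four forms defined above; the machine itself stays abstract) makes the self-reference visible:

```python
# Decode one of Smullyan's four statement forms into (claim, target):
# what the statement claims, and which string the claim is about.
def decode(stmt: str):
    for prefix, claim, doubles in (
        ("NPR*", "never-prints", True),   # never prints xx
        ("PR*", "prints", True),          # prints xx
        ("NP*", "never-prints", False),   # never prints x
        ("P*", "prints", False),          # prints x
    ):
        if stmt.startswith(prefix):
            x = stmt[len(prefix):]
            return claim, (x + x) if doubles else x
    raise ValueError("not a statement about printing")

# NPR*FOO and NP*FOOFOO say the same thing, as in the example above...
assert decode("NPR*FOO") == decode("NP*FOOFOO") == ("never-prints", "FOOFOO")
# ...and NPR*NPR* turns out to be a claim about itself:
assert decode("NPR*NPR*") == ("never-prints", "NPR*NPR*")
```

The last assertion is the whole trick: whatever the machine does with that one string, one of the two horns of the argument above follows.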

Re:Dangerous (1)

Black Gold Alchemist (1747136) | more than 4 years ago | (#32241612)

Basically, it is a consequence of "this statement is false"?

Re:Dangerous (1)

pclminion (145572) | more than 4 years ago | (#32243960)

What would be funny is if we eventually discover that yes, technically there are statements that are true but cannot be printed, but in reality, there is only one such statement, "NPR*NPR*".

This is why the incompleteness theorems don't give me a feeling of helplessness, as they seem to do to other people. Yes, you found an example which shows theoretical incompleteness. But can we construct OTHER statements that are also true but unprintable? If not, then there's no reason to point to the incompleteness theorems as an excuse to throw our hands up.

Re:Dangerous (1)

ircmaxell (1117387) | more than 4 years ago | (#32251066)

What would be funny is if we eventually discover that yes, technically there are statements that are true but cannot be printed, but in reality, there is only one such statement, "NPR*NPR*"

Actually, it's proven that there are an infinite number of them. Here's how it works. Let's call the language introduced in the GP post "SPL" (Statement Printing Language). So, we know that NPR*NPR* is the problem statement in that language. So let's add an axiom to SPL and call it SPL2. Here's the axiom:

  • NPR*NPR* is false

So now SPL2 is complete and consistent, right? Wrong. There'll always be another phrase you could write. What about "NPR*NPR*NPR*"? On the surface, it doesn't look that bad (since it isn't self-referential). But let's expand it out a few generations:

  1. NPR*NPR*NPR*
  2. NPR*NPR*NPR*NPR*
  3. NPR*NPR*NPR*NPR*NPR*NPR*
  4. NPR*NPR*NPR*NPR*NPR*NPR*NPR*NPR*NPR*NPR*

As we can see, it's an infinite recursion. Now, from WITHIN the system, it's impossible to tell what happens as you approach infinity. So it's impossible to tell WITHIN THE SYSTEM* if the statement is true or false. Therefore we don't know what to do with it. No matter what we do (print it or not), we will either meet Gödel's incompleteness criterion or his inconsistency criterion. So we could add a new axiom. But this process would repeat forever.

* Note that the big thing that Gödel said implied that systems that can prove their completeness (or consistency) from within themselves are by definition incomplete (or inconsistent). It's possible to prove one way or another from outside that system (inside of a more powerful formal system)...
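The generation sequence listed above can be checked mechanically. A toy sketch of the doubling rule (NPR*x is a claim about xx, so each expansion strips one NPR* prefix and doubles the remainder):

```python
def expand(stmt: str) -> str:
    """NPR*x asserts the machine never prints xx, so the statement is
    'about' xx; when x itself begins with NPR*, expansion can recurse."""
    prefix = "NPR*"
    assert stmt.startswith(prefix)
    x = stmt[len(prefix):]
    return x + x

# Reproduce the four generations listed in the comment above.
s = "NPR*" * 3
lengths = [s.count("NPR*")]
for _ in range(3):
    s = expand(s)
    lengths.append(s.count("NPR*"))

assert lengths == [3, 4, 6, 10]
```

Each step roughly doubles the statement, so the chain never closes back on a statement already decided, matching the "repeat forever" conclusion.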

Re:Dangerous (1)

Hurricane78 (562437) | more than 4 years ago | (#32241896)

That’s a false extension of the original story. You can’t extend it like that. You misunderstood the meaning of the original story.

Re:Dangerous (1)

Slashcrap (869349) | more than 4 years ago | (#32250166)

I think you can draw the same analogy here.

Yeah, but don't - it's fucking terrible.

Re:Dangerous (2, Informative)

smallfries (601545) | more than 4 years ago | (#32235982)

It's an interesting technique, but it is not a guarantee.

The summary doesn't mention the number of assumptions that the researchers make:
+ A working TPM module
+ An adversary limited to memory corruption
+ No unknown faults in the underlying system that can be exploited.

Also, the second technique (restricted pointer indexing) relies on performing a static analysis of the target hypervisor and rewriting it into a suitable form. This is not guaranteed to terminate, let alone guaranteed to work, although it does on the small number of test cases that they considered.

Seems like quite an interesting paper, with the standard amount of overselling for American academic work (where every paper solves the world); a shame that the reviewers didn't tone down the claims a touch.

Re:Dangerous (1)

Decker-Mage (782424) | more than 4 years ago | (#32238566)

Agreed on all three points, especially point number two, which is not an original design at all. It's a state machine with yet another name attached to it (again, sigh), something I've been using as a design technique for over a quarter century now (just over half my life!). That was a major point of irritation here: acting as if it were something new. The minor nit, made almost major by repetition, was the use of "indexes" where "indices" is the proper term; however, I've become resigned to that of late in academia. I'm not perfect in the use of terminology, but still....

"Quis custodiet ipsos custodes" definitely seems applicable here. I don't think they have this one solved yet, especially without a provably correct TPM which remains level one of the guardians. Too bad. I started using VM's as a defensive tactic back in 2000 via the snapshot capability and golden images kept in encrypted images (the very early days of VMWare, which I must in the interests of plausible deniability reveal myself as an oft beta-tester ;-). Gee was I presciently paranoid or what?

BTW, I love the sig. Don Knuth is something of a minor deity here (usually as revealed through the prophet Robert Sedgewick {grin}) as I've used his algorithms and data structures, and especially proofs of correctness and suitability to problem domains, in my work over the decades.

Re:Dangerous (1)

smallfries (601545) | more than 4 years ago | (#32239064)

I've started using a similar technique myself. Although a Windows partition on Boot Camp isn't really a virtual machine, the assumption is that the Mac partition (which the Windows partition doesn't mount) is a small enough target that malware won't hit it.

The checkpointing/rollback is handled by Winclone, which just nukes the relevant partition and restores whichever checkpoint was selected. It seems to work quite well, and I haven't had any problems yet when installing questionable software and then needing to go back to a clean machine.

BTW, I love the sig. Don Knuth is something of a minor deity here (usually as revealed through the prophet Robert Sedgewick {grin}) as I've used his algorithms and data structures, and especially proofs of correctness and suitability to problem domains, in my work over the decades.

Thanks but the credit is not mine sadly. Somebody used the quote in an argument over PHP a couple of months ago so I stole it as a quote :)

Re:Dangerous (1)

Decker-Mage (782424) | more than 4 years ago | (#32241158)

The original purpose here was to give 'hacking' (actually 'cracking', to use the correct term) the (AD, DNS, whatever) server limited viability. Once a cracked server was identified, simply restore from an earlier snapshot that predates the crack, patch or otherwise mitigate the vulnerability, and 'drive on'. This would have been especially useful during the days of the DNS exploit of the week not so long ago, and it still seems attractive with the 'China syndrome' we're seeing now (which the press still doesn't report on correctly {sigh}). Now the far better approach is to use virtual appliances on a low-attack-surface host OS or bare-metal hypervisor. Browser appliances are great, and I used them for years to cruise the underground and keep an eye on what was out there.

I love to experiment beyond the 'bleeding edge' here. It has given me much pleasure over the last couple of decades. It also has the nice effect of keeping me buried in free enterprise grade toys. I do like free ;-).

About the only original quote I have ever come up with (at least I think it is original) is: "Never give entropy a chance!"

Re:Dangerous (1)

Lumpy (12016) | more than 4 years ago | (#32236132)

I guarantee I can make an OS that is not infectable by malware.

Have a PROM made of the OS and run it from there. The only way for it to get infected is to copy it to RAM, modify it, and then fire a JMP to the RAM location to run the newly infected code.

Make the PC incapable of running software from RAM and you've just made it impossible to infect.

Usability may suffer a tiny bit, but I think customers will be happy with powering off and swapping cartridges to do different things. A cartridge rack could allow you to run more than one program!

hypervisor != OS (1)

slashnot007 (576103) | more than 4 years ago | (#32236422)

Okay, so one can protect the hypervisor's execution. But how do we protect the OS, and the storage that holds the hypervisor's software?

There has to be a way to update the hypervisor, and presumably that update comes over the web. You can guarantee that the code will execute in a protected space, but can you guarantee that you are executing the right code, or that the code itself doesn't have a security hole?

Then there is the OS. Presumably it can still be infected. Also, presumably, some attacks will run in a layer between the hypervisor and the OS; that is, they will create a virtual hypervisor of a malicious type.

Still, it's a great advance. I expect the military and the banking industry will be the early adopters.

Re:Dangerous (1)

Technician (215283) | more than 4 years ago | (#32241008)

I have found mechanical write-protect switches that prevent any writes to be a good security measure. Unfortunately, in the world of flashable memory, many boot items that used to be in ROM are left open to attack. Any hypervisor should have a hardware jumper or switch that write-protects it against all writes.

pdf? (3, Insightful)

Cmdr-Absurd (780125) | more than 4 years ago | (#32235644)

A link to a PDF version of the paper? Given recent security problems with that format, does anyone else find that funny?

Re:pdf? (2, Insightful)

fuzzyfuzzyfungus (1223518) | more than 4 years ago | (#32235670)

Seems perfectly reasonable to me. Who would care more about provable hypervisor security than somebody with a badly infected guest?

Re:pdf? (1)

batistuta (1794636) | more than 4 years ago | (#32235862)

Most PDF issues have to do with the reader, not with the format itself. I'm not saying PDF is perfect, but it would be unfair to put Sumatra, Foxit, and Acrobat Reader in the same "PDF" boat.

Re:pdf? (1)

Yvan256 (722131) | more than 4 years ago | (#32235904)

I don't find it funny, because not all PDF readers have the same security flaws as Adobe Reader. Mac OS X comes with a built-in PDF viewer/printer, so why would I want to install anything from Adobe on my computer?

Re:pdf? (1)

tepples (727027) | more than 4 years ago | (#32236430)

I don't find it funny because not all PDF readers have the same security flaws as the Adobe Reader.

That's not always true. Sometimes, Adobe and Foxit both correctly implement a PDF feature that was poorly designed, and they end up having the same vulnerability [slashdot.org] because of it.

Mac OS X comes with a built-in PDF viewer/printer, so why would I want to install anything from Adobe on my computer?

Because GIMP isn't enough, nor are the meager open-source SWF builders. And Wikipedia says Preview does not allow filling in PDF forms.

Re:pdf? (0)

Anonymous Coward | more than 4 years ago | (#32236540)

[quote]
And Wikipedia says Preview does not allow filling in PDF forms.
[/quote]

Wikipedia is wrong.

Help me make Wikipedia not wrong (1)

tepples (727027) | more than 4 years ago | (#32236768)

An Anonymous Coward who is a regular on at least one vBulletin, IPB, or phpBB based forum wrote:

Wikipedia is wrong.

Citation needed. I'd like to read a review of a newer version of Mac OS X that mentions Preview's newly added support for PDF forms so that I can go make Wikipedia no longer wrong.

Re:pdf? (0)

Anonymous Coward | more than 4 years ago | (#32238692)

There are a lot of reasons to have Acrobat on a computer, most of them being irrelevant to most people but crucial to businesses:

1: Ability to obtain Web pages and store them as a PDF archive.

2: PDF/A, PDF/X, and other standards. Not just making files in the subsets of the PDF format, but be able to check that they are compliant to the standards.

3: Ability to sign documents with a certification to protect against tampering.

4: Ability to include attachments. I have encountered firewalls that block essentially all documents (including ZIP and RAR archives) that are not Word, Excel, or PDF. I wanted to send someone a confidential AutoCAD document, which their mail filter blocked. They were not authorized to use PGP or GnuPG, so that was out of the question. So I attached it to a password-protected PDF, and that dealt with the problem.

5: Ability to use document repositories. SharePoint or LiveCycle helps a lot when documents start going from thousands to hundreds of thousands.

6: Running "official" software. This helps when showing a nontechnical auditor that your company is doing "due diligence", and keeps the auditor from thinking "hmm, I smell something fishy here" and wanting to dig deeper and demand more info about a document custody chain. It is a variant of "you can't be fired for buying IBM".

7: An in-house IRM system. This is built into Windows, and is an option with Acrobat. In some cases, it means the difference between losing confidential documents and knowing they are still encrypted. This is arguably one of the few "good" uses for DRM, and even then it can be argued against because it keeps people from whistle-blowing.

8: Ease of use. Adobe's products are usable. It is very easy to save Web page confirmations (monthly bills and such) as PDFs, toss them into a folder, and forget about them unless they are needed. I can put a stack of papers on my scanner, push "scan to PDF", walk off, then check in Acrobat that all pages have been OCRed and the scanner didn't skip or glitch in some fashion. And if password protection or size reduction is needed, that is only a few menu options and a "Save As..." away.

9: Ability to easily archive stuff. Click a button in Word, fire up Acrobat to grab a Web subtree, print to an Acrobat printer, or just select a bunch of documents and click "Convert to PDF...". Of course, this is easily done by most other programs, but Acrobat can do all of this.

10: Print shops tend to be standardized on Adobe products.

For most people, printing with CutePDF's PDF generator and using Foxit's reader is good enough. Macs have had innate PostScript handling for a long time, since the days of the NeXT and DPS.

Re:pdf? (1)

noidentity (188756) | more than 4 years ago | (#32236138)

Here's a safe version of the paper: paper.none (0 bytes)

Acrobat (1)

blake1 (1148613) | more than 4 years ago | (#32235686)

...research that will be presented (PDF)...

I wish I had HyperSafe installed so I could open Acrobat in a virtual machine instead.

Re:Acrobat (0)

Anonymous Coward | more than 4 years ago | (#32236744)

Use Foxit Reader.

If it really guarantees no infection with malware (1)

Chrisq (894406) | more than 4 years ago | (#32235692)

If it really guarantees no infection with malware, then it cannot be updateable or extensible. All it is really suggesting is that the hypervisor cannot be altered from within a client operating system. I don't think this gives you anything you don't already get with user-mode virtualisation like VirtualBox, where the host system write-protects pages.

They should have my cousin test this (4, Funny)

NotSoHeavyD3 (1400425) | more than 4 years ago | (#32235694)

Because if anybody could get a machine infected it'd be him.

Re:They should have my cousin test this (1)

roman_mir (125474) | more than 4 years ago | (#32237520)

So where in the machine does he put the penis?

Re:They should have my cousin test this (1)

thegarbz (1787294) | more than 4 years ago | (#32248186)

He doesn't, he just buys crabs from the internet.

Paging Mr. Tarkin (1)

Gothmolly (148874) | more than 4 years ago | (#32235700)

The more you tighten your grip, the more will slip through your fingers.

Run-time (0)

Anonymous Coward | more than 4 years ago | (#32235740)

From what I understand, this only applies at runtime; what about attacks that occur during boot-up?

how about the other way around? (1)

fyonn (115426) | more than 4 years ago | (#32235876)

What about securing a VM from the host, so you can run secure corporate VM images on an untrusted host? Now that would interest me...

Dave

Re:how about the other way around? (1)

Spad (470073) | more than 4 years ago | (#32235906)

While you're at it, I'd like a pony...

Re:how about the other way around? (1)

Abcd1234 (188840) | more than 4 years ago | (#32236290)

Wait, wait... so you want the hypervisor, the thing that grants access to the various hardware resources and that by its very nature has direct access to virtualized memory, storage, and so forth... to be untrusted?

Am I the only one that sees a contradiction, here?

Re:how about the other way around? (1)

fyonn (115426) | more than 4 years ago | (#32236650)

I'd like to be able to run a secure VM with some assurance that it can't be interfered with by the host it's running on. This task may well be impossible, but there's certainly a call for it. The classic example is running a corporate VM for access to work on a staff member's own computer: the company would not trust that computer, but would want to be able to trust the image. It would want to know that any malware on the host could not affect the VM.

I'm asking whether any of the modern technology we're supposed to be seeing on modern computers, like trusted computing and various hardware-level hypervisors, can facilitate this. I fully accept that the answer may well be no. :)

dave

Re:how about the other way around? (1)

IBBoard (1128019) | more than 4 years ago | (#32236852)

You mean like running a custom-built live distro with the apps you need built in? Not exactly what you said, but it has the same effect. That's what we've got access to at work for remote access over VPN on other hardware.

Re:how about the other way around? (1)

fyonn (115426) | more than 4 years ago | (#32237282)

I was thinking of a Windows build with appropriate apps and VPN access...

*if* it can be secured...

dave

Re:how about the other way around? (0)

Anonymous Coward | more than 4 years ago | (#32240666)

The answer is no. There is nothing in the universe that can protect the VM from the host.

Trusted computing depends on trusting the hardware. It can always be broken by tapping the system bus.

Re:how about the other way around? (1)

mlts (1038732) | more than 4 years ago | (#32238976)

This is the same battle that DRM fights: whoever controls the host can dump memory images of the VMs at will.

Not that it can't be done, with the VMs protected from the host. Look how long the PS3 kept its security without a solid breach, and when it was breached, it was fixed by a ROM update in record time.

I thought we already had a solution? (1)

Yvan256 (722131) | more than 4 years ago | (#32235920)

What about the evil bit [faqs.org] ?

Re:I thought we already had a solution? (1)

leuk_he (194174) | more than 4 years ago | (#32236114)

Sorry, the evil bit is a bad idea, because to set it you have to write it, and since all executable memory is write-protected there is no way to tell the hypervisor about your bad intentions.

The only workaround now is that you cannot do evil updates. But evil updates need a reboot... unless... with HA options you can move a running VM to another server, update and infect the host with an evil update, reboot, and move the VM back, without the VM ever knowing the host was changed.

Don't you love being evil?

"chmod +evil hypervisor. "

Want guaranteed security? (0, Troll)

Hasai (131313) | more than 4 years ago | (#32236002)

Fill your server full of concrete and chuck it into an active volcano.

Otherwise, there's just varying degrees of risk.

Re:Want guaranteed security? (2, Insightful)

LordBmore (1794002) | more than 4 years ago | (#32236236)

Okay, I filled all of my servers with concrete and tossed them into the volcano. What next? I can't wait to tell my boss how secure we are.

Re:Want guaranteed security? (1)

Hasai (131313) | more than 4 years ago | (#32239576)

I never said you'd be able to access your data, just that it would be secure.

Unethical and unprofessional (1)

gweihir (88907) | more than 4 years ago | (#32236280)

While this sounds like a step in the right direction, any claim of "unhackability" is frankly a lie, and both unethical and unprofessional in the extreme. Most currently used attacks were never expected, and quite a few were regarded as impossible before somebody went ahead and demonstrated them.

On a related note, those technologies advertised as "unhackable", "absolutely secure", "provably secure", etc. consistently fail to deliver. In fact, these claims are usually an indicator of low quality, because they show the people proposing them do not really understand IT security.

Re:Unethical and unprofessional (1)

CAIMLAS (41445) | more than 4 years ago | (#32236684)

On a related note, those technologies advertised as "unhackable", "absolutely secure", "provably secure", etc. consistently fail to deliver.

You must be familiar with SonicWall, then.

They just don't get it ... (1)

BitZtream (692029) | more than 4 years ago | (#32236700)

Why has no one told these guys that adding more features never adds more security?

Lets go over the x86 history.

Start multitasking: we need some sort of memory protection in HARDWARE, because software can't do it.

Realize that the software implementations working with the hardware are buggy ... Damn.

Add other protections such as NX ... realize the software implementations are buggy ... Damn.

Virtualize the OS into its own little space under a hypervisor ... realize it's slow and the implementations are buggy ... Damn.

Add a hypervisor with hardware virtualization support directly on the chip ... realize it's no longer as slow, but the software implementations are still buggy and still yield exploits ... Damn.

Now we're at ... add NEW flags for the hypervisor to make it 'secure'.

Do I need to explain what happens next?

The idea that a hypervisor is more secure at all is one of the most ignorant notions a hardware or software developer can hold. If the OS can't be made secure, neither can the hypervisor. You can argue that it's less code, so it has fewer bugs, but that number is still greater than 0.

I'm going to repeat: if you can't make the OS secure and capable of protecting itself, adding another layer on top isn't going to help you.

What's better, their 'proof' was written against 'synthetic hypervisor exploits' ... so they made some shit up and it didn't work? Guess what: my Arduinos can't run x86 code, but that doesn't make them secure; it just means I didn't use the right exploit.

When will people learn from history?

Arduino's can't run x86 code (1)

hAckz0r (989977) | more than 4 years ago | (#32240836)

Well, there you go: x86 legacy instruction sets, yet another reason to virtualize your Arduino! Layer enough software (e.g. http://www.multiplo.org/duinos/wiki/index.php?title=Main_Page [multiplo.org] ) on top of your project and eventually we will make it secure. Heck, just add a TPM shield, a few million in research grants, and even more libraries, and eventually it will be so much safer to use. </sarcasm>

If someone thinks that adding software is going to do much for security in the long run, they need go no further than contemplating what happens when the BIOS, chipset, NIC, GPU, or microcode gets re-flashed by an adversary running in ring 0. If the hypervisor is deemed to be ring -1, then the BIOS should be considered ring -2. Game over, and the machine hasn't even started yet. If you are not even in control of your own hardware (e.g. DMA, the bootstrap load vector), then what does your hypervisor really do for security? Yes, it's better than not having it, because it will prevent something like HyperDBG from inserting itself at run time. But he who loads first wins, and usually the firmware devices boot up faster than the main OS; timing counts, and they can just sit in memory waiting their turn. So should we really be using the word 'guarantee'? Likely not: as long as we have human ingenuity, we won't have absolutely guaranteed security.

how does this guarantee anything (0)

Anonymous Coward | more than 4 years ago | (#32237500)

So the first technique just requires getting your code set up so that it will be in the hypervisor upon reboot, then forcing a reboot. And the second one can probably be bypassed by a bug in the hypervisor code, or by your own code injected by bypassing the first technique.

Now, if updating the hypervisor required physical access to the machine, and you could somehow guarantee that there aren't any exploitable bugs in the hypervisor, that might go some way towards keeping external users from remotely compromising it. But this looks to be just some checks that let you say your hypervisor has some security.

What about glitching the CPU? (0)

Anonymous Coward | more than 4 years ago | (#32239000)

Provable security... forgive me for laughing out loud. Every time this is asserted, a few months go by and ... surprise ... an exploit. The one posture that is exceptionally unhealthy in the security industry is drum-beating to the sound of one's own arrogance.

Does this hypervisor protect the host from CPU glitching?

Performance will suck (1)

FranTaylor (164577) | more than 4 years ago | (#32241466)

If they REALLY built a true firewall around the hypervisor, then performance would be terrible.

If you want decent network or display performance in a VM, you have to use special drivers for the virtual devices that bypass the firewall.

We have already seen security flaws in these special drivers.
