
The Future of Trusted Linux Computing

Zonk posted about 7 years ago | from the lock-down-that-penguin dept.

Security 158

ttttt writes "MadPenguin.org tackles the idea of Trusted Computing in its latest column. According to author Matt Hartley, the idea of TC is quite reasonable; a locked-down environment offers several advantages to system administrators with possibly troublesome users. 'With the absence of proprietary code in the mix, users will find themselves more inclined to trust their own administrators to make the best choices ... And so long as any controlled environment is left with checks and balances [like] the option for withdrawal should a school or business wish to opt out, then more power to those who want a closed-off TC in an open source world.'" LWN.net has an older but slightly more balanced look at the TC approach.


158 comments


But Linux is already trusted. (0, Offtopic)

webmaster404 (1148909) | about 7 years ago | (#21040315)

But Linux and most Linux programs are already more "trusted" than Windows can ever be. Being open source, how can you not trust it? There are no surprises, and if you feel so inclined you can build everything from source to make sure there isn't any malformed code in the binaries. So how is this news?

O RLY? (-1, Flamebait)

Anonymous Coward | about 7 years ago | (#21040371)

http://en.wikipedia.org/wiki/Backdoor_(computing)#Reflections_on_Trusting_Trust [wikipedia.org]



Ken Thompson's Reflections on Trusting Trust[5] was the first major paper to describe black box backdoor issues, and points out that trust is relative. It described a very clever backdoor mechanism based upon the fact that people only review source (human-written) code, and not compiled machine code. A program called a compiler is used to create the second from the first, and the compiler is usually trusted to do an honest job.

Thompson's paper described a modified version of the Unix C compiler that would:

        * Put an invisible backdoor in the Unix login command when compiled, and as a twist
        * Also add this feature undetectably to future compiler versions upon their compilation as well.

Because the compiler itself was a compiled program, users would be extremely unlikely to notice the machine code instructions that performed these tasks. (Because of the second task, the compiler's source code would appear "clean".) What's worse, in Thompson's proof of concept implementation, the subverted compiler also subverted the analysis program (the disassembler), so that anyone who examined the binaries in the usual way would not actually see the real code that was running, but something else instead. This version was never released into the wild. It was released to a sibling Bell Labs organization as a test case; they never found the attack.[citation needed]

In theory, once a system has been compromised with a back door or Trojan horse, such as the Trusting Trust compiler, there is no way for the "rightful" user to regain control of the system. However, several practical weaknesses in the Trusting Trust scheme have been suggested. (For example, a sufficiently motivated [slur] could painstakingly review the machine code of the untrusted compiler before using it. As mentioned above, there are ways to counter this attack, such as subverting the disassembler; but there are ways to counter that defense, too, such as removing the hard disk and physically examining the program's binary disk image -- security is always a metaphorical arms race.)
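The propagation trick in the quoted passage can be sketched in miniature. The following toy "compiler" is a hypothetical Python sketch (the real attack hides the same logic inside the C compiler binary, in machine code); it passes source through unchanged, except in the two cases Thompson described:

```python
# Toy illustration of the quoted "Trusting Trust" attack (hypothetical sketch;
# the real attack lives invisibly in the compiler *binary* -- here the
# injection logic is written out in the open so the propagation is visible).

BACKDOOR = '\nif password == "magic": grant_access()  # injected backdoor'

def evil_compile(source: str) -> str:
    """Pretend to compile `source`: pass it through, tampering on the way."""
    output = source
    if "def check_login(" in source:
        # Case 1: compiling the login program -> plant the backdoor.
        output += BACKDOOR
    if "def compile(" in source:
        # Case 2: compiling a (perfectly clean) compiler -> graft this very
        # injection logic into the new compiler, so clean source still
        # produces a dirty binary.
        output += "\n# ...evil_compile's injection logic re-inserted here..."
    return output

clean_login = "def check_login(password): ..."
clean_compiler = "def compile(source): return source"

print(BACKDOOR in evil_compile(clean_login))              # True
print("injection logic" in evil_compile(clean_compiler))  # True
```

Case 2 is the twist: inspecting the compiler's source shows nothing, because the tampering is reintroduced at every recompilation.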



Re:O RLY? (2, Insightful)

webmaster404 (1148909) | about 7 years ago | (#21040413)

Which is why, if you're that paranoid, you look at the source yourself and compile from that source. It's not that hard, and there's no way you somehow got code you didn't want. If you overlooked something, that's your fault: you compiled it, you looked over the source. That's something you can't do in the Windows world, with stealth updates and the like.

Re:O RLY? (1)

McDutchie (151611) | about 7 years ago | (#21040509)

Read it again, you're not getting it. The issue is whether you can trust the compiler to produce machine code that corresponds to your source code.

Re:O RLY? (0, Redundant)

Chrisq (894406) | about 7 years ago | (#21040581)

If you are really paranoid compile the compiler with a different compiler. Or use a different compiler to compile two linux systems, and only allow logon to one from a remote shell from the second.

How is this redundant? (1)

Chrisq (894406) | about 7 years ago | (#21040925)

How is this redundant? It might be obvious to some people, but I can't see it said anywhere else.

Re:How is this redundant? (1)

jimstapleton (999106) | about 7 years ago | (#21042643)

obviously wrong maybe.

Sorry, but you can try to recognize patterns in anything including what patterns are found in compilers. It is not always easy, but hard isn't impossible. As soon as you can recognize those patterns, you can write the trojan described.

And (1)

themusicgod1 (241799) | about 7 years ago | (#21041207)

Trusted Computing solves this how?

Re:O RLY? (1)

jimstapleton (999106) | about 7 years ago | (#21040523)

You didn't even read the quote? The only way to bypass this is to hand-build the compiler in binary. You won't *EVER* see the attack because it's in the compiler's binary, and the compiler puts it into the binary of any compiler it compiles - even if it is not in the source of the compiler it compiles.

Good luck with that.

bypassing Thompson's trojan is simple (1)

hopeless case (49791) | about 7 years ago | (#21041289)

How does the bugged compiler binary recognize the fact that it is compiling the source to a compiler?

In Thompson's case, he had it scan the source for recognizable text.

Defeat the "am I compiling a compiler?" test of the compiler binary and you are done.

All you need is a source code obfuscator. Randomize variable/function/file names, and insert red-herring calling sequences and recompile the source to the compiler to obtain a non-bugged compiler binary.

Writing a source code obfuscator (capable of defeating the compiler trojan's test) is much easier than writing the source to a compiler, and a great deal easier than hand composing a compiler binary.
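A minimal sketch of such an obfuscator (hypothetical; a regex pass stands in for a real parser): rename identifiers so a bugged compiler's "am I compiling a compiler?" pattern match no longer fires.

```python
# Hypothetical sketch of the proposed counter-measure: scramble identifier
# names so the trojan's textual compiler-detection test misfires.
# A real tool would parse the language properly instead of using regexes.
import random
import re
import string

def obfuscate(source: str, names: list) -> str:
    """Replace each identifier in `names` with a random fresh name."""
    rng = random.Random(42)  # fixed seed so the sketch is reproducible
    for name in names:
        fresh = "v_" + "".join(rng.choices(string.ascii_lowercase, k=8))
        source = re.sub(rf"\b{re.escape(name)}\b", fresh, source)
    return source

compiler_src = "def compile(source): return emit(parse(source))"
scrambled = obfuscate(compiler_src, ["compile", "parse", "emit", "source"])
print(scrambled)  # no recognizable compiler identifiers remain
```

Compiling the scrambled source with the untrusted compiler would, if its detection test is this naive, yield a clean binary.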

Turing strikes again! (1, Insightful)

Anonymous Coward | about 7 years ago | (#21041577)

Indeed yes. The question "am I compiling a compiler?" is as undecidable as the question "am I compiling a program that will halt?" (Ken Thompson's suggestion is still interesting, though.)

Re:bypassing Thompson's trojan is simple (1)

jimstapleton (999106) | about 7 years ago | (#21042537)

Simple - one of the fields in computing is pattern recognition. Compilers follow patterns as well. If the trojan is trained or programmed to detect certain code or binary elements commonly found in compilers, then it can affect more than one compiler.

I'm not saying it's easy, I'm just saying it's possible.

Re:bypassing Thompson's trojan is simple (1)

hopeless case (49791) | about 7 years ago | (#21042897)

If you knew exactly how my obfuscator worked, you could probably write a compiler detector to defeat it. However, if I knew how your compiler detector worked, I could write an obfuscator to defeat that. The cycle could then repeat.

Which activity, though, is easier to do? I don't know how to prove it, but I think obfuscation is far easier than detection.

As the Anonymous Coward replying to me pointed out, writing a program that can always detect when another program is a compiler is as hard as detecting when another program is guaranteed to halt. In other words, it's undecidable.

Re:bypassing Thompson's trojan is simple (1)

jimstapleton (999106) | about 7 years ago | (#21043053)

A detector would definitely be harder than an obfuscator. But look how that worked out for MS and security.

100% success may be virtually impossible, but 90% is probably significantly easier, and nearly as dangerous.

Re:bypassing Thompson's trojan is simple (1)

tepples (727027) | about 7 years ago | (#21044849)

Writing a source code obfuscator (capable of defeating the compiler trojan's test) is much easier than writing the source to a compiler, and a great deal easier than hand composing a compiler binary.
How do you know that the Trusted source code obfuscator binary that you downloaded isn't leaving a back door for the compiler to recognize code whose variable names have been obfuscated in this way?

Re:O RLY? (1)

YU Nicks NE Way (129084) | about 7 years ago | (#21041923)

Read the GP post again. Carefully. You have the source, Luke -- and, on the basis of your inspection, you missed the second-order instance of the problem of Trusting Trust.

(I don't know if the GP meant his or her post to be a direct attack on the frequent comment that "well, you have the source and can inspect it, after all", but if he or she did, congrats.)

Re:O RLY? (1)

smilindog2000 (907665) | about 7 years ago | (#21040793)

Ha, you're the first person I've heard mention this idea since the early '80s! Here's another similarly old, interesting factoid I've heard about the C compiler: the ASCII character set is no longer defined anywhere in the C compiler source code (which is written in C). In other words, '&' compiles to decimal 38 only because existing binary compilers know how to translate the '&' character constant.
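The bootstrapping point can be illustrated with a toy translator (a hypothetical sketch): the number 38 never appears in its source; it defers to the host implementation, just as a self-hosting C compiler defers to the already-compiled compiler binary for character-constant values.

```python
# Toy illustration of the bootstrapping factoid above (hypothetical sketch):
# this "translator" never writes down 38 anywhere -- it asks the *host*
# implementation (Python's ord(), standing in for the already-compiled
# C compiler binary) what the character constant means. The numeric
# character set lives only in the chain of earlier binaries.

def translate_char_constant(literal: str) -> int:
    """Turn a source-level character constant like "'&'" into its code."""
    assert literal.startswith("'") and literal.endswith("'")
    return ord(literal[1])  # defer to the host's notion of the charset

print(translate_char_constant("'&'"))  # 38 on an ASCII host
print(translate_char_constant("'A'"))  # 65
```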

Re:O RLY? (3, Interesting)

ilikejam (762039) | about 7 years ago | (#21040795)

A sufficiently motivated whatnow?

Re:O RLY? (1)

denison (735014) | about 7 years ago | (#21041349)

For example, a sufficiently motivated [slur] could painstakingly review the machine code of the untrusted compiler before using it.
I actually read the quoted text. I was quite surprised to discover that only a certain class of user could painstakingly review the machine code. The original Wikipedia text was never, as far as I can tell, defaced. So, the posting AC is a wanker.

Re:O RLY? (3, Insightful)

YU Nicks NE Way (129084) | about 7 years ago | (#21041973)

Either a wanker or an extremely clever commenter on the true value of human inspection. I suspect the poster was a wanker, but, oh, my, do I hope that he or she was extremely clever.

Re:O RLY? (1)

notaspunkymonkey (984275) | about 7 years ago | (#21043201)

Totally off topic: I was recently involved in a test at work to trial some new software. 180 users were sent a document detailing how to install and configure a VPN application. The instructions contained some bad steps which, if followed to the letter, would block HTTP access. Of the 180 users installing the software, only 3 reported the problem - 177 people did not read the instructions, or read them but did not follow them!

Re:O RLY? (1)

Albio (854216) | about 7 years ago | (#21045851)

There is a chance that the users noticed the problem and then found the "correct" way to install the software and didn't bother reporting it.

Re:But Linux is already trusted. (4, Insightful)

MyLongNickName (822545) | about 7 years ago | (#21040387)

But Linux and most Linux programs are already more "trusted" than Windows can ever be. Being open source, how can you not trust it?

Did you even read the summary? Or were you just going for first post?

This is about locking down the workstation so that users can't monkey around. I do not care how well the code is written, a malicious user can create a security issue if he/she has the ability to do so.

Re:But Linux is already trusted. (0)

Anonymous Coward | about 7 years ago | (#21042609)

Trusted computing, as defined by the folks who are pushing for it, applies to ALL machines EVERYWHERE, including those in your home. The idea is that some 3rd party can monkey with your machine regardless of whether you happen to approve of them doing so or not. One of the core ideas of 'trusted computing' is that all machines would render a unique i.d. upon the request of certain third parties, making anonymity an impossibility. It would also allow these third parties to search your computer to make certain that you aren't in possession of data or programs which could possibly violate i.p. laws in some countries.

It isn't about the owner being able to trust his machine; it's about other people being able to query, or even modify, a machine without the owner's consent. In 'trusted computing' it's explicitly the owner who ISN'T trusted. It's about the machine trusting other people, who aren't the owners, over the desires of the person who supposedly owns said machine.

Any argument for the implementation of so-called 'trusted computing' is either inherently evil or incredibly stupid.

Re:But Linux is already trusted. (1)

xeoron (639412) | about 7 years ago | (#21045999)

Good news everyone: with the GNU/Linux virtualization stack, MS Windows can be trusted too...

If the owner controlls all the keys, its fine (5, Informative)

jonwil (467024) | about 7 years ago | (#21040339)

There is nothing wrong with hardware assisted security if the owner controls all the keys and nothing can touch the trusted hardware without the owner specifically installing it (i.e. logging in as root/administrator and changing things).

Trusted Computing is only bad if the owner of the hardware does not have control over the software on the machine, the hardware keys etc.

Re:If the owner controlls all the keys, its fine (1)

arivanov (12034) | about 7 years ago | (#21040693)

Err...

I would say that the owner should be allowed to do anything he likes provided that he cannot fake the keychain.

Example: in a pre-baked trusted environment, when accessing resource A I sign with a chain which shows that it is done by me, through software X on kernel Y and hardware Z.

I should not be allowed to fake kernel Y, but there should be nothing to prevent me from installing an alternative signed kernel Y1. Similarly, I should be able to run Y on Z1 or X1 on Y as long as the chain is correctly reported when accessing A.

In other words no tivoisms at least for consumer systems. Z should not be able to prevent me from running Y1, Y should not be able to prevent me from running X1. It should be the access control on resource A which says "I do not like the (Z)(Y1)(X) chain you use, in order to access me you need (Z)(Y)(X) or (Z)(Y2)(X)".
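The model in this comment can be sketched as follows (hypothetical; the chain labels Z/Y/X are taken from the comment): the platform reports the chain truthfully and never blocks which kernel runs; only resource A's owner decides which chains to accept.

```python
# Hypothetical sketch of the "no tivoisms" access model described above:
# the hardware never refuses to run a kernel; the *resource owner's* policy
# decides which (hardware, kernel, software) attestation chains it accepts.

ALLOWED_CHAINS = {
    ("Z", "Y", "X"),   # hardware Z, stock kernel Y, software X
    ("Z", "Y2", "X"),  # alternative kernel Y2, also acceptable to A's owner
}

def access_resource_a(chain: tuple) -> bool:
    """Resource A's policy check: accept only chains its owner whitelisted."""
    return chain in ALLOWED_CHAINS

print(access_resource_a(("Z", "Y", "X")))   # True
print(access_resource_a(("Z", "Y1", "X")))  # False: A's owner rejects Y1
```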

Excuse me but how do I get it signed? (1)

js_sebastian (946118) | about 7 years ago | (#21040981)

I should not be allowed to fake kernel Y, but there should be nothing to prevent me from installing an alternative signed kernel Y1.
Excuse me, but how exactly do I get the Linux kernel I compiled myself signed? Oh yes, I pay a tax to some organization and wait to see if they give me permission in a few months... I don't want hardware that requires anyone's permission but my own to run what I want.

It should be the access control on resource A which says "I do not like the (Z)(Y1)(X) chain you use, in order to access me you need (Z)(Y)(X) or (Z)(Y2)(X)".
If you want me to access resource A only on hardware Z with system Y1 and software X, give me an appropriate locked down system YOU own with the decryption keys for A. Don't try to make the rest of the computing world pay for the costs of your security. And if you are talking about DRM for media, forget it, it is not here to stay.

Re:Excuse me but how do I get it signed? (3, Interesting)

arivanov (12034) | about 7 years ago | (#21041587)

Excuse me but how exactly do I get my linux kernel i compiled myself signed?

Self-sign it. It is not the fact that it is signed, it is who signed it that matters. From there on, an access request goes down the chain with everyone signing it. The access control for A may like your self-signed kernel. Similarly, it may not, and it will then invalidate everything down the chain from it as untrusted. It is the choice of A's "owner".

And if you are talking about DRM for media, forget it, it is not here to stay.

You have mistaken me for someone who gives a fuck about signed MP3s. Now, a document sitting on a corporate CMS, encrypted individually on every release and with an associated cert chain for each revision, is something I do care about. A lot. A lost laptop in this case no longer means stolen data. The entire problem of document access control also more or less goes away. Same for revision and change control. While it is a hassle, it solves quite a few real world problems.

Re:Excuse me but how do I get it signed? (1)

bob.appleyard (1030756) | about 7 years ago | (#21041859)

While it is a hassle it solves quite a few real world problems.
Such as that pesky whistleblower...

The user should be able to swap kernels... (1)

Rix (54095) | about 7 years ago | (#21043631)

Without informing anyone. External entities should be free to *request* specific support software, but the user should always have the right to override that request.

Re:If the owner controlls all the keys, its fine (1)

Henry V .009 (518000) | about 7 years ago | (#21040749)

Trusted Computing is only bad if the owner of the hardware does not have control over the software on the machine, the hardware keys etc.
It's not always bad even then. It depends who the owner of the machine is. If the owner is someone who is easy to socially engineer (90% of users, I'm sure -- Come look at the dancing bears!), then a behemoth corporation is in effect the system administrator for all those people, and locking down machines by allowing only signed applications can make sense. Most people aren't computer savvy and pretending that they are isn't a sane way to make decisions about the computer ecosystem.

Re:If the owner controlls all the keys, its fine (1)

JohnFluxx (413620) | about 7 years ago | (#21041517)

Oh come on, clearly in that case, and in this context, the owner of the computer is the organisation.

Re:If the owner controlls all the keys, its fine (1)

Henry V .009 (518000) | about 7 years ago | (#21041607)

Is it? I may have been talking about Microsoft and 90% of home users.

Re:If the owner controlls all the keys, its fine (1)

js_sebastian (946118) | about 7 years ago | (#21040771)

There is nothing wrong with hardware assisted security if the owner controls all the keys and nothing can touch the trusted hardware without the owner specifically installing it (i.e. logging in as root/administrator and changing things). Trusted Computing is only bad if the owner of the hardware does not have control over the software on the machine, the hardware keys etc.
Exactly.
And in the example mentioned in the summary...

offering a locked-down environment offers several advantages to system administrators with possibly troublesome users.
...the system administrator (in fact the company he works for) is the owner of the system and has every right to restrict the user's use of the computer, but wants full freedom in how he himself configures those systems.
The current "trusted" computing solutions would restrict the administrator too, because the system trusts some key-issuing authority instead of its legitimate owner.

Re:If the owner controlls all the keys, its fine (1)

swillden (191260) | about 7 years ago | (#21041343)

The current "trusted" computing solutions would restrict the administrator too, because the system trusts some key-issuing authority instead of it's legitimate owner.

This isn't correct.

The only use of the third party-issued certificate is for remote attestation, where the computer proves that it has a trusted computing module. You can use that capability to build highly-secure remote control, but that's entirely a function of what application software you layer on top, it's not inherent in TC.

Re:If the owner controlls all the keys, its fine (1)

Rich0 (548339) | about 7 years ago | (#21044199)

Remote attestation should be able to be defeated by any system owner. Maybe it is for interoperability. Maybe it is for testing. Maybe the owner wants to do something else.

I'd advocate that anybody who buys a computer should be given:

1. A dump of all the keys embedded in their system.
2. A dump of any private keys associated with any public keys embedded in their system.

What they do with them is up to the owner. If they want to download additional keys (such as root CA certificates) they should be welcome to do so. However, trust should be up to the computer owner - not the manufacturer.

Re:If the owner controlls all the keys, its fine (1)

feed_me_cereal (452042) | about 7 years ago | (#21042403)

The authority is there for the same purpose certificate authorities serve on the internet right now: as an independent trusted 3rd party to validate keys. That's it. If a computer with a TPM wishes to prove to someone that it is running the software it says it is, you can trust it because the authority says the key it's using is valid. How does this limit what an administrator can do? If anything, it increases what an administrator can do. Now people will trust that said administrator is running a TPM and isn't lying about the software running on the computer.
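The flow this comment describes can be sketched roughly as below, with HMACs standing in for the real public-key signatures (all names are hypothetical; an actual TPM uses asymmetric keys, so a verifier needs only the public halves and the CA's certificate):

```python
# Rough sketch of remote attestation with a CA-validated key. HMAC is a
# stand-in for real signatures, so in this toy the verifier shares the
# secrets; real TPMs use asymmetric crypto instead.
import hashlib
import hmac

CA_SECRET = b"certificate-authority-key"       # held by the key-issuing authority
TPM_SECRET = b"this-machines-attestation-key"  # burned into this machine's TPM

# At manufacture time, the CA endorses the machine's attestation key once.
endorsement = hmac.new(CA_SECRET, TPM_SECRET, hashlib.sha256).hexdigest()

def attest(software_state: bytes) -> str:
    """The TPM 'signs' a measurement of the currently loaded software."""
    return hmac.new(TPM_SECRET, software_state, hashlib.sha256).hexdigest()

def verify(software_state: bytes, quote: str, cert: str) -> bool:
    """A remote party trusts the quote only if the CA endorsed the key."""
    key_is_endorsed = hmac.compare_digest(
        cert, hmac.new(CA_SECRET, TPM_SECRET, hashlib.sha256).hexdigest())
    expected = hmac.new(TPM_SECRET, software_state, hashlib.sha256).hexdigest()
    return key_is_endorsed and hmac.compare_digest(expected, quote)

quote = attest(b"kernel=Y app=X")
print(verify(b"kernel=Y app=X", quote, endorsement))   # True
print(verify(b"kernel=Y1 app=X", quote, endorsement))  # False
```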

Re:If the owner controlls all the keys, its fine (0)

Anonymous Coward | about 7 years ago | (#21042737)

Now people will trust that said administrator is running a TPM and isn't lying about the software running on the computer.

Now MS-backed sites will trust that said administrator is using IE and not Firefox.
Now trailer servers will trust that said administrator is using WMP/DRM++ and not mplayer.
Now Office computers will trust that said administrator is using MSoffice2008/Professional Ultimate Experience Edition and not OO.
Now Outlook clients will trust that said administrator is using MS Exchange and not exim.

See the potential for entrenching a monopoly here? If "people" can enforce the use of software on other computers, then they can leverage any gains from the network effect immensely.

Re:If the owner controlls all the keys, its fine (1)

feed_me_cereal (452042) | about 7 years ago | (#21043717)

This is wholly another issue from security, and not one that I was talking about. Anyway, I wouldn't put a lot of stock in your above scenarios. First off, they're very illegal. This isn't an issue of software compatibility keeping a competitor out; it's now, literally, keeping a competitor out. Microsoft has already been sanctioned for the former; the latter would undeniably be trustbuster time (unless Ron Paul gets elected).

Secondly, regardless of whether or not Microsoft would do such a thing, even if they require someone to be running a TPM to view a webpage, they've already blasted the hell out of any semblance of a web standard. And before you assume that I'm not aware of their current attempts at destroying web standards, don't; I am, and they're nothing close to what you're predicting, but this would basically mean the W3C is dissolved.

Besides all of this, trusted computing, as it currently stands, can't do anything close to this. What you're able to certify is that a computer has a certain set of software loaded, not whether IE is accepting your http request or not. A server would have to deny access based on the fact that Firefox is running on your computer somewhere, which is pretty silly. Besides, if people wanted to, they could be making steps towards this now. A site could simply deny access based on the fact that someone's browser identifies as Firefox. Of course, this doesn't necessarily keep people out, but the fact that hardly anyone does something this stupid is why your scenario is fairly unlikely.

Lastly, you're already fucked anyway. TC is coming whether you like it or not. You can't stop a technology. Your best bet is to prepare. That is, if you're truly afraid of this scenario happening, vote for politicians that are willing to regulate monopolies. You can't stop people from researching this stuff.

Re:If the owner controlls all the keys, its fine (1)

Rich0 (548339) | about 7 years ago | (#21044355)

Besides all of this, trusted computing, as it currently stands, can't do anything close to this. What you're able to certify is that a computer has a certain set of software loaded, not whether IE is accepting your http request or not.

Sure they can. Just ask for attestation that the Super-DRM-Windows OS is running. Then connect to the port used by Super-DRM-Windows and ask for an attestation that IE owns the outbound connection that is hitting your webserver. Super-DRM-Windows won't let any other software listen on the port it uses, and uses SSL to encrypt and authenticate all its communications.

Now the remote webserver knows for sure that you're running IE. If the computer owner doesn't have the keys embedded in their TCPM chip, they can't do anything about this - if they install another OS, then the TCPM attestation will indicate this. The best you can do is find a flaw in Super-DRM-Windows, but then everybody will just require attestation that you're running Super-DRM-Windows v2.

Most likely this won't be used for browser monopolies at first - just playback of digital media (sorry - no more media on Linux). But I'm sure browsers will be regulated at some point. MS only allows you to run Windows Update from IE - mostly through the use of IE-only features. However, at some point a clueless bank will want to keep their customers secure and only allow logins from people running well-patched machines - but of course this will be a well-patched WINDOWS machine, or maybe a Mac, and nobody will think about customers who would rather not dual-boot Windows. Oh, forget using VMware - that won't work. And dual-booting might not work either if the bootloader isn't trusted. You might have to install grub or lilo on a CD to boot the OS on your hard drive...

Re:If the owner controlls all the keys, its fine (1)

Rich0 (548339) | about 7 years ago | (#21044455)

TC is coming whether you like it or not. You can't stop a technology. Your best bet is to prepare. That is, if you're truly afraid of this scenario happening, vote for politicians that are willing to regulate monopolies. You can't stop people from researching this stuff.

Nobody wants to stop technology. We just want to make sure it is properly used. The simplest solution is to require that every computer include a printed copy of any and all keys embedded in it, and ideally any private keys associated with any public-keys installed on it. That will keep everybody honest. If TC gets out of control people will write software that lets an average owner just type in their machine key and then any onerous component of TC can be bypassed. You'd still be able to protect from viruses since the average computer owner won't go typing in a 200-character hex key to run some cute program a friend emailed them.

I'd love having TC on my computer if I COULD TRUST IT. A hardware security mechanism that works FOR me is great - one that works AGAINST me is not.

Re:If the owner controlls all the keys, its fine (1)

feed_me_cereal (452042) | about 7 years ago | (#21043771)

additionally...

Now Office computers will trust that said administrator is using MSoffice2008/Professional Ultimate Experience Edition and not OO.


This one can't happen. No one is policing *your* computer, the worst that could theoretically happen is someone *else* will deny service to you if you're not running the right software. If you have the word file, no one can stop you from reading it.

Re:If the owner controlls all the keys, its fine (1)

Rich0 (548339) | about 7 years ago | (#21044573)

This one can't happen. No one is policing *your* computer, the worst that could theoretically happen is someone *else* will deny service to you if you're not running the right software. If you have the word file, no one can stop you from reading it.

Have you read about Palladium? Software will be able to tell the OS to keep a file in "protected storage" where other programs can't read it. Sure, you have the file, but it is encrypted. Most of the OS partition will also be encrypted. The decryption key will be stored in a chip with instructions to only yield the key to a program that has a given hash. The whole system is designed to make it VERY difficult to bypass - even with physical access to the machine. You'd need to tear apart the TCPM chip to get the key out. Fileservers hosting documents will only allow machines with secure OSes to download them - directly to protected storage.
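The "protected storage" behaviour described here can be modeled in a few lines (a hypothetical toy, not the real Palladium/NGSCB or TPM API): the chip releases a file's decryption key only to a program whose hash matches the one recorded when the file was sealed.

```python
# Toy model of sealed storage: the chip hands out a file's decryption key
# only to a program with the approved hash. Hypothetical sketch only.
import hashlib
from typing import Optional

class ToyTPM:
    def __init__(self) -> None:
        self._sealed = {}  # file_id -> (approved program hash, decryption key)

    def seal(self, file_id: str, program: bytes, key: bytes) -> None:
        """Record which program is allowed to receive this file's key."""
        self._sealed[file_id] = (hashlib.sha256(program).digest(), key)

    def unseal(self, file_id: str, program: bytes) -> Optional[bytes]:
        """Yield the key only to a program with the approved hash."""
        approved_hash, key = self._sealed[file_id]
        if hashlib.sha256(program).digest() == approved_hash:
            return key   # the approved viewer gets the key
        return None      # any other reader (e.g. a hex editor) is refused

tpm = ToyTPM()
tpm.seal("report.doc", b"approved-viewer-v1", b"secret-key")
print(tpm.unseal("report.doc", b"approved-viewer-v1"))  # b'secret-key'
print(tpm.unseal("report.doc", b"hex-editor"))          # None
```

A real chip enforces this in hardware, which is why "you have the file" no longer means "you can read the file".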

The design isn't actually that difficult. Sure, like all DRM it is vulnerable to physical-level attacks, but those can be made very expensive. And if a given model of TCPM chip turns out to be vulnerable it will be removed from the chain of trust - no new multimedia for anything running on those systems.

The solution is simple - require anybody selling a computer to give a paper copy of any keys embedded in the hardware. Most people won't use them, but if TC gets out of control then owners can download programs and type in their keys and bypass the whole thing (or keep only the parts they want).

Re:If the owner controlls all the keys, its fine (0)

Anonymous Coward | about 7 years ago | (#21040851)

You're entirely correct -- but when the owner of the computer has control of all of the code and encryption keys, it is no longer called "Trusted Computing."

Trusted Computing is used specifically to refer to cryptographic restrictions over which the user has no control. When the user has control, it is simply "encryption" and "sandboxing."

Thus, the marketing entities associated with Trusted Computing(*) would like nothing more than for us to accept Trusted Computing conditionally, on the condition that we have the encryption keys for our own computers. Such conditional acceptance is bound to backfire, and any such trust in Trusted Computing is guaranteed to be betrayed -- because the entire impetus behind Trusted Computing is the ability of media and software developers to be able to control media and software after it has left their hands and entered yours.

(*) I don't believe you are such an entity, but it's important to bring up these points when the issue of key control arises.

Re:If the owner controlls all the keys, its fine (2, Informative)

Anonymous Coward | about 7 years ago | (#21041691)

Trusted Computing is only bad if the owner of the hardware does not have control over the software on the machine, the hardware keys etc.


The only problem is that the whole point of Trusted Computing is to keep the keys used to attest to the state of the PCR completely unavailable to the user. Read the spec: https://www.trustedcomputinggroup.org/specs/TPM/ [trustedcom...ggroup.org]

Re:If the owner controlls all the keys, its fine (1)

Rich0 (548339) | about 7 years ago | (#21044643)

You're missing the point. Of course that is what those pushing TC WANT the system to do. It shouldn't be allowed to take off.

There is a lot in TC that is good. You could potentially eliminate viruses, for example, with a hardware-backed chain of trust. The issue is that the chain of trust should lead back to the computer owner - not the computer manufacturer.

Re:If the owner controlls all the keys, its fine (1)

hasmah (1162821) | about 7 years ago | (#21042327)

There is nothing wrong with hardware-assisted security if the owner controls all the keys and nothing can touch the trusted hardware without the owner specifically installing it (i.e. as root/administrator). Trust is perceived by the system's receiver or user, not by its developer, designer, or manufacturer. As a user you may not be able to evaluate that trust directly; you may trust the design, a professional evaluation, or an opinion, but in the end it is your responsibility to sanction the degree of trust that you require. So the owner of the hardware must have control over the software on the machine, or set up all protection mechanisms so the user can trust the system.

Re:If the owner controlls all the keys, its fine (1)

hasmah (1162821) | about 7 years ago | (#21042479)

There is nothing wrong with hardware-assisted security if the owner controls all the keys and nothing can touch the trusted hardware without the owner (i.e. root/administrator) specifically installing it. As a user, you may not be able to evaluate that trust directly; you may trust the design, a professional evaluation, or an opinion, but in the end it is your responsibility to sanction the degree of trust that you require. So the administrator must set up all the protection mechanisms within the computer system, including the hardware, firmware, and software that together enforce a unified security policy, for trusted computing to be good.

The biggest problem with "trusted computing" (-1, Troll)

WCMI92 (592436) | about 7 years ago | (#21040359)

Is that YOU aren't trusted. Sorry, I won't ever accept NOT being the absolute owner of my own computers, which means having complete control.

Who is YOU? Non Free is the real problem. (0, Troll)

Erris (531066) | about 7 years ago | (#21040753)

I won't ever accept NOT being the absolute owner of my own computers

That's good, but at work it's not your computer is it? The level of control you have over your computer at work is proportional to the intelligence of your employer. If you are unfortunate enough to work for a big dumb company, you will be fired for exercising your software freedom in any way. A less stupid company that uses free software will be able to give you the tools you need to get your job done without giving you complete control of your computer. Some workers need more freedom than others. Ultimately, the things the company needs to protect should only be accessible by people and machines that won't leak. Figuring out what really needs to be protected is the tricky part, but all of it should drive every company to free software.

The real problem with "trusted" computing is that it can force use of untrustworthy software and defeat its original purpose. No company should ever trust its real secrets to non-free software. Control is lost when you have to "trust" a third party that keeps secrets from you. If you are using Windoze, you might as well email the information to Bill Gates.

What kind of secrets does your company actually have? There's customer information, location and movement of valuables, business plans and a host of other information that can be harmful to divulge.

None of this is an excuse to cut into your software freedom at home or even at work. It's just a problem of collective action and responsibility. When you work for a company, there are suddenly a lot of noses at the end of your arm.

Re:Who is YOU? Non Free is the real problem. (1)

dedazo (737510) | about 7 years ago | (#21046815)

Hi twitter [slashdot.org] .

If you are unfortunate enough to work for a big dumb company, you will be fired for exercising your software freedom

I'm having trouble understanding what you mean by "software freedom". Computers are provided by employers to manage tasks and handle data related to your function within the organization. Where exactly does your freedom come into play there? And what does free software do there that "Windoze" doesn't?

What kind of secrets does your company actually have? There's customer information, location and movement of valuables, business plans and a host of other information that can be harmful to divulge.

You don't say.

None of this is an excuse to cut into your software freedom at home or even at work.

Sure it is, if it's company-provided hardware. You really have never had a job at a real company, have you?

You sound like those (former) disgruntled employees at the "big dumb stupid" companies that won't let you exercise your "freedom of speech" by letting you install Kazaa and BitTorrent on the laptop they gave you to do your job. Down with the man!

Re:The biggest problem with "trusted computing" (1)

Tuoqui (1091447) | about 7 years ago | (#21047045)

Parent is not Troll.

And yeah, Trusted Computing is about not trusting the user. Don't think these companies aren't going to get together at some later date, once we're all stuck in the Trusted Computing format, say 'We know what is best for you', and lock us all down. Kiss Open Source goodbye, because someone will make the argument that Linux can't be trusted because it's Open Source, and a PHB at one of these hardware companies will (stupidly) agree.

Huh? (2, Insightful)

fitten (521191) | about 7 years ago | (#21040363)

With the absence of proprietary code in the mix users will find themselves more inclined to trust their own administrators to make the best choices


Proof of this statement?

Re:Huh? (-1, Flamebait)

Anonymous Coward | about 7 years ago | (#21040425)

it's just bullshit. the open sores idiots still don't understand that the source being opened or closed means little or nothing to the mass. they have this delusion that makes them come off like the zealots that they are.

What does Trusted Computing mean? (0, Redundant)

Ed Avis (5917) | about 7 years ago | (#21040399)

As I understand it, the meaning of Trusted Computing is not that the system administrator will be able to provide a locked-down system. That has long been available with ordinary security measures on Linux and other systems. Rather it means that even the system administrator - even the owner of the computer - will not be able to make the computer do what he wishes rather than what the record industry or movie studios want it to do. This is done by Intel or others supplying some special hardware which won't reveal its private encryption key unless it detects that authorized, signed code is running. Not authorized by the legitimate owner of the computer who is Intel's customer - no, that wouldn't do at all. Rather, that the code is signed by some third party such as Microsoft and there is a secure boot sequence to prevent 'tampering' (i.e., the computer's owner trying to reprogram his or her system).

I don't think the author of the article has understood what Trusted Computing means at all. He is just talking about thin clients and locked-down systems in school environments, which is not really the same thing.

Re:What does Trusted Computing mean? (1)

MyLongNickName (822545) | about 7 years ago | (#21040535)

I thought the same thing when I first read it. However, it is entirely possible that there are simply two different definitions for the same phrase. Anyone with a better insight on this?

Re:What does Trusted Computing mean? (1)

foobsr (693224) | about 7 years ago | (#21041155)

They are different views of the same thing. Corporate entities (e.g. M$) put the (marketed) emphasis on 'trust', while those concerned with freedom (e.g. the EFF) emphasize the possibility of 'control'. Now decide whom you 'trust'.

CC.

Re:What does Trusted Computing mean? (2, Informative)

cpuh0g (839926) | about 7 years ago | (#21040835)

You do not understand trusted computing. It is not about locking down your system.

It is a common fallacy that the primary goal of trusted computing is to enable DRM so the movie studios/RIAA controls your computer. This is simply not true. Trusted computing provides methods by which you, the owner and administrator of your computer, can KNOW, by having a chain of trust that is anchored by keys securely stored on a TPM chip soldered to the motherboard, that the software and hardware in your system has not been tampered with. One *could* use this to enable DRM or other user-unfriendly schemes, but there are many other use cases for trusted computing. Think e-commerce where you can verify the other system and it can verify yours to make sure neither end has been compromised prior to making a transaction.

Policy decisions are made based on the measurements that are returned by the verification process. Trusted Computing does not dictate the policies. If someone (or some company) wants to abuse the system and lock people out of their systems, then that would be bad policy and a bad implementation of TC concepts, but it doesn't mean that all TC applications are bad or are designed to restrict the user's ability to manage their systems as they see fit.
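The measurement chain described above can be sketched in a few lines. This is purely illustrative (toy function names, SHA-1 for brevity; it is not the TPM API): each component is hashed into a register before it runs, so the final value reflects everything that was loaded, and any tampering anywhere in the chain changes the result.

```python
import hashlib

def extend(pcr: bytes, component: bytes) -> bytes:
    """TPM-style extend: new PCR = H(old PCR || H(component))."""
    return hashlib.sha1(pcr + hashlib.sha1(component).digest()).digest()

# Boot chain: each stage measures the next before handing over control.
pcr = b"\x00" * 20  # PCRs start zeroed at power-on
for component in [b"firmware", b"bootloader", b"kernel"]:
    pcr = extend(pcr, component)

# Changing any single component yields a completely different final value.
tampered = b"\x00" * 20
for component in [b"firmware", b"bootloader", b"evil-kernel"]:
    tampered = extend(tampered, component)

assert pcr != tampered
print(pcr.hex())
```

Because extend is one-way and order-sensitive, the final register value can stand in for "this exact software stack booted", which is what the policy decisions are then based on.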

Re:What does Trusted Computing mean? (1)

Hatta (162192) | about 7 years ago | (#21041225)

Think e-commerce where you can verify the other system and it can verify yours to make sure neither end has been compromised prior to making a transaction.

I'm thinking about it, and I don't like it. I can do all my e-commerce today with a free and open system. If my bank demanded that my OS/browser be signed by some certificate authority, I couldn't do that. I can't think of any use of this technology that doesn't hurt the software hobbyist.

Re:What does Trusted Computing mean? (1)

Ed Avis (5917) | about 7 years ago | (#21041981)

Yes, there are good uses and bad uses. The technology can certainly be put to work for the user's benefit. Indeed, most digital rights management is altruistic in some sense, since it prevents the user from accidentally infringing copyright and perhaps even committing a crime, which they surely would not want to do.

The fundamental argument is not whether good or bad policies are possible, but about freedom and whether you have control over your own computer. If doing e-commerce, can I program my computer to lie and send back a response saying it is not tampered with even when I have changed the software? If I cannot do this, then I no longer have control over the computer and it is no longer my computer. However, the other end of the e-commerce transaction would be foolish to rely on this no-tampering check. Even if ordinary users cannot break the security on the TPM module, a determined criminal organization probably could.

Re:What does Trusted Computing mean? (1)

cpuh0g (839926) | about 7 years ago | (#21042225)

If doing e-commerce, can I program my computer to lie and send back a response saying it is not tampered with even when I have changed the software? If I cannot do this, then I no longer have control over the computer and it is no longer my computer.

If you *CAN* do what you describe, then your system cannot and should not be trusted in a trusted computing transaction. Providing a provable, secure chain of trust is the fundamental reason for having a TC base. If you can arbitrarily corrupt this chain by "programming your computer to lie", then all bets are off and the trust model is irrevocably broken.

Perhaps the e-commerce use case is not the best example. Perhaps TC will never be acceptable on personal computers for general purpose uses. However, there are business cases where neither party has reason to trust the other 100% without a verifiable chain of trust measurements that can be validated. In those situations, a TC transaction is perfectly reasonable and highly desirable.

I would never say "never", but in general TPMs, and HSMs as a class, are resistant to attacks by even the most determined criminals. There will be bugs and there will be exceptions on rare occasion, but they are the best that the industry has to offer at this time. Assume that if they have passed the strict reviews required to be used by the NSA, CIA, foreign governments, and the financial industry, they are pretty fucking solid and tamper proof.

Re:What does Trusted Computing mean? (0)

Anonymous Coward | about 7 years ago | (#21042951)

You do not understand trusted computing. It is not about locking down your system.

It is a common fallacy that the primary goal of trusted computing is to enable DRM so the movie studios/RIAA controls your computer. This is simply not true.


If that's true, why do TC chip makers refuse to consider the option of Owner override [linuxjournal.com] ? That would leave you in full control, while voiding any third party's attempt to lock down your system.

Re:What does Trusted Computing mean? (2, Informative)

Cyberax (705495) | about 7 years ago | (#21040889)

No, "trusted computing" means that hardware can guarantee the integrity of the environment. For example, I'd like to use TPM chip in my Thinkpad to guarantee that my machine will boot only kernels signed with MY key. Also, I very much like the hardware keyring.

Trusted computing is only a problem when YOU are not the owner of the machine and don't have full control over the TPM module on a new computer (of course, once the TPM is set up, it shouldn't be possible to change it without the owner's keys).

Re:What does Trusted Computing mean? (1)

Hatta (162192) | about 7 years ago | (#21041319)

Trusted computing is only a problem when YOU are not the owner of the machine

i.e. when you're using services over a network. What happens when microsoft pushes their TPM out and people get used to serving pages only to trusted peers? You thought "this site only works in IE" was bad? Try "this site is cryptographically impossible to read without a full trusted IE/windows system" And it's done all in the name of security.

Re:What does Trusted Computing mean? (1)

Cyberax (705495) | about 7 years ago | (#21041485)

MS has already done it with Vista x64 - it doesn't allow you to install unsigned drivers. TPM will also allow them to be sure that the kernel is not tampered with during startup. But I don't think it adds much security for evil DRM schemes.

But personally, I'd like to have the same capability, to be sure my system has not been tampered with by the NSA when they examine my laptop at the airport :)

I'm completely new to this TCM thing... (1)

TheVelvetFlamebait (986083) | about 7 years ago | (#21041721)

... not to mention relatively clueless about encryption principles. Sorry if the following questions are glaringly obvious.

How does it work? How will it affect my machine if enabled (i.e. will I notice?)? Could an OEM (I hear Microsoft is distributing PCs nowadays) theoretically set up the TPM to lock down a system pre-purchase? What happens when the TPM blocks something/notices a different checksum?

Re:I'm completely new to this TCM thing... (3, Insightful)

Cyberax (705495) | about 7 years ago | (#21041899)

The TPM in Thinkpads stores private/public keys in secure hardware storage.

The kernel is signed and the hardware bootloader checks that the signature is valid (using the TPM). So we can at least guarantee that the system is in a consistent state during kernel loading. Later we can use numerous methods to control kernel integrity (SELinux, AppArmor, etc.).

Theoretically, Microsoft can make you use the TPM to validate their kernel during booting (because a tainted kernel can be used to circumvent DRM).

So we just need to be able to turn off the TPM chip if it's not required.
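A minimal sketch of the boot-time check described above. This is illustrative only: a real implementation verifies a cryptographic signature with keys held in the TPM, whereas this toy version stands in for that with a hash allowlist the owner enrolled.

```python
import hashlib

# Measurements the owner enrolled when installing kernels; an illustrative
# stand-in for real signature verification against the owner's key.
approved = {hashlib.sha256(b"vmlinuz-owner-built").hexdigest()}

def boot(kernel_image: bytes) -> str:
    """Refuse to hand over control unless the kernel matches an enrolled measurement."""
    digest = hashlib.sha256(kernel_image).hexdigest()
    if digest not in approved:
        raise RuntimeError("kernel not signed with MY key; refusing to boot")
    return "booting kernel " + digest[:12]

print(boot(b"vmlinuz-owner-built"))   # enrolled kernel is accepted
try:
    boot(b"vmlinuz-tampered")         # modified kernel is refused
except RuntimeError as err:
    print("blocked:", err)
```

The point of the owner-controlled variant is exactly who populates `approved`: the machine's owner, not the OS vendor.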

Re:What does Trusted Computing mean? (1)

Kjella (173770) | about 7 years ago | (#21043087)

Trusted computing is only a problem when YOU are not the owner of the machine and don't have the full control over the TPM module

You mean like, all the time? Because you'll never know the TPM root key, so if there's any TPM'd operating system/application/content you'd like to use, there's no off switch. For building a secure network you just need things signed with your private key telling your master computer, which trusts your key. There's absolutely no need to build any PKI. Instead we got a global "trusted" root that makes sure the software can trust the host, not that the host can trust the software. It's the ultimate in usage restrictions: I can send you a document that you can't print and that will self-destruct in three days, and your options are to accept that or not receive it at all. Your computer is everyone else's bitch, can I put it simpler?

Re:What does Trusted Computing mean? (1)

Cyberax (705495) | about 7 years ago | (#21043229)

Why? I DO know my root key to TPM - I can view all stored keys and manipulate them. After all, it's not more than a hardware keystore and some validating code.

The goal of TPM is to build a secure HOST. I.e. the one which I can trust to be secure during all stages (for example, TPM can guarantee that a malicious hacker has not installed a backdoor into my kernel).

Re:What does Trusted Computing mean? (1)

tepples (727027) | about 7 years ago | (#21045033)

The goal of TPM is to build a secure HOST. I.e. the one which I can trust to be secure during all stages (for example, TPM can guarantee that a malicious hacker has not installed a backdoor into my kernel).
For values of "I" that represent the operating system publisher, this is correct. For example, Linux phones might use hardware similar to TPM to verify that they aren't running unauthorized apps.

No, you don't (1)

Kaseijin (766041) | about 7 years ago | (#21046327)

I DO know my root key to TPM - I can view all stored keys and manipulate them.
The TPM spec requires that the private endorsement and storage root keys never leave the device. If you have a compliant TPM, what you know is not the root key. If you know the root key, what you have is not a compliant TPM.
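That interface contract can be illustrated with a toy object (not the real spec, which uses RSA endorsement keys and certified key hierarchies): the device exposes operations that *use* the key, but never the key itself.

```python
import hashlib
import hmac
import os

class ToyTPM:
    """Illustrative only: key material is generated inside and never exported."""

    def __init__(self):
        self._endorsement_key = os.urandom(32)  # created inside the "chip"

    def quote(self, pcr: bytes, nonce: bytes) -> bytes:
        # Attest to a PCR state for a challenger's nonce; the key stays inside.
        return hmac.new(self._endorsement_key, pcr + nonce, hashlib.sha256).digest()

tpm = ToyTPM()
sig = tpm.quote(b"\x00" * 20, b"challenge-nonce")
print(len(sig))  # 32-byte attestation tag; the key never left the device
```

There is no `get_key()` method by design, which is the commenter's point: if you can read the root key out, the device is not behaving like a compliant TPM.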

Re:What does Trusted Computing mean? (0)

Anonymous Coward | about 7 years ago | (#21040945)

The absolutely brutal quality of the moderation taking place in this thread is a great argument for the importance of meta-moderation.

Trusted Computing is by definition closed. (4, Insightful)

Spy der Mann (805235) | about 7 years ago | (#21040551)

Or are the users getting their CPUs' source code and recompiling them? Or at least calling their LinCPUx fans to do it for them?

Trusted Computing requires trusting the CPU manufacturer in the first place. And in this world, where the telcos disclosed our conversations to the government and we didn't find out until several years later, can we really trust that the government hasn't pressured the CPU makers to add a backdoor here and there?

Trusted Computing is practically closed, and incompatible with the spirit of Open Source/Free Software. Ergo, Trusted Computing cannot be trusted. Sorry.

Re:Trusted Computing is by definition closed. (1)

Chrisq (894406) | about 7 years ago | (#21040743)

wire your own computer out of logic gates [win.tue.nl] !

Re:Trusted Computing is by definition closed. (1)

stinerman (812158) | about 7 years ago | (#21040809)

As others have commented, the gentleman in the article is using TC in a way that isn't the same as we have come to know it. It seems like he's talking about your admin having root access on your box, rather than the DRM controls. Since he's speaking about the former, this really isn't anything new. Most business users don't have admin access to their own PCs. This is standard practice.

In principle, there is nothing wrong with TC, so long as the owner of the PC has the private keys. But this scenario is little more than having root access to one's own box, which is the standard for most home users.

Re:Trusted Computing is by definition closed. (1)

swillden (191260) | about 7 years ago | (#21041689)

Trusted Computing requires trusting the CPU manufacturer in the first place.

Actually, TC has almost nothing to do with the CPU. The TC Trusted Platform Module (TPM) is a separate device, just another peripheral: typically a small chip attached to the motherboard over a low-speed bus.

Trusted Computing is practically closed, and incompatible with the spirit of Open Source/Free Software. Ergo, Trusted Computing cannot be trusted. Sorry.

Not true. TC is an open specification, and can be used to implement all sorts of different security policies. The TPM is just a peripheral that provides three services:

  • Hashing of data sent to it. Coupled with TC-aware BIOS this can be used to construct a hash that represents the boot state -- essentially a hash of all security-sensitive code that is running.
  • Binding of keys or other data elements to a specific hash state. Basically, the TPM will encrypt a key (or something) with a secret that combines the current hash value with an internal master key. That way you can only decrypt the stuff when you're booted into that "state" (have that hash value).
  • Remote attestation of state and other data elements. The TPM has a public/private key pair and the public key is certified by the TPM manufacturer. The TPM will use the private key to sign the current state along with some other data, and application software can then send this signed data and the cert to an external party.

That's it. Nothing to do with the CPU, and nothing inherently evil.
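A toy model of the second service, binding data to a hash state, can make the shape concrete. Everything here is an assumption for illustration: a real TPM derives wrap keys internally and never exposes its master key, and the XOR "cipher" below is a stand-in for real encryption.

```python
import hashlib
import hmac

MASTER = b"tpm-internal-master-key"  # never leaves a real TPM; toy stand-in here

def seal(secret: bytes, pcr: bytes):
    """Bind a secret to the current PCR value (toy XOR 'encryption' + MAC)."""
    wrap = hashlib.sha256(MASTER + pcr).digest()
    blob = bytes(a ^ b for a, b in zip(secret.ljust(32, b"\0"), wrap))
    return blob, hmac.new(wrap, blob, hashlib.sha256).digest()

def unseal(blob: bytes, tag: bytes, pcr: bytes) -> bytes:
    """Recover the secret only if the current PCR matches the sealing state."""
    wrap = hashlib.sha256(MASTER + pcr).digest()
    if not hmac.compare_digest(tag, hmac.new(wrap, blob, hashlib.sha256).digest()):
        raise RuntimeError("PCR state changed: refusing to unseal")
    return bytes(a ^ b for a, b in zip(blob, wrap)).rstrip(b"\0")

good_state = hashlib.sha1(b"trusted boot chain").digest()
blob, tag = seal(b"disk-encryption-key", good_state)
assert unseal(blob, tag, good_state) == b"disk-encryption-key"
try:
    unseal(blob, tag, hashlib.sha1(b"tampered boot chain").digest())
except RuntimeError:
    print("unseal refused in modified state")
```

Booted into a different software stack, the PCR value differs, so the wrap key differs and the secret stays locked, which is the whole point of binding keys to a boot state.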

Using this technology -- especially combined with the new virtualization-capable CPUs[*] -- you can construct all sorts of security policies and enforce them with *very* strong guarantees. Some people want to use this technology to build unbreakable DRM. Others want to make systems that are uncrackable, even if the attacker has root.

TC is just a tool, and like any tool it can be used for good or evil purposes.

[*] Where virtualization-capable CPUs come in is that with a hypervisor running many small virtual machines you can get around the inherent insecurity of large, complex pieces of software. Given a full OS, plus a full set of run-time libraries, plus some app software, odds are that *somewhere* in the chain there will be a buffer overflow or some other weakness an attacker can exploit. And if you can get the authorized software (the stuff that hashes to the "right" value) to do your bidding, you're golden. With a VM the idea is that you can write an extremely minimal "on-the-bare-metal" kind of application that runs on virtualized hardware. You make this application as simple as possible to minimize the chance of holes. Then, you arrange to hash the code of this virtual machine while you fire it up, and bind the necessary secrets to that state. Now those secrets are only available to that virtual machine, which is, hopefully, lean and tight enough to be secure.

Re:Trusted Computing is by definition closed. (1)

cerberusss (660701) | about 7 years ago | (#21041821)

I don't fear this too much. Suppose this actually happens, i.e. one CPU manufacturer sells CPUs with a "backdoor". Whatever this may be, it allows some level of remote control over the PC.

This would almost certainly be discovered. Let's suppose we can't choose the competitor, because they're in a big conspiracy.

Making CPUs isn't that hard. It's making them the fastest and the cheapest that's hard. There are open source processor designs available, like the LEON core [wikipedia.org] . There are lots of producers of FPGAs [wikipedia.org] on which the LEON core can be synthesized. There are a number of Linux distro's which run on the resulting CPU.

So, when the going gets tough, the tough synthesize their own CPU. :-)

In Soviet Russia... (0)

JK_the_Slacker (1175625) | about 7 years ago | (#21040571)

... the computing trusts you!

please try to hold back the propaganda (3, Insightful)

amigabill (146897) | about 7 years ago | (#21040691)

With the absence of proprietary code in the mix users will find themselves more inclined to trust their own administrators to make the best choices

Sorry, but I think that's putting your words into everyone else's mouths. Or fingertips, or whatever. The vast majority not only don't have this opinion about open vs proprietary code affecting how much they trust the choices their admins make, they also wouldn't have a freakin' clue as to what you're going on about in that sentence. The vast majority don't know what open-source is, how it differs from proprietary source, they don't know any reason why they'd care either way, and they'd probably give you a pretty funny look for attributing this philosophy to them.

I like Linux and open-source, and have an appreciation for it. But I don't trust my admin at work more when he talks about Linux than when he's talking about Solaris. It's his job to make the best choices of any and all products available, and I trust him to choose whichever is most appropriate for our company, even if he feels that happens to be a proprietary product. It's not my place to impose on him to only ever choose open-source, and there's cases in our work where open-source offerings are less ideal.

What fucking "the page"?? & TC is EVIL (0)

Anonymous Coward | about 7 years ago | (#21040719)

Quote: "(Column) - Despite my gripe about the Web site's sparse message..."

Great writing skills these guys possess, so much for reading the fucking article... (should have known better)

TC is a trap. It is there to exclude other operating systems besides windoze vista. It is there to take control away from you. It is BAD. We don't want more subjugation, thanks.

Yeah, see: THIS could work (1)

WheelDweller (108946) | about 7 years ago | (#21040971)

In Linux, there's no 'vending machine' mindset; they won't be charging every time you turn around, just because there's "no other game in town".

Under Windows? Forget it.

Deception (3, Insightful)

IgnoramusMaximus (692000) | about 7 years ago | (#21041045)

These sorts of propaganda pieces have only one purpose: to sneak one past us. Trusted Computing (as presently defined by the corporate founders of the TC Consortium) has two major purposes which are deadly to all things "open":
  • To make sure that the computer can be trusted by a "contents owner" thus precluding the owner of the computer itself from being able to trust it
  • To allow for so-called "remote attestation", which allows 3rd parties (banks and the like) to trust the computer, again to the exclusion of its owner. The additional effect of this is that banks and other online entities will be able to ensure that only Windows systems, with "approved" apps, are used. No spoofing of user-agent tags anymore, end of Linux use in most of the commercial Internet.

In short, this article aims to lure the unwary into gullible acceptance of TC with deceitfully presented and impractical applications (no one except the mega-corps will ever get access to the main TPM keys).

Re:Deception (0)

Anonymous Coward | about 7 years ago | (#21042099)

(no one except the mega-corps will ever get the access to the main TPM keys)
You say this now, but I predict they'd be leaked sooner rather than later, and/or tpm modules reverse-engineered (tamper resistant is not tamper proof) and any embedded backdoors found, to reenable end-user control of the systems.

And pretty soon, practical quantum computation will come on-stream and render the basis crypto for TPM moot.

Re:Deception (1)

IgnoramusMaximus (692000) | about 7 years ago | (#21043849)

You say this now, but I predict they'd be leaked sooner rather than later, and/or tpm modules reverse-engineered (tamper resistant is not tamper proof) and any embedded backdoors found, to reenable end-user control of the systems.

Sure, it will happen but it will (unless the TPM makers are total dolts) involve electron microscopes or some other wacky hardware which very, very few people have. We are talking about a hardware hack with a high level of difficulty, which could crimp our style for some while at least.

And pretty soon, practical quantum computation will come on-stream and render the basis crypto for TPM moot.

Only to be outpaced by some new mathematical formulae for even more convoluted and computationally intensive encryption schemes. Quantum computing is fast, but it is not infinitely fast. All one has to do is come up with something convoluted enough to make even the fastest theoretically possible quantum computer (which is a finite physical object) crunch numbers for a few millennia.

Re:Deception (0)

Anonymous Coward | about 7 years ago | (#21042973)

The additional effect of this is that banks and other online entities will be able to ensure that only Windows systems, with "approved" apps are used. No spoofing of user-agent tags anymore, end of Linux use in most of the commercial Internet.

In the EU this would be illegal use of monopoly by MS (who would have to be involved). Also, many EU countries use Linux in significant amounts on the desktop, so there is no doubt it would be pursued. Given that the banks etc. would have to wait a long time to ensure that nearly all windows PCs had the hardware and software levels to support it, the EU would have plenty of time to act even if it was really slow.

Re:Deception (1)

IgnoramusMaximus (692000) | about 7 years ago | (#21043673)

In the EU this would be illegal use of monopoly by MS (who would have to be involved). Also, many EU countries use Linux in significant amounts on the desktop, so there is no doubt it would be pursued. Given that the banks etc. would have to wait a long time to ensure that nearly all windows PCs had the hardware and software levels to support it, the EU would have plenty of time to act even if it was really slow.

I am not saying that they will succeed. I am telling you what the plan, and the purpose (as designed) of the whole Trusted Computing concept is.

The only thing that is certain is that the Media Pigopolists, Microsoft, and many other large corps will try hard to do this.

Many large banks and online retailers will get on the bandwagon, as they positively despise the fact that they have to actually do work and support many different browsers on different platforms. Their dream is to make us use one homogeneous platform where we cannot control anything of importance, and thus it is possible for them to deploy a cookie-cutter, one-size-fits-all, Lego-blocks style IIS/IE/TrustedActiveX "solution" everywhere without having to worry about compliance with any non-Microsoft standards or applications.

This ain't gonna go over too well in Boston (0)

Anonymous Coward | about 7 years ago | (#21041131)

From an old GNU su manpage:

Why GNU su does not support the wheel group (by Richard Stallman)

Sometimes a few of the users try to hold total power over all the rest. For example, in 1984, a few users at the MIT AI lab decided to seize power by changing the operator password on the Twenex system and keeping it secret from everyone else. (I was able to thwart this coup and give power back to the users by patching the kernel, but I wouldn't know how to do that in Unix.)

However, occasionally the rulers do tell someone. Under the usual su mechanism, once someone learns the root password who sympathizes with the ordinary users, he can tell the rest. The "wheel group" feature would make this impossible, and thus cement the power of the rulers.

I'm on the side of the masses, not that of the rulers. If you are used to supporting the bosses and sysadmins in whatever they do, you might find this idea strange at first.

How is this any different... (1)

r_jensen11 (598210) | about 7 years ago | (#21041179)

than having proper permissions set up on a machine and doing a lockdown like what's built in to Gnome? Having proper permissions prevents people from installing shit and running programs that they're not supposed to. Using Gnome's lockdown feature prevents them from fucking up their DE.

Two step ISP's (1)

spectrokid (660550) | about 7 years ago | (#21041473)

In corporate networks, this will just lock down your PC a little more than it already is. Nothing to see here, move on please. It is in the home that this shit gets interesting. Do you want your ISP, and possibly MS, to rule your PC? For the typical /. reader, the answer is a clear NO. But what about grandma? Imagine your ISP offering 2 kinds of subscription: a normal, "free" one and a "protected" one. The protected one is firewalled (or at least NAT-ed) at the ISP, with just "sensible" traffic allowed, like HTTP(S) and SMTP to the ISP's own server, with a limit of 50 emails/day. Throw in some MSN and Skype. Have the ISP use TC to enforce patches and anti-virus. I think grandma would be happy with it; it would extend the lifetime of her PC (slower buildup of spyware cruft), and for the rest of us it would cut back on spam.

Re:Two step ISP's (1)

Microlith (54737) | about 7 years ago | (#21041745)

Well, no. They'll require it on ALL PCs, the same way they only support Windows, and without TC you'll simply be unable to connect.

Re:Two step ISP's (1)

Cheesey (70139) | about 7 years ago | (#21044655)

Why, that sounds like the future!

2007. The problem with the current "untrusted" Internet is that anyone can join, make themselves effectively anonymous, and take part in terrible crimes that threaten to undermine the infrastructure of society. Such as piracy, child pornography, terrorism, money laundering, Linux, and spam.

2017. Clearly, this could not go on. The solution that has been legally mandated requires the network to be upgraded before 2025, so that all packets have to be digitally signed by the originator. In order to send information on this network, all participating computers must obtain a session key from the Digital Restrictions Ministry. This session key will only be provided to users who can authenticate themselves on the network using the chip in their identity card or forearm, and then only if their computer is running an officially approved set of Microsapple applications, complete with official spyware from the National Security Ministry.

By removing anonymity from the network, and ensuring central control of all information passing over it, the Government will ensure that no-one will be able to use the network for any criminal purpose. Finally, our children will be safe, terrorists will have no way to criticise the Government, and pirates won't be able to skip the adverts at the beginning of films.


Sounds pretty good to me!

Not so useful, exploitable, and bad people like it (2, Interesting)

Zigurd (3528) | about 7 years ago | (#21041987)

Trusting "trusted" computing requires trusting hardware makers that can insert exploits. Trusted computing is therefore of limited value to end-users in a world where vendors and service providers are routinely leaned on to allow surveillance back doors.

If you have applications that you need to secure, in order to prevent, for example, misuse of tax filings or medical records, you can do it using Web applications, or other thin client technologies combined with physical security of client computers. There is nothing that can guarantee stopping someone copying data manually from a screen display and smuggling it out of an office, so there are practical limits to securing data beyond which additional technology is pointless.

There are some theoretical cases where trusted computing could benefit individuals. But, in practice, it's all about someone else trusting your hardware to rat you out. Most of the money flowing into trusted computing comes from those kinds of uses. "Trusted computing" has rightly earned distrust.

Why Overlook The Cool Features (1)

logicnazi (169418) | about 7 years ago | (#21042577)

Trusted computing also enables a real market in CPU time. You can sell your spare processor cycles, since the trusted machine can attest that the result really came from the code you sent out. Similarly, this would be necessary for software agents that run on unknown people's servers.

It would also be useful for implementing true ecash schemes and for allowing true p2p-based virtual worlds/games with safeguards against cheating.

In short, the technology offers a lot more promise than mere security, and eventually it is a good thing for everyone to have. In fact, I think it potentially offers more benefits for a stable OS like Linux than for Windows. You can't blame the technology for the fact that some idiots would have us use it for DRM or other customer control. The correct response is to embrace trusted computing and reject DRM... but in the real world, perhaps it is better if we wait a bit longer for TC, until the RIAA and other groups are forced to learn that selling music unprotected is the way to go.

Re:Why Overlook The Cool Features (2, Interesting)

Cheesey (70139) | about 7 years ago | (#21044275)

Yes, there are certainly benefits. I changed my mind about TC when I needed my own machine to boot up in a trusted state, so that I could be sure that it was safe for me to unlock my encrypted filesystems without the keys being stolen by a trojan. Without a TPM, the only way to do this is to boot from removable media, since an unencrypted kernel on disk could be modified by an attacker. But a TPM could be used to store a key-unlocking-key that would only be available to kernels with my digital signature. Under the control of the owner, TC is useful.
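The sealing idea described here can be sketched in a few lines. This is a pure-Python simulation of the concept, not the real TPM API: the XOR-mask "encryption", the key names, and the blob format are all invented for illustration, and a real TPM performs the PCR comparison and decryption inside the chip.

```python
import hashlib
import hmac

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style PCR extend: new PCR = H(old PCR || H(measurement))."""
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

def seal(secret: bytes, pcr_state: bytes, srk: bytes) -> bytes:
    """Bind a 32-byte secret to a PCR state by XOR-masking it with a
    keyed digest. (A real TPM encrypts under the storage root key and
    checks the PCRs internally before releasing anything.)"""
    mask = hmac.new(srk, pcr_state, hashlib.sha256).digest()
    return bytes(a ^ b for a, b in zip(secret, mask))

def unseal(blob: bytes, pcr_state: bytes, srk: bytes) -> bytes:
    # XOR masking is symmetric: unsealing recovers the secret only if
    # pcr_state matches the state that was measured at seal time.
    return seal(blob, pcr_state, srk)

srk = b"hardware-protected storage root key"  # never leaves the chip
good_boot = extend(b"\x00" * 32, b"signed kernel v1")
secret = hashlib.sha256(b"filesystem key-unlocking-key").digest()

blob = seal(secret, good_boot, srk)
assert unseal(blob, good_boot, srk) == secret   # trusted boot state
evil_boot = extend(b"\x00" * 32, b"trojaned kernel")
assert unseal(blob, evil_boot, srk) != secret   # wrong state, garbage out
```

The point the comment makes falls out of the last two lines: a trojaned kernel produces a different PCR value, so it can never reconstruct the key-unlocking-key, even with full access to the sealed blob on disk.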

It is a shame that TC almost certainly will be abused in various ways, enforcing DRM on media, games and applications, and creating new ways for major software vendors to lock users into their products. I don't like that possibility at all. Worse still is the possibility that remote attestation might eventually form part of the requirements for connecting to the Internet: that move would suit Apple and Microsoft (goodbye third-party OSs and web browsers), and it would suit organisations wishing to control the movement of information, such as oppressive Governments and record companies.

But fortunately TC was never designed to be secure against owner tampering, and I suspect it will always be possible to get the private key out of the TPM by using differential power analysis (DPA), if you are sufficiently motivated to do so. I have heard that it is actually impossible to prevent DPA entirely: the most a chip manufacturer can do is make it take more time. Laws like the DMCA would make this type of hacking illegal, but I doubt that would stop anyone, any more than the DMCA has stopped people using DeCSS.

Open vs Closed Trusted Computing (4, Interesting)

SiliconEntity (448450) | about 7 years ago | (#21043217)

Unfortunately there are several DIFFERENT, INCOMPATIBLE concepts being bandied about under the name Trusted Computing. This new "Trusted Computing Project" took on that name seemingly without being aware that there was substantial work already under way on a different concept with the same name.

Perhaps to try to remedy the confusion, we can distinguish between TC as proposed by the Trusted Computing Group [trustedcom...ggroup.org] and other forms of TC. The TCG is an industry consortium with Microsoft, Intel, HP etc., dating back several years, originally called TCPA. Their proposal has always been controversial but IMO misunderstood.

TCG's flavor of TC is fundamentally open. I would call it Open Trusted Computing, OTC. It does not lock down your computer or try to prevent anything from running. It most emphatically does NOT "only run signed code" despite what has been falsely claimed for years. What it does do is allow the computer to provide trustworthy, reliable reports about the software that is running. These reports (called "attestations") might indicate a hash of the software, or perhaps a key that signed the software, or perhaps other properties or characteristics of the software, such as that it is sandboxed. All these details are left up to the OS, and that part of the technology is still in development.

Open Trusted Computing runs any software you like, but gives the software the ability to make these attestations that are cryptographically signed by a hardware-protected key and which cannot be forged. Bogus software can't masquerade as something other than it is. Virus-infected software can't claim to be clean. Hacked software can't claim to be the original. You have trustworthy identification of software and/or its properties. This allows you to do many things that readers might consider either good or bad. You could vote online and the vote server could make sure your voting client wasn't infected. You can play online games and make sure the peers are not running cheat programs. And yes, the iTunes Music Store could make sure it was only downloading to a legitimate iTunes client that would follow the DRM rules. It's good and bad, but the point is that it is open and you can still use your computer for whatever you want.
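The attestation flow described above can be sketched as a toy simulation. HMAC stands in for the signature made by the TPM's hardware-protected attestation key, and every name and value below is made up for illustration; a real deployment uses asymmetric keys and a certificate chain back to the hardware vendor.

```python
import hashlib
import hmac

def measure_chain(components) -> bytes:
    """Hash-chain measurements the way a TPM extends a PCR during boot."""
    pcr = b"\x00" * 32
    for blob in components:
        pcr = hashlib.sha256(pcr + hashlib.sha256(blob).digest()).digest()
    return pcr

def quote(pcr: bytes, nonce: bytes, aik: bytes) -> bytes:
    """Attestation: sign (PCR, nonce). HMAC stands in for the AIK signature."""
    return hmac.new(aik, pcr + nonce, hashlib.sha256).digest()

def verify(pcr_claimed, nonce, sig, aik, known_good) -> bool:
    """Server side: check the signature AND that the reported PCR
    matches a software stack the server trusts."""
    ok_sig = hmac.compare_digest(quote(pcr_claimed, nonce, aik), sig)
    return ok_sig and pcr_claimed in known_good

aik = b"hardware-protected attestation key"
clean = [b"bootloader", b"kernel", b"voting-client 1.0"]
cheating = [b"bootloader", b"kernel", b"voting-client 1.0 + aimbot"]

known_good = {measure_chain(clean)}
nonce = b"server challenge 42"  # fresh per session, prevents replay

pcr = measure_chain(clean)
assert verify(pcr, nonce, quote(pcr, nonce, aik), aik, known_good)

pcr2 = measure_chain(cheating)
assert not verify(pcr2, nonce, quote(pcr2, nonce, aik), aik, known_good)
```

Note that nothing here stops the modified client from running; the machine simply cannot produce an attestation claiming to be the clean client, which is exactly the "open" property the parent comment describes.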

This is in contrast to some other projects which may or may not call themselves TC but which are focused on locking down the computer and limiting what you can run. The most familiar example is cell phones. They're actually computers, but you generally can't run whatever you want. The iPhone is the most recent controversial example. Now they are going to relax the rules, but apparently it will still only run signed software. This new "Trusted Computing Project" is the same idea: it will limit what software can run. Rumors claim that the next version of Apple's OS X will also have features along these lines: code that is not signed may have to run in a sandbox with restrictions.

This general approach I would call Closed Trusted Computing, CTC. It has many problematic aspects, most generally that the manufacturer and not the user decides which software to trust. Your system comes with a list of built-in keys that limit what software can be installed and run with full privileges. At best you can install more software but it is not a first-class citizen of your computer and runs with limitations. Closed Trusted Computing takes decisions out of your hands.

But Open Trusted Computing as defined by the TCG is different. It lets you run any software you want and makes all of its functionality equally available to anyone. P2P software, open-source software, anything can take full advantage of its functionality. You could even have a fully open-source DRM implementation that used OTC technology: DRM code that you could even compile and build yourself and use to download high-value content. You would not be able to steal content downloaded by software you had built yourself. And you could be sure there were no back doors, no privacy infringements, nothing going on that you don't know about. It would allow a new level of openness and transparency while still increasing security beyond anything we know today.

I have long been an advocate of Trusted Computing, by which I have always meant what I now call Open Trusted Computing. OTC is the way forward for computing technology. Closed Trusted Computing is a step backward and takes away power and flexibility from the hands of the computer user. I hope you will join me in advocating openness as a fundamental principle, as Trusted Computing technology becomes more widely distributed.

A good read... (1)

Temujin_12 (832986) | about 7 years ago | (#21044289)

...about the ramifications (both good and bad) of TC can be found here [cam.ac.uk].

The main problem I have with TC is the fact that it removes control over the hardware from the user and gives it to a 3rd party entity.

When I purchase hardware, I expect to have full control over its capabilities. If the hardware is capable of doing something, I should be able to do it. There's something a bit eerie about giving your computer a command/instruction and having it come back and tell you that it could do it, but it won't (2001: A Space Odyssey [imdb.com] anyone!?).

My worry is that TC misinformation will be pushed so hard that the idea of the user being in control of their hardware will be considered old-fashioned. Well, it may be old-fashioned, but it also has the side effect of being correct.

Now, I do think that TC has a place in the corporate world, where there is no expectation of employees being able to do whatever they want on the computer (businesses have a right to control their own equipment). But the propagation of TC into the public or home sphere is what doesn't sit well with me.

Sellout Alert! (0)

Anonymous Coward | about 7 years ago | (#21044801)

I wonder how much someone paid for his opinion... There's a reason why that site is barren, and the forums are filled mostly with admin-created polls.

Just in case: Trusted Computing film (1)

cheros (223479) | about 7 years ago | (#21046039)

It's quite helpful to watch as a primer/refresher: the wonderful animation [lafkon.net] about Trusted Computing. Simple, good, understandable.