
Building Deception Into Encryption Software

Soulskill posted about 9 months ago | from the would-be-better-to-build-decepticons dept.

Encryption 106

holy_calamity writes "MIT Technology Review reports on a new cryptosystem designed to protect stolen data against attempts to break encryption by brute force guessing of the password or key. Honey Encryption serves up plausible fake data in response to every incorrect guess of the password. If the attacker does eventually guess correctly, the real data should be lost amongst the crowd of spoof data. Ari Juels, who invented the technique and was previously chief scientist at RSA, is working on software to protect password managers using the technique."

106 comments


I discovered a long time ago. (1)

Anonymous Coward | about 9 months ago | (#46102625)

I speak in Navajo with a Southern accent.

Re:I discovered a long time ago. (2)

davester666 (731373) | about 9 months ago | (#46102965)

Are you sure it isn't Southern with a Navajo accent? They sound very similar.

Re:I discovered a long time ago. (1)

slick7 (1703596) | about 9 months ago | (#46107169)

Are you sure it isn't Southern with a Navajo accent? They sound very similar.

It's more like the fourteenth amendment with a slight accent of freedom.

This is more of authentication than encryption... (2, Interesting)

mlts (1038732) | about 9 months ago | (#46102643)

TFA was murky, but generating bogus data? If one is brute forcing a data blob, how can it make stuff up? Authentication is another story.

Are they meaning to make a system similar to Phonebookfs? This is an interesting filesystem used with FUSE. You have different layers over the same directory, so one encryption key may allow you to grab one set of files, another key, a different set. Then there is chaff present that cannot be decrypted under any circumstances and provides plausible deniability.

Is something like phonebookfs what they are intending?

Re:This is more of authentication than encryption. (5, Informative)

js_sebastian (946118) | about 9 months ago | (#46102733)

TFA was murky, but generating bogus data? If one is brute forcing a data blob, how can it make stuff up?

Actually, it wasn't murky. That it cannot work for arbitrary data types is spelled out towards the end. This is for data of which the encryption system knows the data type well enough to fake it, and the encryption system has to be built to target the specific data type. The examples given are credit card numbers or passwords.

For instance imagine a password manager that, for every decryption attempt with a wrong master password, returns a different set of bogus but plausible passwords. How would a brute force attack automatically determine which one is the "real" set of passwords of the user, even if it can guess the right password?
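The construction described above can be sketched in a few lines. This is a toy illustration only (all names are made up, and the "cipher" here is not secure): store the vault as an encrypted seed, so that *any* candidate master password decrypts to some seed, which then deterministically expands into plausible-looking passwords.

```python
import hashlib
import random
import string

def keystream(password: str, n: int) -> bytes:
    # Toy keystream derived from the master password (illustration only)
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(password.encode() + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def seed_to_vault(seed: bytes, count: int = 3) -> list[str]:
    # Deterministically expand a seed into plausible-looking passwords
    rng = random.Random(seed)
    alphabet = string.ascii_letters + string.digits
    return ["".join(rng.choice(alphabet) for _ in range(10)) for _ in range(count)]

def encrypt(seed: bytes, password: str) -> bytes:
    # XOR the seed with the password-derived keystream
    ks = keystream(password, len(seed))
    return bytes(a ^ b for a, b in zip(seed, ks))

def decrypt(blob: bytes, password: str) -> list[str]:
    # Any password yields *some* seed, hence *some* plausible vault
    ks = keystream(password, len(blob))
    seed = bytes(a ^ b for a, b in zip(blob, ks))
    return seed_to_vault(seed)
```

A wrong master password still produces a well-formed list of passwords, just not the right one, which is the whole point.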

Re:This is more of authentication than encryption. (2, Insightful)

Anonymous Coward | about 9 months ago | (#46102875)

This works provided you don't have a known cleartext to test against. So if I had a known credit card or password in the database (by signing up legitimately for a website that uses this), then I have a method of determining the dataset to be decrypted.

Re:This is more of authentication than encryption. (3, Insightful)

Joce640k (829181) | about 9 months ago | (#46102925)

Why would an attacker be using the enemy-provided 'honey' program to try to brute force the decryption?

Surely he'd use a program that isn't known for serving up fake results.

Re:This is more of authentication than encryption. (1)

Anonymous Coward | about 9 months ago | (#46102987)

You could set up the encryption so that any form of operation returns a viable result, i.e. base the encryption on a dictionary of words or phrases, Markov chains, etc. I think of it in terms of this: https://www.cs.cmu.edu/~odonnell/hits09/gentry-homomorphic-encryption.pdf or ECC.

Re:This is more of authentication than encryption. (2)

icebike (68054) | about 9 months ago | (#46103711)

The focus of that research is to allow operations on data that remain encrypted, and where the actual content of the manipulated data is not explicitly known.

That might work for something composed of tables of numbers, bank data, phone-call pen register logs, or passwords as the GP suggests, but not for text. Humans are very good at distinguishing gibberish from prose, or fragments of color from images. Plausible, but bogus, is a tough nut to crack where human evaluation is involved.

Re:This is more of authentication than encryption. (1)

cbiltcliffe (186293) | about 9 months ago | (#46105919)

That might work for something composed of tables of numbers, bank data, phone-call pen register logs, or passwords as the GP suggests, but not for text. Humans are very good at distinguishing gibberish from prose, or fragments of color from images. Plausible, but bogus, is a tough nut to crack where human evaluation is involved.

The whole point of brute-forcing is that you don't need human evaluation. Are you really planning on a human evaluating the results from all 2^128 possible keys? How many universe lifetimes do you have to crack this thing?

Re:This is more of authentication than encryption. (1)

icebike (68054) | about 9 months ago | (#46106055)

But don't you suppose that computer systems can distinguish gibberish composed of valid words from meaningful sentences?

There is more than a little research in this area [wikipedia.org].

Re:This is more of authentication than encryption. (0)

Anonymous Coward | about 9 months ago | (#46107167)

I understand what you're saying, but what I was getting at with the reference to Gentry and ECC is that operations on encrypted data bring you to another step. So, instead of stepping through a homomorphism or a binary group for ECC, you could step through combinations of a Markov chain. This was all to tell the parent that the 'honey' program and fake data are baked into the algorithm, so it's not about using a program that returns fake data when you give the wrong passphrase/key.

Re:This is more of authentication than encryption. (1)

AmiMoJo (196126) | about 9 months ago | (#46108893)

It doesn't need to stand up to human scrutiny to foil a brute force or dictionary attack though. The whole point of those attacks is to try a large number of keys quickly, and human oversight would make the process too slow to be of any use.

Re:This is more of authentication than encryption. (1)

ComaVN (325750) | about 9 months ago | (#46107947)

Think of it as ROT-n encryption of random data, where n is the key.

If you choose the wrong n, you'll still get a blob of random data back, just not the correct one.

Now, the tricky part is making sure that incorrect keys return data that's hard to distinguish (meaning it can't be done automatically and/or quickly) from the correct plaintext, when the plaintext ISN'T random-looking, but something like passwords, SSNs, or credit card numbers.
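The ROT-n analogy in a few lines of Python (illustrative only, not a real cipher): when the plaintext is itself random, an XOR pad with any candidate key yields equally random output, so nothing marks the right key.

```python
import os

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # One-time-pad-style XOR; the key plays the role of n in ROT-n
    return bytes(d ^ k for d, k in zip(data, key))

secret = os.urandom(8)            # random-looking plaintext
key = os.urandom(8)
ciphertext = xor_bytes(secret, key)

# Every candidate key yields *some* 8-byte plaintext; with random
# plaintext there is no way to tell which candidate was correct
wrong_guess = xor_bytes(ciphertext, os.urandom(8))
```

The hard part the comment points out is preserving this property when the plaintext has structure, such as passwords or card numbers.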

Re:This is more of authentication than encryption. (2)

Impy the Impiuos Imp (442658) | about 9 months ago | (#46102945)

The crooks would use their own decoder to get at the internal encryption algorithm, skipping the "oop, fail, generate plausible password" wrapper.

Re:This is more of authentication than encryption. (2, Informative)

Anonymous Coward | about 9 months ago | (#46103105)

No, the idea is that the protection is built into the algorithm itself. Rolling your own decryptor would spit out the same fake info for the same key. To balance this out, the algorithm works only for limited types of data.

Re:This is more of authentication than encryption. (1)

SQLGuru (980662) | about 9 months ago | (#46103155)

If you are setting it up, you could have discrete records individually encrypted: one row correctly encrypted and (some number) of rows of fake data encrypted with a false key. Without knowing which row is good and which is bad, an attacker would get (some number) of potential keys that return realistic but wrong data. They wouldn't need to run my decryption routines to generate these false positives.

Re:This is more of authentication than encryption. (4, Interesting)

Phreakiture (547094) | about 9 months ago | (#46103319)

Consider a case of a credit card number. A CC# consists of 15 digits plus a check digit for 16 digits total.

Now, in encrypting, validate the check digit and then drop it. Take the remaining 15 digits and express them as a binary value. It should be around 50 bits. XOR it against a 50-bit mask, and that will be your ciphertext value.

To decrypt, XOR against that same value and recompute the check digit.

Any incorrect value will produce a number that passes basic validation (as long as the decrypted value doesn't exceed 10^15).

For bonus points, you can probably encode the first digit in only 2 bits, because most cards begin with 3, 4, 5 or 6, depending on the issuer.

Now, is this a good encryption scheme? Maybe not, but it does at least demonstrate the concept.
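The scheme above, written out as toy Python (the mask is an arbitrary made-up value, and as the comment says, this is a demonstration of the concept rather than a serious cipher):

```python
def luhn_check_digit(payload: str) -> str:
    # Luhn check digit for a 15-digit payload (the digit to be appended)
    total = 0
    for i, ch in enumerate(reversed(payload)):
        d = int(ch)
        if i % 2 == 0:   # these positions get doubled once the check digit is appended
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return str((10 - total % 10) % 10)

MASK = 0x2ABCDE1234567  # hypothetical ~50-bit key, chosen arbitrarily

def encrypt_cc(card: str) -> int:
    # Validate and drop the check digit, then XOR the 15-digit payload
    payload = card[:15]
    assert card[15] == luhn_check_digit(payload), "bad check digit"
    return int(payload) ^ MASK

def decrypt_cc(cipher: int) -> str:
    # XOR back and recompute the check digit
    payload = str(cipher ^ MASK).zfill(15)
    return payload + luhn_check_digit(payload)
```

Note the caveat from the comment: a wrong key whose XOR result exceeds 10^15 would yield a 16-digit payload, so a real scheme would arrange the arithmetic (e.g. addition modulo 10^15) to keep every decryption in range.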

Re:This is more of authentication than encryption. (0)

Anonymous Coward | about 9 months ago | (#46108219)

I think that's the point...it blends authentication and encryption into the same synergistic entity. You mess with the blob, and it dynamically reconfigures itself against you.

Re:This is more of authentication than encryption. (2)

bws111 (1216812) | about 9 months ago | (#46103187)

I think the point is that the encryption algorithms themselves are incapable of producing anything that does not look like a 'real' result. For instance, if you have a credit card number you could encrypt it as just a series of characters. But that makes it easy to determine which keys are wrong, because decrypting with them would return something other than a string of 16 digits. But what if you treated those 16 digits as a number and encrypted it as such? Then, no matter what key was tried, you would always get back a valid number which could be represented as 16 digits, so you have no way of knowing which is the real answer.

Re: This is more of authentication than encryption (0)

Anonymous Coward | about 9 months ago | (#46103267)

That's mostly correct, but a credit card number is actually only 15 digits with a 16th digit that is used as a checksum. Therefore, you'd have to discard the checksum digit and encrypt only the 15 digits that matter; decrypting it would produce a 15-digit number and the decryption algorithm would need to regenerate the checksum digit. That way, any attempt to decrypt would result in a potentially-valid credit card number, but only the correct decryption key would result in the correct number.

As was pointed out, this requires the encryption/decryption process to know how a credit card works and what makes one valid or invalid.

Re: This is more of authentication than encryption (1)

skids (119237) | about 9 months ago | (#46104157)

As was pointed out, this requires the encryption/decryption process to know how a credit card works and what makes one valid or invalid.

...and/or for the cleartext to include many more fake but valid-looking results than valid results, among which only the correct key picks the valid results and invalid keys pick incorrect ones. That requires that invalid but plausible results be either easy to fabricate or capable of being fabricated offline rather than in real time. Under certain circumstances, partial exposure of correct results, mixed in with fake results, might even be desirable.

Re: This is more of authentication than encryption (1)

bws111 (1216812) | about 9 months ago | (#46104543)

There are no 'fakes'. Every decryption produces a number that could be a credit card number (i.e. it has the right format and passes the tests for number correctness). This is not a pre-generated 'fake'; it is just the result of running the decryption algorithm with the wrong key.

As an example I gave elsewhere - generate a random block of data (/dev/random or whatever). Encrypt it with whatever algorithm you want. Now decrypt it with the wrong key. Can you tell (without looking at the cleartext) that the decryption used the wrong key? No, of course not, random data looks like random data. That is because no matter what key you use valid-looking data will be returned. Now use an algorithm that can only encrypt/decrypt credit card numbers, and you have the same thing. Every decryption will look like a credit card number.

Note that some of the generated numbers may actually BE valid credit card numbers. That does not matter as long as it is not the credit card number associated with the name on the card, etc.

Re:This is more of authentication than encryption. (1)

Mr D from 63 (3395377) | about 9 months ago | (#46103087)

I see it as a plausible approach, since an automated brute force attack will stop whenever it thinks it has hit the right password, requiring human intervention to determine that it is bogus. After a few thousand bogus hits, I think that human might just get tired of hitting the NEXT button.

Re:This is more of authentication than encryption. (1)

HiThere (15173) | about 9 months ago | (#46104597)

But even that requires that the human have an easy way to tell whether the result returned was valid. In the case of a password, this may be simple, but relatively time consuming...or it may be difficult. Of course, in other circumstances, it could automatically try the result against a test, and quickly determine whether or not it was correct.

Re:This is more of authentication than encryption. (1)

pr0fessor (1940368) | about 9 months ago | (#46103297)

Your assumption also means that the underlying encryption is an unknown algorithm and you have no access to the actual encrypted data. There has to be something else to keep you from attempting to decrypt the data without using the password manager it was made with.

Re:This is more of authentication than encryption. (2)

bws111 (1216812) | about 9 months ago | (#46103483)

No. Consider this: today, encryption algorithms work on binary data (bytes). Suppose I generate a random block of binary data, and encrypt it using whatever well-known algorithm you tell me to use. I give you the encrypted output. Can you tell what key was used to perform the encryption, or tell me what the original data was? No, because no matter what key you use you will always get back a random block of data, so which is the 'correct' data?

Now suppose, instead of using an algorithm that can encrypt (and thus decrypt) any random binary data, I use an algorithm that can only encrypt/decrypt a credit card number or a password. No matter what key you try to use to decode, you will always get something that looks like a credit card number or password. You can know the algorithm, and you can have the encrypted data, and you still have no way of knowing which key is correct because all the results look the same.

Re:This is more of authentication than encryption. (1)

pr0fessor (1940368) | about 9 months ago | (#46103645)

Possibly. With a credit card, a 16-digit number, all decryptions return a 16-digit number; however, credit cards aren't random numbers, there are rules to how they are formed. Just like alphanumeric passwords are still likely to resemble a word or phrase.

Re:This is more of authentication than encryption. (1)

bws111 (1216812) | about 9 months ago | (#46104375)

That is not the point, though. The point is that domain-specific encryption could be harder to brute-force than generic encryption, which can produce results that are obviously false. So a credit card isn't just 16 random digits - so what? What is not random? The issuer ID and the check digit (there may well be more; it does not matter). So treat those 'non-random' parts differently. Don't even bother storing the checksum; recalculate it when returning the result. The checksum will always be right then. There are more issuer IDs than issuers? So store the issuer ID as something other than the actual number.

As for passwords, this is not to prevent poor password choices. If good passwords are enforced, then the 'likely to resemble a word or phrase' doesn't apply.

Re:This is more of authentication than encryption. (1)

pr0fessor (1940368) | about 9 months ago | (#46104697)

As a broad theory it works, but in practice... there is a lot to take into account; we will just have to wait and see.

Re:This is more of authentication than encryption. (0)

Anonymous Coward | about 9 months ago | (#46103621)

How would a brute force attack automatically determine which one is the "real" set of passwords of the user

Simple, if your attempted crack produces >5% of passwords equal to '123456' then you have found the right one.

To clarify (0)

Anonymous Coward | about 9 months ago | (#46109303)

That's exactly what drags down the process. You'll need to go through each of the passwords for every brute force attempt.

Also, if that password is banned, that won't work at all. Which it should be anyways.

Re:This is more of authentication than encryption. (0)

Anonymous Coward | about 9 months ago | (#46102793)

Just guessing, but if it's aimed at protecting password managers it'll only be a small amount of data, so the flow could be: generate N sets of fake data and fake passwords with a similar structure to the real data and password; encrypt each set of data with its own password; merge the resulting data. Now when you try any password it'll show the related set of data clearly, but with a huge amount of gibberish surrounding it. Sooner or later the correct password for the real data will be used, but the attacker won't know that it's really the real one.

Re:This is more of authentication than encryption. (2)

hawguy (1600213) | about 9 months ago | (#46102817)

TFA was murky, but generating bogus data? If one is brute forcing a data blob, how can it make stuff up? Authentication is another story.

It didn't seem all that murky:

But he notes that not every type of data will be easy to protect this way since it’s not always possible to know the encrypted data in enough detail to produce believable fakes. “Not all authentication or encryption systems yield themselves to being ‘honeyed.’”

So it only works with data where it can generate believable fake data -- like credit card numbers or passwords.

Re:This is more of authentication than encryption. (3, Interesting)

Splab (574204) | about 9 months ago | (#46103343)

Many years ago I had a phone that included a password storage application. You gave it a 4-digit PIN and it would show you a checkword, then list all your passwords (key->value). If the PIN was wrong, it would still give you a checkword, but different from the one your correct PIN gets, and then list all the same keys, but with different passwords.

Was a pretty nice application, but can't remember the make of the phone, probably a Sony-Ericsson.

Re:This is more of authentication than encryption. (1)

ami.one (897193) | about 9 months ago | (#46107449)

I see only one problem - assuming I got the PIN wrong, how am I supposed to make out that it is showing me the 'wrong'/'fake' passwords?

Re:This is more of authentication than encryption. (1)

Splab (574204) | about 9 months ago | (#46108581)

The checkword: you put in your PIN when you set the application up, and it shows you "banana" (or car or pink or whatever maps to your PIN - obviously the wordlist needs to be shuffled during install so no two installs have the same mapping).

Next time you use it and you put in a wrong PIN, it will say "apple". You know it's supposed to say banana, so you know you put your PIN in wrong; your adversary (the guy who just stole your phone) doesn't know the checkword, so he won't know if it's right or wrong.
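A rough sketch of that checkword idea (the word list and helper names are hypothetical, and a real install would ship a much larger list):

```python
import hashlib
import random

# Hypothetical word list, shuffled per install via the install salt
WORDS = ["apple", "banana", "cherry", "mango", "papaya", "plum", "fig", "kiwi"]

def checkword(pin: str, install_salt: bytes) -> str:
    # Shuffle the word list deterministically for this install,
    # then map the PIN onto one word via a hash
    shuffled = WORDS[:]
    random.Random(install_salt).shuffle(shuffled)
    idx = int.from_bytes(hashlib.sha256(install_salt + pin.encode()).digest()[:4], "big")
    return shuffled[idx % len(shuffled)]
```

The right PIN always maps to the same word, a wrong PIN (usually) to a different one, and without knowing the install's mapping a thief can't tell which word is correct.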

Re:This is more of authentication than encryption. (1)

ami.one (897193) | about 9 months ago | (#46108627)

OK, that's a good idea.

interesting idea (1)

Anonymous Coward | about 9 months ago | (#46102703)

So you decrypt something and it *looks* like real data.

So it would have to be a function that produces 'good' results and 'bad results' but the bad results look like good ones.

Would have to be careful that the 'bad' results do not do things like open the lock, though. For instance, in the case of login list breaches.

Re:interesting idea (3, Interesting)

hawguy (1600213) | about 9 months ago | (#46102769)

So you decrypt something and it *looks* like real data.

So it would have to be a function that produces 'good' results and 'bad results' but the bad results look like good ones.

Would have to be careful that the 'bad' results do not do things like open the lock though. For instant in the case of login list breaches.

If randomly generated "fake" data matches someone else's password (or whatever is being encrypted), that other person didn't use a strong enough password. This system is just acting like a hash function -- the criminal tries password A and he decrypts the data to some string, then he tries password B and the data gets decrypted to another string. If those randomly generated strings happen to match someone else's password on the system, the criminal could have saved himself some time by generating the password guesses himself.

Re:interesting idea (1)

achbed (97139) | about 9 months ago | (#46103067)

So you decrypt something and it *looks* like real data.

So it would have to be a function that produces 'good' results and 'bad results' but the bad results look like good ones.

Would have to be careful that the 'bad' results do not do things like open the lock though. For instant in the case of login list breaches.

If randomly generated "fake" data matches someone else's password (or whatever is being encrypted), that other person didn't use a strong enough password. This system is just acting like a hash function -- criminal tries password A and he decrypts the data to some string, then he tries password B and the data gets decrypted to another string. If those randomly generated strings happen to match someone elses password on the system, the criminal could have saved himself some time by generating the password guesses himself.

What's the goal here - to make the returned data "not my data", or "incorrect data"? There is a world of difference between these two. "Not my data" is a simple thing to generate, but could still be correct data. I.e., if the data protected is a card number, and the generated number matches someone else's card, then do we care or not? The criminal doesn't care, as long as their goal is met (getting a valid card - it doesn't have to be yours). If we're talking about "invalid" data, then we need some mechanism to validate the generated data before it's returned. While this wouldn't meet the criminal's goal, it could open a possible DDoS attack vector on the validation service (i.e., a brute force becomes a magnified reflection attack).

Re:interesting idea (2)

hawguy (1600213) | about 9 months ago | (#46103107)

So you decrypt something and it *looks* like real data.

So it would have to be a function that produces 'good' results and 'bad results' but the bad results look like good ones.

Would have to be careful that the 'bad' results do not do things like open the lock though. For instant in the case of login list breaches.

If randomly generated "fake" data matches someone else's password (or whatever is being encrypted), that other person didn't use a strong enough password. This system is just acting like a hash function -- criminal tries password A and he decrypts the data to some string, then he tries password B and the data gets decrypted to another string. If those randomly generated strings happen to match someone elses password on the system, the criminal could have saved himself some time by generating the password guesses himself.

What's the goal here - to make the returned data "not my data", or "incorrect data"? There is a world of difference between these two. "Not my data" is a simple thing to generate, but could still be correct data. IE, if the data protected is a card number, and the generated number matches someone else's card, then do we care or not? The criminal doesn't care, as long as their goal is met (get a valid card - it doesn't have to be yours). If we're talking about "invalid" data, then we need some mechanism to validate the generated data before it's returned. While this wouldn't meet the criminal's goal, it could open a possible DDOS attack vector on the validation service (ie, a brute force becomes a magnified reflection attack).

They aren't going to store a big database of valid credit card numbers so they can return someone else's card number, they'll just generate a random number that looks like it could be a real credit card number and passes the checksum test.

Yes, a criminal could take the credit card numbers from each decryption attempt and test them, but if he's willing to test millions of card numbers to look for a valid one, he could just generate the card numbers directly and not attempt the decryption in the first place.

Re:interesting idea (3, Informative)

aviators99 (895782) | about 9 months ago | (#46103287)

The criminal doesn't care, as long as their goal is met (get a valid card - it doesn't have to be yours). If we're talking about "invalid" data, then we need some mechanism to validate the generated data before it's returned.

If you are worried about a random credit card generating algorithm generating real credit card numbers via this method, you should be just as worried about attackers using the same random number generator on their own!

Re:interesting idea (0)

Anonymous Coward | about 9 months ago | (#46104211)

we need some mechanism to validate the generated data before it's returned

Software known to return only bogus numbers can be used to generate the universe of bogus numbers, leaving only valid numbers un-generated.

Re:interesting idea (1)

bws111 (1216812) | about 9 months ago | (#46103847)

Help me understand this. If the 'wrong' results are truly random data that happens to look correct (as decrypting with the wrong key should be), then how does a match imply that a chosen password was weak? If the data is random, isn't any string equally likely to come up as a possible password? Why would a 'weak' password be more likely to come up than a 'strong' password?

Also, what would be the problem if the random password does match someone else's? If your password is 'xyz', and I try password 'xyz' on my userid, it doesn't magically give me access to your account.

Re:interesting idea (1)

hawguy (1600213) | about 9 months ago | (#46104015)

Help me understand this. If the 'wrong' results are truly random data that happens to look correct (as decrypting with the wrong key should be), then how does a match imply that a chosen password was weak? If the data is random isn't it equally as likely that any string would come up as a possible password? Why would a 'weak' password be more likely to come up than a 'strong' password?

Also, what would be the problem if the random password does match some elses? If your password is 'xyz', and I try password 'xyz' on my userid, it doesn't magically give me access to your account.

I think you'll have to go back to the post before me, he's the one that said:

Would have to be careful that the 'bad' results do not do things like open the lock though. For instant in the case of login list breaches.

Statistically speaking, if a randomly (or pseudorandomly) computed string matches someone else's password, then his password was not safe in the first place. A weak password is more likely to come up from an algorithm that's generating "plausible" passwords than a strong one, because a weak password is weak precisely because it's easy to guess. That could be because it has a small keyspace (e.g. a 4-digit PIN has only 10,000 possible choices, so any random guess has a 1 in 10,000 chance of being right), or because it suffers some other weakness that can be exploited (e.g. if the PIN is known to be an MMDD date, then there are only 366 possible choices).

That's what makes a password weak - it's easy to guess.

Re:interesting idea (1)

bws111 (1216812) | about 9 months ago | (#46104257)

Yes, I understand that an easy-to-guess password is weak. What I don't understand is the assertion that a password showing up in random output indicates that the password is weak. Random output has an equal chance of coming up 'd$a8%zyq' as it does 'password'. If my password was in fact 'd$a8%zyq', why would it be considered weak?

The decrypter is not generating 'plausible' passwords, it is generating 'possible' passwords. If someone is trying to break in using weak passwords, they are not going to be randomly generating strings, looking for weak ones, and trying them. However, if someone is trying to crack encryption by brute forcing all of the possible keys, then a whole bunch of results can be discarded immediately, not because the results don't look plausible, but because the results are not possible (i.e. they contain things that cannot be in a password). That is what this is trying to fix. This is not trying to fix 'weak' passwords; it is trying to make it harder to decrypt a database of passwords, regardless of how strong the passwords in it are.

Also willing to take the NSA paycheck? (1)

bulled (956533) | about 9 months ago | (#46102767)

Great so long as he isn't also willing to take the NSA paycheck.

Security through obscurity (1)

torstenvl (769732) | about 9 months ago | (#46102805)

I guess it DOES have some benefit, huh?

Re:Security through obscurity (3, Insightful)

hawguy (1600213) | about 9 months ago | (#46102943)

I guess it DOES have some benefit, huh?

People misunderstand what "security through obscurity" means. Most (all?) encryption relies on security through obscurity at some level.

Hiding your house key under a loose floorboard in your back deck is the kind of security through obscurity that can really work, assuming that there are no other clues that lead to the hiding place. However, hiding the prybar that you use to pry up the floorboard under the belief that hiding the method of access makes your key safer is not the kind of obscurity that works because if the attacker can find your hiding place, he can figure another way to get to the key.

Similarly, hiding or not writing down your password is security through obscurity that works. But trying to hide the implementation details of your cipher algorithm does not, because cryptanalysis can break your encryption even without access to your encryption algorithm.

So, obscuring your real password among an endless number of fake passwords is the kind of obscurity that can work -- even if the attacker knows that your password is somewhere among the billions of fake ones, unless he has some clue to tell him what your real password looks like, just knowing that fakes are there doesn't help him.

Re:Security through obscurity (2)

CanHasDIY (1672858) | about 9 months ago | (#46103053)

So, obscuring your real password among an endless number of fake passwords is the kind of obscurity that can work -- even if the attacker knows that your password is somewhere among the billions of fake ones, unless he has some clue to tell him what your real password looks like, just knowing that fakes are there doesn't help him.

Like hiding a needle in a needlestack.

I, for one, like the concept, and am anxious to see what impact it could have on modern cryptography.

Re:Security through obscurity (1)

pepty (1976012) | about 9 months ago | (#46104037)

If an attacker used software to make 10,000 attempts to decrypt a credit card number, for example, they would get back 10,000 different fake credit card numbers. “Each decryption is going to look plausible,”

This isn't done already???? I thought this was done all along with passwords and other short strings. I could see it being more difficult when the data is human generated passwords due to the bias in selecting words>syllables>numbers>punctuation, but still.

Re:Security through obscurity (1)

hawguy (1600213) | about 9 months ago | (#46104511)

If an attacker used software to make 10,000 attempts to decrypt a credit card number, for example, they would get back 10,000 different fake credit card numbers. “Each decryption is going to look plausible,”

This isn't done already???? I thought this was done all along with passwords and other short strings. I could see it being more difficult when the data is human generated passwords due to the bias in selecting words>syllables>numbers>punctuation, but still.

No, it's not normally done today. Generally if you try to decrypt a file using the wrong decryption key you'll either get random looking data, or no data at all.

Returning random data is not the same as returning a plausible (but incorrect) password, since random data will include all sorts of non-printable characters that few users would include in a password. Likewise, credit card numbers follow a set pattern with known prefixes and a checksum, so an attacker could quickly weed out invalid numbers.
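To make the credit-card point concrete: the Luhn checksum is public, so an attacker can instantly discard almost any random-looking wrong decryption. A toy validity check (standard Luhn algorithm; Python used for illustration):

```python
def luhn_valid(number: str) -> bool:
    """Return True if a digit string passes the public Luhn checksum."""
    digits = [int(d) for d in number]
    # Double every second digit from the right; subtract 9 if the result exceeds 9.
    for i in range(len(digits) - 2, -1, -2):
        digits[i] *= 2
        if digits[i] > 9:
            digits[i] -= 9
    return sum(digits) % 10 == 0

assert luhn_valid("4532015112830366")      # a commonly used Luhn-valid test number
assert not luhn_valid("4532015112830367")  # one digit off: instantly rejected
```

Only about 1 in 10 random 16-digit strings passes, so any scheme whose wrong-key output ignores the checksum hands the attacker a 90% filter for free.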

Re:Security through obscurity (1)

achbed (97139) | about 9 months ago | (#46103095)

I guess it DOES have some benefit, huh?

People misunderstand what "security through obscurity" means. Most (all?) encryption relies on security through obscurity at some level.

Hiding your house key under a loose floorboard in your back deck is the kind of security through obscurity that can really work, assuming that there are no other clues that lead to the hiding place. However, hiding the prybar that you use to pry up the floorboard under the belief that hiding the method of access makes your key safer is not the kind of obscurity that works because if the attacker can find your hiding place, he can figure another way to get to the key.

Similarly, hiding or not writing down your password is security through obscurity that works. But trying to hide the implementation details of your cipher algorithm does not, because cryptanalysis can break your encryption even without access to your encryption algorithm.

So, obscuring your real password among an endless number of fake passwords is the kind of obscurity that can work -- even if the attacker knows that your password is somewhere among the billions of fake ones, unless he has some clue to tell him what your real password looks like, just knowing that fakes are there doesn't help him.

Of course, they could use the prybar to simply break a window, or pry open the door, invalidating the purpose of the hiding place entirely. So hiding the prybar, while it doesn't directly affect the hiding space, helps increase overall security of the system.

Re:Security through obscurity (1)

Alomex (148003) | about 9 months ago | (#46103193)

People misunderstand what "security through obscurity" means.

Actually, chalk that one up to RSA, which pushed the security-through-obscurity meme really hard in the late 90s.

Most (all?) encryption relies on security through obscurity at some level.

Of course, starting with your private key, which you should keep secret, continuing with which of your servers holds the mother lode, which port you should contact, what exact version of crypto you are using (among the many considered reliable), etc. Each one of these is yet another hurdle in the way of a potential codebreaker.

Re:Security through obscurity (1)

Anonymous Coward | about 9 months ago | (#46103373)

No, there's a difference between 'obscure' and 'secret'.
Something which is 'obscure' is publicly visible with sufficient effort, but you're relying on people not taking that effort to keep it unknown.
Something which is 'secret' is privately held information which isn't publicly visible, even if someone is looking for it.

Your private key is a 'secret', and should be kept that way.
The selection of encryption algorithm to be used is public, but may be 'obscure' for some indeterminate period of time.

The problem is when people conflate 'obscure' with 'secret', and assume that because the 'obscure' information isn't trivially visible, that it is actually 'secret'.

Re:Security through obscurity (0)

Anonymous Coward | about 9 months ago | (#46105879)

No. The problem is that Stallman said you should not rely ONLY on "obscurity", and people ignored the word "only" and keep repeating the mis-quote.
 

Re:Security through obscurity (1)

Chemisor (97276) | about 9 months ago | (#46103213)

cryptanalysis can break your encryption even without access to your encryption algorithm

I doubt it. That may have been true back when people used substitution ciphers and encrypted plain text. Today's ciphers scramble large blocks and precompress to increase data entropy. I seriously doubt anybody but a top-notch cryptanalyst can decrypt even the simplest attempt at a cipher from anybody who knows anything at all about cipher design.

Such a cryptanalyst is likely to be found only at some high-level government agency like the NSA, and he will likely be too busy to spare any time to decrypt your inane emails to your mistress. Consequently, I would postulate that if you design your own cipher and avoid becoming the next Snowden, your data will be just as safe as if you had used AES.

Re:Security through obscurity (1)

LordLimecat (1103839) | about 9 months ago | (#46103427)

This is the "bad" sort of security through obscurity, because its sole protection is that no one will care enough to try breaking your encryption cipher. It's similar to turning off wifi beaconing or using MAC authentication on unencrypted wifi.

Re:Security through obscurity (1)

Chemisor (97276) | about 9 months ago | (#46103639)

This is the "bad" sort of security through obscurity, because its sole protection is that no one will care enough to try breaking your encryption cipher.

It's not "no one", it's "no one who is able to break it". There is a big difference. When there is only a handful of people in the world who are capable of breaking your cipher, and there is no chance of them taking an interest in it, I'd say your cipher is pretty damn secure.

its similar to turning off wifi beaconing or using MAC authentication on unencrypted wifi.

It is instead more similar to using a regular wooden door with a regular keyed lock to protect your house instead of a 6" thick high-strength steel vault door with an electronic lock. Define your threat before you decide on what security measures to take. If you don't, you will go bankrupt and will still get your stuff stolen in some other way. For most of us, a wooden door provides enough security because we need windows for light and can't afford the bulletproof 1"-thick ones. Likewise, most of us protect our data from regular criminals who aren't smart enough to do cryptanalysis. Against such adversaries, any cipher that has no readily available tools will do.

Re:Security through obscurity (1)

hawguy (1600213) | about 9 months ago | (#46103599)

cryptanalysis can break your encryption even without access to your encryption algorithm

I doubt it. That may have been true back when people used substitution ciphers and encrypted plain text. Today's ciphers scramble large blocks and precompress to increase data entropy. I seriously doubt anybody but a top-notch cryptanalyst can decrypt even the simplest attempt at a cipher from anybody who knows anything at all about cipher design.

Such a cryptanalyst is likely to be found only at some high-level government agency like the NSA, and he will likely be too busy to spare any time to decrypt your inane emails to your mistress. Consequently, I would postulate that if you design your own cipher and avoid becoming the next Snowden, your data will be just as safe as if you had used AES.

Which is how we end up with things like the weak Zip File [microsoft.com] and early MS-Office [iacr.org] encryption. Companies think they can roll their own, or take shortcuts and end up with weak security.

Published algorithms have withstood scrutiny by actual experts; don't assume that your home-grown super-secret encryption will stand up to scrutiny - it may leave patterns in the data that can be exploited to decrypt it [cryptochallenge.com]

Re:Security through obscurity (1)

Chemisor (97276) | about 9 months ago | (#46104629)

Which is how we end up with things like the weak Zip File and early MS-Office encryption. Companies think they can roll their own, or take shortcuts and end up with weak security. Published algorithms have withstood scrutiny by actual experts, don't assume that your home-grown super-secret encryption will stand up to scrutiny

Funny you mentioning Zip and Office encryption. Neither of those ciphers is broken. If you read the papers you are linking to you'll find that the zip attack exploits its byte-by-byte CBC mode. With only a byte, dependencies between sequential bytes can be put into a solvable matrix. Expanding the block to even 4 bytes would make this attack infeasible. Office encryption break likewise exploits the CBC weakness, due to Office reusing IVs. The cipher, RC4, happens to be one of your published algorithms. This just illustrates that the cipher is only one part of any cryptosystem, and the way you use it also matters. If you know enough to make your blocks large enough, like 16 bytes, and are aware that IVs need to be unique, there is no reason you couldn't design your own secure cipher. Cryptographers are not supergeniuses. All it takes is some attention to detail.
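The IV-reuse failure mentioned here is easy to demonstrate: with a stream cipher like RC4, reusing the key/IV pair means reusing the keystream, and XORing two such ciphertexts cancels it out entirely. A sketch with a stand-in keystream (not RC4 itself; illustration only):

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Stand-in keystream (NOT RC4, NOT secure) -- illustration only."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

p1 = b"attack at dawn!!"
p2 = b"retreat at dusk!"
ks = keystream(b"same key + same IV", 16)   # the reuse is the bug
c1, c2 = xor(p1, ks), xor(p2, ks)

# The keystream cancels out: the attacker learns p1 XOR p2 directly,
# which is why IV reuse (as in early Office) breaks the whole scheme
# no matter how strong the underlying cipher is.
assert xor(c1, c2) == xor(p1, p2)
```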

Re:Security through obscurity (0)

Anonymous Coward | about 9 months ago | (#46103673)

The problem with using a secret encryption algorithm is more that the algorithm has to be shared in order for the scheme to be used.

It's essentially the same problem with using account number to validate a bank transaction or social security number as a password.

Re:Security through obscurity (1)

dcollins (135727) | about 9 months ago | (#46103801)

I think perhaps a clearer line to draw would be: Good crypto-systems are based on an algorithm which is known publicly (and can be assessed for strength), plus a secret key which is easily alterable (in case of a leak, new or removed partner, etc.) Thinking that you can keep secret part of the fixed, unchangeable infrastructure is the mistake that "security through obscurity" is meant to warn against. Murkiness about which part is which is bad.

Re:Security through obscurity (0)

Anonymous Coward | about 9 months ago | (#46107545)

Most (all?) encryption relies on security through obscurity at some level.

Wrong. The well-known modern crypto algorithms all rely upon the difficulty of solving selected computational problems, generally ones that are known to be hard. In this case, hard means that there exist no generally known algorithms that solve them in an amount of time polynomial in the size of the keys. Thus, for even a modest-sized key it would require a ridiculous amount of time, generally not substantially better than brute force (trying all possible keys one by one), to solve the problem and recover the data. In fact, solving these problems by brute force or best-known computations is so time consuming and difficult that even most governments don't generally bother, because it's far easier to steal your keys by other means, ranging from bugs or surveillance to the old rubber hose (depending upon where in the world one goes).

Re:Security through obscurity (0)

Anonymous Coward | about 9 months ago | (#46107891)

Similarly, hiding or not writing down your password is security through obscurity that works. But trying to hide the implementation details of your cipher algorithm does not, because cryptoanalysis can break your encryption even without access to your encryption algorithm.

The majority of successful attacks aren't from people who know cryptanalysis but from script kiddies who have read up on known flaws. Rolling your own encryption, even if it's just a ROT14 algorithm, is probably going to be more effective unless a government targets you specifically.
Heck, if everyone rolled their own encryption, the automatic farming that the NSA does wouldn't be viable, since they would actually have to decide who is interesting and then do an analysis.

If you are targeted by a cryptoanalyst then it is very likely that he is on the payroll of someone with enough resources to gain man-in-the-middle access, hire a locksmith, look up when you are/aren't at home, buy off a judge to get a warrant and so on. Physical access to your computer will be impractical but possible.

Fake info generation to stop intrusive phone apps (5, Interesting)

Animats (122034) | about 9 months ago | (#46102853)

I'd been looking into this in a slightly different context. Recently, at Hacker Dojo, someone demonstrated an Android mod to me which dealt with applications that demand too many privileges. It has the usual "disable privileges" option, but for apps that won't run with privileges disabled, it sends fake info.

The demo showed generation of fake phone serial numbers and such. That's easy. Apps that improperly try to upload your address book, though, require generation of a plausible, but fake, address book. That wasn't in the demo, but it's worth doing. Location data should probably be sent as a random walk from some random starting point.

If enough people do this, it will garbage marketing databases.

Re:Fake info generation to stop intrusive phone ap (1)

geminidomino (614729) | about 9 months ago | (#46102955)

Are you talking about [Open]PDroid? I ask because if there's another mod that does the same thing, I might want to look into it. :)

Re:Fake info generation to stop intrusive phone ap (5, Informative)

sayno2quat (1651749) | about 9 months ago | (#46103023)

There is XPrivacy [xda-developers.com] , which uses the XPosed framework [xda-developers.com] . That doesn't disable permissions, but rather sends fake data to the app.

Re:Fake info generation to stop intrusive phone ap (1)

geminidomino (614729) | about 9 months ago | (#46103077)

Oh, that looks interesting. If my Nexus 5 ever gets here (stupid winter storms...) I'll definitely want to be read up on that.

Thanks!

Re:Fake info generation to stop intrusive phone ap (1)

tillerman35 (763054) | about 9 months ago | (#46103061)

I do lots of similar work when generating test data. It's pretty common to have libraries for things like "make up a plausible address" or "randomly generate a credit card number." Extreme cases can generate whole narratives, even intentionally injecting spelling and grammar errors at varying rates in order to fool packages that use lexical analysis to detect robot text.

The work that spammers have done attempting to fool email filters is probably directly applicable to this effort.
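A minimal version of the "randomly generate a credit card number" helper described above: pick a known-style prefix, fill with random digits, and append the Luhn check digit so the fake passes validation. (The `fake_card_number` name and the Visa-style "4" prefix are illustrative assumptions, not any particular library's API.)

```python
import random

def fake_card_number(prefix: str = "4", length: int = 16) -> str:
    """Generate a random card number that passes the Luhn checksum.
    Purely for plausible test data -- not a real account number."""
    body = prefix + "".join(str(random.randint(0, 9))
                            for _ in range(length - len(prefix) - 1))
    # Compute the Luhn check digit for the body (rightmost body digit
    # sits at an even position once the check digit is appended, so it
    # is one of the doubled digits).
    digits = [int(d) for d in body][::-1]
    total = 0
    for i, d in enumerate(digits):
        if i % 2 == 0:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return body + str((10 - total % 10) % 10)
```

The same trick generalizes to addresses and names: sample from the right distribution and satisfy whatever checksums or formats a validator would test.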

Re:Fake info generation to stop intrusive phone ap (1)

flyingfsck (986395) | about 9 months ago | (#46103439)

Verbing of nouns weirds the language.

Re:Fake info generation to stop intrusive phone ap (1)

dcollins (135727) | about 9 months ago | (#46103823)

Because awesome.

How does the decrypter know what to send out? (2)

erice (13380) | about 9 months ago | (#46102877)

If the software is detecting that the key is bad then all the attacker has to do is use software that doesn't do this. This assumes that the attacker has direct access to the file. If not, then well known throttling techniques apply and the new wrinkle doesn't buy much.

Making bogus data come out without requiring specific software for decryption seems like a very hard problem. Every data type will need not just unique software but unique encryption algorithms that are both secure and not trivial extensions of known algorithms.

Re:How does the decrypter know what to send out? (0)

Anonymous Coward | about 9 months ago | (#46103043)

The only way I can think of doing it is by, say, storing data within the file that has a bunch of random keys long before the actual key.
So, all of those will eventually be decrypted as well.
It would be similar to hidden containers with Truecrypt, except they are assigned random garbage not to be used, and equally similar to the Owner-Free File System, where no actual files of worth exist unless you have the "layout" file that actually assembles all the file chunks into an actual useful file.
This layout file would be the correct key.

This method would have both an advantage and a disadvantage at the same time: it would increase the size of the file a bit, but that would also hide the data's real size, which could otherwise be guessed by knowing how the encryption works.

It could be very very useful for strong passwords on things like banking websites.
Equally it would be useful as a method to CATCH people trying to hack accounts.
Remember Ghost Bans? How about Ghost Logins?
You make them THINK you logged in to the correct account with say, financial information, balances, etc. They try to do stuff with a fake account, bam, caught.
Just make sure to make the logged in user KNOW they logged in. You KNOW some moron is going to log in, walk away, come back and try to do something with a ghost account and get a knock on the door... I guess you could always keep a successful login notice on screen until clicked away. Or have it in big huge letters on a dashboard.
Stuff like that would make hacking considerably harder for people because they'd be attacking / abusing ghost accounts.
You could even use SOME information that a person would have to know to get in to the account, such as maybe an email or real name, to trick them even further in to thinking they are hacking their target.
That'd be hilarious to watch on server logs.

Re:How does the decrypter know what to send out? (0)

Anonymous Coward | about 9 months ago | (#46103057)

Think of it this way: let's say I am protecting a set of phone numbers.

They follow a fixed set of rules.

Let's say my phone number is 858-123-4567.

I put in key xyz and I get my phone # back.
However, if I put in key abc, I get a *different* phone number back, say 777-857-4528.

Both *look* like phone #s. There is no way to tell if it is a real number or fake one. This would only work for very *narrow* cases.

So even if I brute force the keys, I can not say reasonably that I am getting the right phone numbers. The attacker could use other side-channel facts, though: for example, 555 is typically a fake prefix, and if you know someone's address, they probably would not have an area code outside of where they live (though they still can).

This could be a useful technique but only in very narrow cases.
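The phone-number scheme above sketches naturally as arithmetic modulo the number of possible numbers: every key, right or wrong, decrypts to something phone-shaped. A toy version, with a SHA-256-derived additive pad standing in for a real cipher (not secure; illustration only):

```python
import hashlib

SPACE = 10_000_000_000  # every 10-digit string is a "plausible" phone number

def pad(key: str) -> int:
    """Derive an additive pad from the key (toy stand-in for a real cipher)."""
    return int.from_bytes(hashlib.sha256(key.encode()).digest(), "big") % SPACE

def encrypt(phone: str, key: str) -> int:
    return (int(phone.replace("-", "")) + pad(key)) % SPACE

def decrypt(ct: int, key: str) -> str:
    s = f"{(ct - pad(key)) % SPACE:010d}"
    return f"{s[:3]}-{s[3:6]}-{s[6:]}"   # always formats as a phone number

ct = encrypt("858-123-4567", "xyz")
assert decrypt(ct, "xyz") == "858-123-4567"      # right key: the real number
assert decrypt(ct, "abc") != decrypt(ct, "xyz")  # wrong key: a different,
                                                 # equally phone-shaped number
```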

Re:How does the decrypter know what to send out? (0)

Anonymous Coward | about 9 months ago | (#46103743)

That doesn't even seem so narrow - it's brilliant, leaving nothing but entropy in the data!

Compression could do this (1)

erice (13380) | about 9 months ago | (#46103517)

Bad form, I know, to respond to your own posting. But it occurs to me that data-specific compression could accomplish the goal. Credit card numbers have structure. If you create a mapping between only valid card numbers and the minimum number of bits, then encrypt that, it doesn't matter how the data is decrypted: it always produces valid-looking credit card numbers. The catch, though, is that the bit mapping needs to be exact. If the total number of possible credit card numbers is not a power of 2, then there will always be decryption failures that produce detectably invalid results.
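For 16-digit card numbers the mapping is in fact exact, because the Luhn check digit is redundant: valid numbers correspond one-to-one with the integers in [0, 10^15). A sketch of the rank/unrank step (note 10^15 is not a power of 2, which is exactly the decryption-failure caveat above if the cipher works on raw bit blocks):

```python
def luhn_check_digit(body: str) -> str:
    """Compute the Luhn check digit for a 15-digit body (public algorithm)."""
    digits = [int(d) for d in body][::-1]
    total = 0
    for i, d in enumerate(digits):
        if i % 2 == 0:       # these positions are doubled once the
            d *= 2           # check digit is appended on the right
            if d > 9:
                d -= 9
        total += d
    return str((10 - total % 10) % 10)

def rank(card: str) -> int:
    """Bijection: the check digit is redundant, so the first 15 digits
    of a Luhn-valid 16-digit number ARE its rank in [0, 10**15)."""
    return int(card[:15])

def unrank(r: int) -> str:
    body = f"{r:015d}"
    return body + luhn_check_digit(body)

# Encrypt rank(card) with any cipher over [0, 10**15); every decryption,
# right or wrong, unranks to a syntactically valid card number.
assert unrank(rank("4532015112830366")) == "4532015112830366"
```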

Re:Compression could do this (1)

Anti-Social Network (3032259) | about 9 months ago | (#46104997)

Sounds to me this is more of an approach rather than a specific implementation. TFA talks about specific data types, such as credit card numbers and passwords. Reading between the lines, it seems like something that would be set up with input from a knowledgeable system administrator or hard-coded for a specific purpose; password manager is specifically mentioned.

So you write this program such that the data type information is not part of the encrypted data but explicitly provided as (for instance) a map that corresponds to valid password characters. After the algorithm is run on the encrypted data, you simply write the computed output to an integer value, and convert to ASCII using the aforementioned map (or, as you've mentioned, compression scheme). Similar methods are used to scale certain random number generating functions to any particular number range. This way, any binary dataset can be converted to text, but whether it's the real data or not is impossible to guess because it's by definition valid ASCII text. You're then free (as the user) to XOR the raw binary with whatever key your algorithm produces based on the master password typed by the user in order to produce the stored value.

Since I am not an expert in this field, the fact that it seems pretty trivial to me probably means either it's not new, and therefore not newsworthy, or there's some detail here that makes it special in some arcane way or niche application.

2 easy ways - encrypt fake data or printf(decrypt) (1)

raymorris (2726007) | about 9 months ago | (#46103561)

> Making bogus data come out without requiring specific software for decryption seems like a very hard problem.

I can think of ways right off. First, you can encrypt decoy data along with the actual plaintext. It's not that the decryption software CREATES the decoy data; the decoy is already there. Decryption uses part of the key to separate the real data from the decoy.

Given that the data format is known, there's another easy way - packing. Assume the proper format is like social security numbers:
000-00-0000

What you want to avoid is letting the attacker recognize from the format that 571-22-5557 is correct, while jdksfs8fgh is not. To do that, before encryption, pack the data. That means you strip out the hyphens and save the numbers as one byte per pair of digits (or even 6 bits per digit). So the actual decryption returns unformatted, random-looking bytes. Those are then formatted using something like printf() to make the actual source plaintext. If the right key is used, they'll be unformatted bytes until printf is run. If the wrong key is used, again you get unformatted bytes until printf formats them. The output comes out in the same format whether the key is correct or not. An incorrect key will result in invalid data that looks just like valid data.
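A toy version of that pack/format round trip, with Python string formatting standing in for the printf() step (illustration only; it ignores the slight bias from the final modulo):

```python
def pack(ssn: str) -> bytes:
    """Strip the hyphens and store the 9 digits compactly (4 bytes)."""
    return int(ssn.replace("-", "")).to_bytes(4, "little")

def unpack(raw: bytes) -> str:
    """Format ANY 4 bytes as an SSN -- the printf() step."""
    n = int.from_bytes(raw, "little") % 1_000_000_000  # keep 9 digits
    s = f"{n:09d}"
    return f"{s[:3]}-{s[3:5]}-{s[5:]}"

assert unpack(pack("571-22-5557")) == "571-22-5557"
# Decrypting with a wrong key yields some other 4 raw bytes, which still
# format as a plausible SSN rather than as obvious garbage.
assert unpack(b"\x01\x02\x03\x04").count("-") == 2
```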

Re:2 easy ways - encrypt fake data or printf(decry (0)

Anonymous Coward | about 9 months ago | (#46105039)

One problem is if birth dates are available. One could check whether the SSN would have been issued for that birthday. Probably doable even if separate keys were used for the birth dates and SSNs, but it would slow and complicate the attack.

Re:How does the decrypter know what to send out? (1)

logicnazi (169418) | about 9 months ago | (#46107861)

It will only work for data that is so well characterized you can find the information theoretic optimal representation for it, i.e., you can bijectively map each message onto the integers mod n so that each integer is equally likely to be seen.

Other than CCNs I can't think of much which satisfies this condition.

I guess it could help...but..... (0)

Anonymous Coward | about 9 months ago | (#46103009)

How does the correct user who enters an incorrect password recognize his mistake? Could be funny

A comedy of errors coming to a computer near you.....

fake data 1 / 2,000 incorrect keys (2)

raymorris (2726007) | about 9 months ago | (#46104125)

One way we do it is to return a "fake" only occasionally. The person who gets their password wrong is very unlikely to see a fake. On the other hand, a bad guy who is trying out 100,000 possible keys will get 50 fakes.

This works especially well if the bad guy doesn't know it's designed to occasionally generate fakes. He thinks he actually did decrypt passwords, but the list he has is no longer valid. Maybe it's out of date, he thinks, or maybe they are stored backward, or maybe we KNOW he stole the list and therefore we've changed all of the passwords. It was entertaining to read the cracker message boards when we first introduced that feature.

Now, the crackers who keep themselves informed know that we generate fakes, and it annoys them greatly. They don't yet know that we do TWO levels of fakes. A certain percentage of the fakes pass their extra level of checking they now have to do to weed out the fakes. In other words, they THINK they are weeding out the fakes, but they are actually only weeding out the level 1 fakes.
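One way to sketch the "occasional fake" idea is to make the decision a deterministic function of the wrong guess itself, so retrying the same guess always gives the same answer and can't reveal the trick (hypothetical names; toy code, not the poster's actual system):

```python
import hashlib

REAL_PASSWORD_HASH = hashlib.sha256(b"correct horse").hexdigest()

def check(guess: str, fake_rate: int = 2000) -> str:
    """Return 'ok' for the real password, a decoy for roughly 1 in
    `fake_rate` wrong guesses, and 'fail' otherwise."""
    h = hashlib.sha256(guess.encode()).hexdigest()
    if h == REAL_PASSWORD_HASH:
        return "ok"
    # Decide from the guess's own hash: the same wrong guess always
    # yields the same result, so the fakes look like stable data.
    if int(h, 16) % fake_rate == 0:
        return "decoy:" + h[:8]   # stands in for a plausible fake credential
    return "fail"

assert check("correct horse") == "ok"
assert check("hunter2") == check("hunter2")   # deterministic per guess
```

A legitimate user who typos their password almost never sees a fake, while a brute-forcer trying 100,000 keys collects about 50 of them.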

Re:fake data 1 / 2,000 incorrect keys (0)

Anonymous Coward | about 9 months ago | (#46105535)

They don't yet know that we do TWO levels of fakes.

Well, they do now.

Similar to deniable encryption... (1)

Vexler (127353) | about 9 months ago | (#46103271)

The goal stated in the article is similar to that of deniable encryption. Whereas "honey encryption" works through a piece of dedicated software, deniable encryption works by constructing a block of ciphertext in such a way that different plausible plaintexts can be recovered depending on which symmetric key is used for decryption. Of course, only the user knows how many different plaintexts are actually buried in the ciphertext, and under duress (rubber hose, point of a gun, etc.) he can relinquish the non-incriminating plaintext and claim innocence.
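With an XOR-style cipher the duress trick is mechanical: given the ciphertext and any innocuous decoy plaintext, one can compute a "key" that decrypts to the decoy. (This is the classic one-time-pad version of deniability; real deniable containers like TrueCrypt's hidden volumes work differently.) A sketch:

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

secret   = b"send funds to X!"
real_key = os.urandom(len(secret))      # one-time pad
cipher   = xor(secret, real_key)

# Under duress, hand over a key computed to "decrypt" to something innocuous.
decoy     = b"grocery list: .."
decoy_key = xor(cipher, decoy)

assert xor(cipher, real_key)  == secret   # the real plaintext
assert xor(cipher, decoy_key) == decoy    # the deniable plaintext
```

The same ciphertext thus supports as many plausible "decryptions" as you are willing to fabricate keys for, and an interrogator cannot tell which key is real.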

Why do they not protect the encryption pattern? (1)

indybob (2731135) | about 9 months ago | (#46103375)

When they create an encryption program, they normally use one precise algorithm, for example RSA-128, RCx, IDEA, etc. When a hacker wants to decrypt a data file by brute force, they already know the encryption method used, so they can use another "home" program with the same algorithm instead of the "official" program. So why not use more than one algorithm? With modern computer power, you can chain many different cryptography algorithms in a "random" sequence. Example: Plain Data --> AlgoA --> AlgoR --> AlgoB --> AlgoZ --> AlgoT --> AlgoW --> AlgoA --> AlgoB --> AlgoG --> AlgoK --> encrypted data. The user has two things to remember: the "key" and the "algorithm sequence" (in this case "A+R+B+Z+T+W+A+B+G+K"). So if an attacker does not have the right sequence of algorithms, you multiply the computing time needed to brute-force the stolen data.
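A toy sketch of the cascade idea, using keyed XOR pads as stand-ins for the named algorithms (illustration only). It also exposes the catch: XOR layers collapse into a single pad regardless of order, so a cascade only adds the hoped-for work factor when the component ciphers don't compose away like this.

```python
import hashlib

def stream(algo: str, key: str, n: int) -> bytes:
    """Keyed pad standing in for hypothetical cipher 'algo' -- not secure."""
    out, c = b"", 0
    while len(out) < n:
        out += hashlib.sha256(f"{algo}|{key}|{c}".encode()).digest()
        c += 1
    return out[:n]

def apply_chain(data: bytes, key: str, sequence: str) -> bytes:
    """Apply one XOR layer per letter in the algorithm sequence."""
    for algo in sequence:                 # e.g. "ARBZTW"
        pad = stream(algo, key, len(data))
        data = bytes(x ^ y for x, y in zip(data, pad))
    return data

ct = apply_chain(b"plain data here!", "key", "ARBZTW")
assert apply_chain(ct, "key", "ARBZTW") == b"plain data here!"  # XOR is self-inverse
assert apply_chain(ct, "key", "WTZBRA") == b"plain data here!"  # ...and here the
# order doesn't even matter -- the "secret sequence" adds less than it seems.
```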

Not TOO easy a fake (0)

Anonymous Coward | about 9 months ago | (#46103379)

If he wants verisimilitude, he should program it to cough up the plausible fake only after 'X' (a large random number of) tries.

Lorem Ipsum (1)

flyingfsck (986395) | about 9 months ago | (#46103487)

Fake data is an old idea: "Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum."

Hey, the 90s are calling! (0)

Anonymous Coward | about 9 months ago | (#46103593)

Great idea! Except that it looks exactly like what professional crackpot David A. Scott used to propagate endlessly on sci.crypt back in the late 90s ...

validity test? (0)

Anonymous Coward | about 9 months ago | (#46103601)

So let me get this straight... The brute-force password decryption yields plausible-looking passwords on every failed attempt... so that means the brute-force attacker would require some form of test to determine if the plausible passwords are correct, like, say, using said password to log in to its destination account?

"Honey Encryption" (1)

Hugh Pickens DOT Com (2995471) | about 9 months ago | (#46103921)

"Honey Encryption" to Bamboozle Attackers with Fake Secrets

Tom Simonite writes at MIT Technology Review that security researcher Ari Juels says that trickery is the missing component from the cryptography protecting sensitive data and proposes a new encryption system with a devious streak. It gives encrypted data an additional layer of protection by serving up fake data in response to every incorrect guess of the password or encryption key [technologyreview.com] . If the attacker does eventually guess correctly, the real data should be lost amongst the crowd of spoof data. The new approach could be valuable given how frequently large encrypted stashes of sensitive data fall into the hands of criminals. Some 150 million usernames and passwords were taken from Adobe servers [theguardian.com] in October 2013, for example. If an attacker uses software to make 10,000 attempts to decrypt a credit card number, for example, they would get back 10,000 different fake credit card numbers. "Each decryption is going to look plausible," says Juels. "The attacker has no way to distinguish a priori which is correct." Juels previously worked with Ron Rivest, the "R" in RSA, to develop a system called Honey Words to protect password databases by also stuffing them with false passwords [mit.edu] . Juels says that by now enough password dumps have leaked online to make it possible to create fakes that accurately mimic collections of real passwords and is currently working on creating the fake password vault generator needed for Honey Encryption to be used to protect password managers. This generator will draw on data from a small collection of leaked password manager vaults, several large collections of leaked passwords, and a model of real-world password use built into a powerful password cracker. "Honeywords and honey-encryption represent some of the first steps toward the principled use of decoys [jhu.edu] , a time-honored and increasingly important defense in a world of frequent, sophisticated, and damaging security breaches."

AWESoME FP (-1, Redundant)

Anonymous Coward | about 9 months ago | (#46103949)

How does it hold up under an oracle? (0)

Anonymous Coward | about 9 months ago | (#46104093)

I just want to know... if this "random decryption" holds up to an attacker's anonymous, single use, non-random oracle...

Sure, if I compromise the database, and every decryption attempt looks valid...I'm going to have a hard time.

But I'm a competent attacker-- presumably I'm free to submit my, or someone else's legit CCard into the system, or cross reference a single (or more) known good cards.

Once I hack the whole database, I now have an oracle to verify the correct decryption against. Unless there's a different key for every entry, I now just have to look up my known-good card to see if the decryption worked, and it's game over.
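The attack above sketches in a few lines against any scheme where every key yields plausible output, here with a toy additive cipher over 4-digit PINs (illustration only):

```python
def decrypt(ct: int, key: int) -> int:
    """Toy honey scheme: EVERY key yields a plausible 4-digit 'PIN'."""
    return (ct - key) % 10_000

# Database encrypted under key 777; the attacker planted record 0 themselves,
# so they know its plaintext (1234) -- a known-plaintext oracle.
db = [(1234 + 777) % 10_000, (9999 + 777) % 10_000]

found = next(k for k in range(10_000) if decrypt(db[0], k) == 1234)
assert found == 777
assert decrypt(db[1], found) == 9999   # the rest of the database falls too
```

The decoys only help when the attacker has no independent way to test a candidate key; one known record (or a per-record key) changes the picture completely.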

Ezmo pubmlic tuiolo vueok (1)

Tablizer (95088) | about 9 months ago | (#46104299)

Troffle blent murper humph flempto gretch fooma irf pwenty eb wertoo bakk empo flbe ilffy fez mulok.

Another NSA backed initiative (0)

Anonymous Coward | about 9 months ago | (#46104879)

The NSA realises that most people who use encryption have a very poor grasp of logic, and can therefore be carefully fooled into moving from good security practises and towards systems fully compromised by NSA activity. Call this the FUD propaganda method.

An example. The NSA funded pseudo-science papers, and paid to have these papers published in peer-reviewed journals, stating that erased data on hard-drives could be recovered by sufficiently expert forensic labs. Such claims were utterly laughable. Essentially, the NSA had their agents claim that the capacity of any HDD was effectively infinite, because after wiping the current data with a new set of random data, both the new data AND the original data could be recovered.

To an alpha, such nonsense is hysterically funny, but to most betas, the idea of NSA level personnel having some magic 'magnetic surface scanner' 'recreating' all that erased data seems plausible. The goal of the NSA was to discourage people from using the ONLY safe form of erasing procedure (complete over-write with random data), and instead to rely on tools from companies like Microsoft, Norton etc., that only APPEAR to erase data, but actually either simply flag the data, or use a procedure that allows the firmware of the HDD to effectively ignore the erase instruction.

ANY 'magic' complicated security procedure from people with long associations with the NSA (like Ari Juels) is a simple obvious NSA ploy. The NSA has been paying a LOT of money to have Truecrypt attacked in print recently. No brute force attack can work against Truecrypt if a decent password phrase is chosen- so what is this nonsense about wanting 'security' if a brute-force attack cracks the password? And if a successful attack CANNOT recognise the decrypted data as the data desired, you have what is known as DOUBLE encryption (an encrypted container within an encrypted container), which against the 'common sense' of betas, is actually a highly flawed method to safely encrypt data.

PS the container within a container method used sometimes by Truecrypt users is NOT an example of double encryption, but an example of attempting to hide the use of encryption- a very different concept.

For those interested... (0)

Anonymous Coward | about 9 months ago | (#46107613)

Here is an open conference scheduled next week by Ari Juels in Lausanne, CH:

The event [memento.epfl.ch]
 

Very narrow use (1)

logicnazi (169418) | about 9 months ago | (#46107849)

The only cases in which this approach can work are those where the distribution of plaintext is known in advance.

Since the algorithms used to generate CCNs are largely public, one can map the class of apparently valid CCNs (suppose it has n members) bijectively into the integers mod n; assuming the CCNs are uniformly distributed over the apparently valid CCNs (likely), their images in the integers mod n are uniformly distributed. Assume that ENC_k is any standard encryption function (public or private) with key k operating on inputs from the integers mod z, where z >= n (usually a power of 2 for a symmetric encryption function). Given a CCN c, map it to a value c' in mod n arithmetic and generate a random value 0 <= r < z. We can now encrypt our CCN as the pair (r + c' mod n, ENC_k(r)).

This will ensure that no statistical test will be able to distinguish a correct choice of the key from an incorrect one.

This is useless, however, for data like English text or names, which don't have an easily describable distribution. The construction above relied on our ability to select an information-theoretically optimal compression function for CCNs, i.e., one which bijectively maps the message into a uniform distribution on the integers mod n for some n. This is impossible for things like proper names or English text.
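A minimal sketch of this construction, with a toy hash-based stand-in for ENC_k (illustration only, not a secure cipher; the constants and names are assumptions):

```python
import hashlib, secrets

N = 10**16   # size of the set of apparently valid CCNs (toy stand-in)
Z = 2**64    # cipher input space, z >= n

def prf(key, nonce):
    # Toy PRF standing in for a real cipher ENC_k; not secure.
    return int.from_bytes(hashlib.sha256(key + nonce).digest()[:8], "big")

def encrypt(ccn, key):
    r = secrets.randbelow(Z)
    nonce = secrets.token_bytes(16)
    # The pair (r + c' mod n, ENC_k(r)) described above.
    return ((ccn + r) % N, r ^ prf(key, nonce), nonce)

def decrypt(ct, key):
    masked, enc_r, nonce = ct
    r = enc_r ^ prf(key, nonce)
    # A wrong key recovers a wrong r, so this yields an essentially
    # uniform -- but apparently valid -- CCN rather than garbage.
    return (masked - r) % N
```

Every candidate key thus produces a well-formed 16-digit number, which is exactly why no statistical test on the output can flag the correct key.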

At last someone is thinking (0)

Anonymous Coward | about 9 months ago | (#46108033)

About time something like this started to appear .. ..
