
99.8% Security For Real-World Public Keys

Soulskill posted more than 2 years ago | from the what's-.2%-among-friends dept.

An anonymous reader writes "If you grab all the public keys you can find on the net, then you might expect to uncover a few duds — but would you believe that 2 out of every 1000 RSA keys is bad? This is one of the interesting findings in the paper 'Ron was wrong, Whit is right' by Lenstra, Hughes, Augier, Bos, Kleinjung and Wachter. Quoting from the paper's abstract: 'We performed a sanity check of public keys collected on the web. Our main goal was to test the validity of the assumption that different random choices are made each time keys are generated. We found that the vast majority of public keys work as intended. A more disconcerting finding is that two out of every one thousand RSA moduli that we collected offer no security. Our conclusion is that the validity of the assumption is questionable and that generating keys in the real world for "multiple-secrets" cryptosystems such as RSA is significantly riskier than for "single-secret" ones such as ElGamal or (EC)DSA which are based on Diffie-Hellman.'" For a layman's interpretation of the research, the NY Times has an article about the paper. Update: 02/15 01:34 GMT by S : Security researcher Dan Kaminsky has commented on the paper, saying that while the survey work itself is good, it doesn't necessarily support the paper's thesis. He writes, "On the most basic level, risk in cryptography is utterly dominated, not by cipher selection, but by key management. The study found 12,720 public keys. It also found approximately 2.94 million expired certificates. And while the study didn’t discuss the number of certificates that had no reason to be trusted in the first place (being self signed), it did find 5.4M PGP keys. It does not matter the strength of your public key if nobody knows to demand it."


No security at all...? (5, Interesting)

icebike (68054) | more than 2 years ago | (#39039599)

Quoting from the NYT article:

They were able to produce evidence that a small percentage of those numbers were not truly random, making it possible to determine the underlying numbers, or secret keys, used to generate the public key.

This is a far cry from "no security at all" if I understand it correctly. Any email encrypted with those keys would still be encrypted. And Joe Random Lurker would not be able to decrypt it even if he did intercept it.

However, Random Monitoring Agency might amass enough such emails to make a guess at the random number used in key generation. You need a fairly good-sized pool of keys to work from in order to figure out the key behind any single encryption.

The paper goes on to state:

Cryptosystems such as RSA that require, during key-setup, multiple secrets are more affected by the apparent difficulty to generate proper random values than systems such as Diffie-Hellman (cf. [8]), ElGamal, and (EC)DSA that require a single secret. For either type of system identical keys can be exploited only by parties that can be identified by searching for the owner of the duplicate. But for the former (RSA) partially identical choices allow any malcreant to commit fraud.

For some values of "Any". You still need a significant number of such RSA keys among which to search for duplicate random numbers.

So DSA keys are safer it would seem.

Re:No security at all...? (5, Insightful)

Spykk (823586) | more than 2 years ago | (#39039697)

If what that quote says is true and you could derive the secret key from the public key, then one could say that the key is worse than no security at all. Public keys are, by definition, public. They are generally available to the public at large on keyservers like http://pgp.mit.edu/ [pgp.mit.edu] You wouldn't need to intercept any messages, because you could use the public key to encrypt any number of examples. The false sense of security presented by encrypting something with one of these flawed keys would make them very dangerous indeed.

This needs a car analogy! (5, Insightful)

khasim (1285) | more than 2 years ago | (#39039871)

Suppose someone checks thousands of cars and finds that 998 out of every thousand cars checked had good, working brakes.

But 2 out of every thousand cars checked had bad brakes.

Is the braking system on cars broken?

Or do we need to find out how and why those particular cars have problems? I vote for this one.

Re:This needs a car analogy! (3, Insightful)

karnal (22275) | more than 2 years ago | (#39040057)

But at some point you want to fix the source, as it's cheaper and easier from a security (and a physical) perspective to fix it there.

To take your example into account - say Toyota is building a line of Camry cars out of building X. 998 of 1000 have good brakes. They are interested in fixing the cars that are broken, but they're also going to launch an investigation as to why they're broken in the first place. In this case, could be a bad supplier shipping slightly out of spec parts; could be a worker on the line who is dissatisfied with his/her job. That's where fixing the system comes into play - if the system works as it should, then there's no cars to fix (in an ideal world - and that's what we try to get to with security as well.)

Re:This needs a car analogy! (5, Informative)

martin-boundary (547041) | more than 2 years ago | (#39040287)

But the situation looks different from an attacker's point of view. The attacker sees 2/1000 guaranteed easy targets, i.e., it's a fishing expedition which costs about 1000 tries for about 2 successes, so he can budget for the processing power etc.

Moreover, the attacker doesn't care if the ultimate cause of the security failure is the manufacturer or the user or some freak lightning storm. All he cares about is that, statistically, 2/1000 are guaranteed.

Re:This needs a car analogy! (1)

Anonymous Coward | more than 2 years ago | (#39042143)

Mod parent up. This is something which is important for IT personnel to understand. I only understood it when some people started to systematically steal accounts with weak passwords on our site. It was not a problem for the vast majority of people with weak passwords, because there was a relatively small chance that any one of them would be affected. It was a problem for us, because any malicious person, maybe even without scripting, just trying manually, was able to guess the password of one in every 20 accounts. Because they had a guarantee that they could steal some accounts with a small effort, they indeed started to steal accounts in large volumes.

Yep. But that's my point. (1)

khasim (1285) | more than 2 years ago | (#39040323)

But at some point you want to fix the source, as it's cheaper and easier from a security (and a physical) perspective to fix it there.

They've identified that there MAY be a problem.

But they haven't identified the source of the problem.

Is it a certain make/model of Toyota?
Is it a certain rotor from a manufacturer that is in different makes/models?
Is it user error?

Until they identify the source (sources?) of the problem, they cannot say that X is "broken", because X seems to work in 998 out of 1,000 cases.

So what is the difference between the 998 cases and the 2 cases?

Re:Yep. But that's my point. (1)

karnal (22275) | more than 2 years ago | (#39040477)

This is why I specifically stated one model of car (Toyota Camry), so as not to muddle the field with different part #s etc. And I only threw out suggestions as to how to fix it, seeing as I don't know what's broken in either case.

What's the difference? Well, what's the difference in the 2 encryption cases from a security perspective? Don't know the answer to either I'm afraid.

Re:Yep. But that's my point. (0)

Anonymous Coward | more than 2 years ago | (#39042169)

They probably have. But for obvious reasons, they can't tell until the concerned vendors have fixed their implementations and notified their customers...

Re:This needs a car analogy! (1)

Magada (741361) | more than 2 years ago | (#39042639)

Far less than 2/1000 Ford Pintos failed catastrophically.

Re:This needs a car analogy! (1)

swb (14022) | more than 2 years ago | (#39045281)

Since the risk of car accidents generally is pretty high, was the Pinto really a risk, or was it more media perception?

Re:No security at all...? (4, Insightful)

ceoyoyo (59147) | more than 2 years ago | (#39039935)

Worse, the attacker could sign things that looked like they came from you.

Re:No security at all...? (2)

muckracer (1204794) | more than 2 years ago | (#39044803)

> Worse, the attacker could sign things that looked like they came
> from you.

Hey, I wrote that!! :-O

Re:No security at all...? (2)

uncmathguy (936555) | more than 2 years ago | (#39039759)

Surely if the authors found these useless keys, a bad guy could too, using the same publicly available lists of keys. Or am I missing something? Once the bad guys know which keys are bad, anything "secured" with those keys would be vulnerable. Sounds like "no security at all" is about right. Of course, people who don't want to spend the few hours it took the authors to search the public keys would still not be able to access your transmissions, but those aren't the people for whom the security is put into place.

Re:No security at all...? (2)

Omnifarious (11933) | more than 2 years ago | (#39039815)

Flaw in random number generator + Euclid's algorithm = known factors for public keys = totally broken public key.

Think about it. A flaw in random number generation may well result in several people independently picking the same factor for their public keys. Just run Euclid's GCD algorithm on all pairs of public keys, which is O(n^2 * m) where n is the number of keys and m is their average length. Poof, all the keys that managed to 'accidentally' share a factor with another one pop out with their factors, since an RSA modulus is just two big primes multiplied together. Game over for those keys.

Re:No security at all...? (1)

Omnifarious (11933) | more than 2 years ago | (#39039877)

And, in case you're still confused, this means that anything that was encrypted with this public key may now be decrypted. It also means that any signature it's ever made is now suspect as anybody who knew about this problem could've made that signature.

Of course, if someone is using a protocol that implements forward secrecy, just using the RSA key to sign a Diffie-Hellman exchange and then encrypting the communications with a block cipher under the resulting key, they might be safe. Of course, the same bug might result in predictable Diffie-Hellman keys too.

But any of those conversations may still have had a man-in-the-middle.

Re:No security at all...? (2)

postbigbang (761081) | more than 2 years ago | (#39041003)

Pseudo-random number generation is a problem that's been seen before. If numerous machines can end up in a state where their seeds coincide, then mixing in an additional factor, like a time-and-date hash, could re-randomize the keys, thus reducing the attack set.

I wonder if it's specific to one platform or another....

Re:No security at all...? (3, Insightful)

TheRealMindChild (743925) | more than 2 years ago | (#39041415)

This isn't about pseudo-random number generation in general, it is about using a seed that is easy to determine... like 0.

The claim that random number generation seeded with the timer/tick count can be easily guessed is, at best, misleading. Your application has no idea what services are running, what priorities they are running at, etc., and discovering those would itself take an unknown amount of system time, adding even more entropy to the situation.

Re:No security at all...? (1)

fatphil (181876) | more than 2 years ago | (#39043077)

"... on all pairs ..."

There are tens of trillions of pairs. You're the guy they predict will take 10 years to achieve what they did in an hour.

You need to combine and conquer in order to actually get results before the certificates expire. (A back of a fag packet calculation implies that a naive GCD tree could crack them all in less than a day using GMP.)
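For illustration, here is a minimal sketch of such a GCD tree in Python (a toy rendering of the batch-GCD idea, not Lenstra's actual code; Python's native bignums stand in for GMP):

    from math import gcd

    def product_tree(moduli):
        # build levels of pairwise products; leaves first, root last
        tree = [moduli]
        while len(tree[-1]) > 1:
            prev = tree[-1]
            tree.append([prev[i] * prev[i + 1] if i + 1 < len(prev) else prev[i]
                         for i in range(0, len(prev), 2)])
        return tree

    def batch_gcd(moduli):
        # for each modulus N, compute gcd(N, product of all the other moduli)
        tree = product_tree(moduli)
        rems = tree.pop()                    # root: the product of every modulus
        while tree:
            ns = tree.pop()                  # walk back down toward the leaves
            rems = [rems[i // 2] % (n * n) for i, n in enumerate(ns)]
        # now rems[i] == (product of all moduli) mod moduli[i]^2
        return [gcd(n, r // n) for n, r in zip(moduli, rems)]

    # toy demo: 15 and 21 share the factor 3; 21 and 77 share 7
    print(batch_gcd([15, 21, 77]))           # [3, 21, 7] -- a result equal to the
                                             # modulus itself means both primes leaked

One pass of this over n keys costs quasi-linear work in the number of keys, rather than the n^2/2 pairwise GCDs of the naive scan.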

Re:No security at all...? (1)

pla (258480) | more than 2 years ago | (#39039859)

You still need a significant number of such RSA keys in which to search for the use of duplicate random numbers.

And that limits an attacker how? Even ignoring the vast numbers of "abandoned" keys you can find with a clever Google search, you can always just make them yourself. A small army of Chinese prisoners can beat mere 500:1 odds in the time it takes most of us to drink our morning coffee.

Re:No security at all...? (3, Interesting)

icebike (68054) | more than 2 years ago | (#39039939)

Well, left unsaid is just how many trials it takes to determine whether the key in question is one of those 2 in 1000 that are vulnerable.
And the exact process is still not documented.

Re:No security at all...? (4, Informative)

Omnifarious (11933) | more than 2 years ago | (#39040039)

Steps:
1. You scoop up all the public keys you can find. People generally publish them. They're public keys.
2. You run GCD on each pair.
3. You find they share a common factor and you win! Both keys are now completely and totally compromised. You know the secret key for both of them.
4. Or... you find their only common factor is 1. Oh, well, on to the next pair.
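As a toy illustration of those steps (the naive quadratic scan, not what the researchers ran at scale), in Python:

    from itertools import combinations
    from math import gcd

    def find_shared_factors(moduli):
        # steps 2-4: GCD every pair; any result > 1 factors both moduli
        broken = {}
        for n1, n2 in combinations(moduli, 2):
            g = gcd(n1, n2)
            if g > 1:                        # shared prime: both keys fall
                broken[n1] = (g, n1 // g)
                broken[n2] = (g, n2 // g)
        return broken

    # toy "keys": 221 = 13*17 and 187 = 11*17 share the prime 17
    print(find_shared_factors([221, 187, 35]))

Note this is O(n^2) in the number of keys; as pointed out below, that's too slow for millions of keys, and the GCD-tree sketch above is the scalable route.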

Re:No security at all...? (1)

Anonymous Coward | more than 2 years ago | (#39041945)

2. You run GCD on each pair.

For those who didn't know: the Binary GCD algorithm [wikipedia.org] only requires O((log_2 uv)^2) time in the worst case, which makes this test very efficient.

However, the number of 2048-bit primes is so astronomically large (think ~10^600), that you're more likely to randomly guess the prime factors of a given key than you are to find a collision between any two RSA-4096 keys ever generated -- past, present, or future.
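For reference, a compact Python rendering of the binary GCD mentioned above (shifts and subtraction instead of division); Python's arbitrary-precision ints make it work on RSA-sized numbers as-is:

    def binary_gcd(u, v):
        # binary GCD: strip factors of two, then subtract odd numbers
        if u == 0:
            return v
        if v == 0:
            return u
        shared_twos = ((u | v) & -(u | v)).bit_length() - 1  # common factors of 2
        u >>= (u & -u).bit_length() - 1                      # make u odd
        while v:
            v >>= (v & -v).bit_length() - 1                  # make v odd
            if u > v:
                u, v = v, u
            v -= u                                           # difference is even
        return u << shared_twos

    print(binary_gcd(221, 187))   # 17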

Re:No security at all...? (1)

pla (258480) | more than 2 years ago | (#39042921)

However, the number of 2048-bit primes is so astronomically large (think ~10^600), that you're more likely to randomly guess the prime factors of a given key than you are to find a collision between any two RSA-4096 keys ever generated -- past, present, or future.

Very true, with one slight problem - We have no way to actually produce (non-trivial) guaranteed 2048-bit primes. Instead, in the real world we use likely primes via a combination of Fermat and Miller/Rabin testing.

TFA points out that, because we don't have good enough RNGs with which to seed these increasingly complex chains of lies, their reliability drops from one failure in 10^BigNumber keys down to (in the real world) 1 in a mere 500.
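For the curious, a bare-bones sketch of the Miller-Rabin testing mentioned above, in Python. Illustrative only: real implementations add trial division by many small primes, careful candidate generation, and well-chosen witness counts.

    import random

    def is_probable_prime(n, rounds=40):
        # Miller-Rabin: write n-1 = 2^s * d with d odd, then test witnesses
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13):
            if n % p == 0:
                return n == p
        d, s = n - 1, 0
        while d % 2 == 0:
            d //= 2
            s += 1
        for _ in range(rounds):
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(s - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False          # a is a witness: n is composite
        return True                   # probably prime (error < 4^-rounds)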

Re:No security at all...? (1)

fatphil (181876) | more than 2 years ago | (#39043159)

Bollocks. Google "APRCL" or "ECPP".

And typically we have good enough PRNGs, we just don't use good enough seeds, as the seed is the only source of entropy.

Re:No security at all...? (1)

fatphil (181876) | more than 2 years ago | (#39043469)

Bollocks. Now RTFA. Random number generation in some implementations is very flawed, and your entire conclusion is demonstrably false (and Lenstra has 12000 examples as proof).

Re:No security at all...? (1)

fatphil (181876) | more than 2 years ago | (#39043147)

The problem is that you have to make flawed keys. How do you know you're using the same flawed algorithm as (some subset of) the masses? If you aren't, then there's basically zero chance of a collision. (It's effectively trial division.)

Do you take a bomb on to a plane in order to guarantee your safety? The chances of there being 2 bombs on a plane is minimal, right?

Now, were you to find one factor using your technique, which cannot be pure luck, it must be because you're using the same flawed algorithm used by (a subset of) the masses; then it would indeed be worth persevering, as you discover how big that subset is. But I must reiterate, you'll never even find one factor just by pure chance.

Re:No security at all...? (0)

mveloso (325617) | more than 2 years ago | (#39039885)

The article states that it's possible to determine the underlying numbers, not that they did it.

That's just like the MD5 collision problem a few years back.

The bad thing is that someone may have discovered this possibility before the researchers did and may be actively exploiting it. That's problematic, but I suspect it's easier to compromise the back end than to attack TLS directly.

Re:No security at all...? (3, Informative)

Omnifarious (11933) | more than 2 years ago | (#39040059)

They did find the underlying numbers. The article basically tells you exactly what they did. They mention Euclid's algorithm and for anybody who knows how RSA works, it's obvious what they did. And what they did would result in them discovering the underlying numbers directly.

Re:No security at all...? (3, Interesting)

petermgreen (876956) | more than 2 years ago | (#39040163)

So DSA keys are safer it would seem.

DSA has its own problems. Most notably, merely USING your key to generate a signature with a broken random number generator can be enough to reveal it to an attacker. Thankfully PGP doesn't use OpenSSL, and while it's POSSIBLE to use the same keys for SSH and PGP, the impression I got is that not many people do.

http://rdist.root.org/2009/05/17/the-debian-pgp-disaster-that-almost-was/ [root.org]
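To make that concrete: in DSA, two signatures made with the same nonce k are visible because they share the same r, and a few lines of modular arithmetic recover the private key. A hypothetical sketch in Python (standard DSA symbol names; pow(x, -1, q) needs Python 3.8+):

    def recover_dsa_private_key(q, r, s1, s2, h1, h2):
        # DSA: s = k^-1 * (h + x*r) mod q
        # same k in two signatures => s1 - s2 = k^-1 * (h1 - h2) mod q
        k = (h1 - h2) * pow(s1 - s2, -1, q) % q   # recover the shared nonce
        return (s1 * k - h1) * pow(r, -1, q) % q  # then solve for the key x

The same algebra, applied to ECDSA with a constant nonce, is how Sony's PS3 signing key fell.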

China goes for the 0.02. (0)

sethstorm (512897) | more than 2 years ago | (#39039625)

Any possibility for information to be handed over to them is always worth the low odds.

Slow down (4, Interesting)

Effugas (2378) | more than 2 years ago | (#39039743)

I'm not seeing any data on what portion of those keys with bad moduli were actually attached to valid certificates.

It's damn fine survey work, but the conclusions are just strange. More detail here:

Survey is good. Thesis is strange: http://dankaminsky.com/ronwhit [dankaminsky.com]

Re:Slow down (2)

Z00L00K (682162) | more than 2 years ago | (#39041973)

And how can we trust the certificate authorities these days?

The current model is sensitive to security breaches at the CAs, and the general public has no control over which CA is playing under the covers with which government.

No math proof of its breakability (1)

Anonymous Coward | more than 2 years ago | (#39039753)

No mathematical proof has been published that 4096-bit RSA is broken, so it's supposed to be secure for some time: one week, or one month, or one year, or one century.

JCPM: crackers aren't fools; they pick the 100 richest people and play the lottery of breaking 2 of their 100 credit cards (VISAs).

Where is this finger pointing? (5, Insightful)

kanoalani (2515446) | more than 2 years ago | (#39039803)

I didn't find any discussion of what may have caused the lack of randomness. Presumably it was a particular implementation on a particular platform of RSA key generation and presumably they know what it is. I would be interested to know too.

Re:Where is this finger pointing? (0)

Anonymous Coward | more than 2 years ago | (#39039841)

Old debian keys?

Re:Where is this finger pointing? (1)

kungfuj35u5 (1331351) | more than 2 years ago | (#39039895)

Pretty sure this is the Debian bug where the relevant packages, for whatever reason, ended up using a static number for the RNG seed. There is an xkcd which pokes fun at this.

Probably not just debian... (1)

Junta (36770) | more than 2 years ago | (#39039973)

I've seen a *lot* of people take shortcuts like feeding in a well-known arbitrary piece of data as an entropy source in a script invoking SSL utilities. They will complain that '/dev/random is too slow (implicitly not realizing the urandom option)' or 'I wanted a script that would work exactly the same in all platforms and this happened to work'. Out of a plethora of better ways to do it they happen to pick the worst because they simply fail to understand the significance of the random source.

Re:Probably not just debian... (0)

Anonymous Coward | more than 2 years ago | (#39042323)

I'm curious about this. Apparently you spend hours looking over random sysadmins' shoulders until the rare occasion where they generate a SSL key. You do this so often that you notice a few of them are using hello.jpg instead of the recommended /dev/random. You must have a pretty enviable lifestyle, and I'd like to hear more about it.

/dev/urandom vs /dev/random? (1)

KlaymenDK (713149) | more than 2 years ago | (#39042837)

They will complain that '/dev/random is too slow (implicitly not realizing the urandom option)'

Please help me understand this bit. It sounds as if you would prefer urandom over random. I'm not skilled in randomness on Linux, so I checked Wikipedia (emphasis mine):

[...] A counterpart to /dev/random is /dev/urandom ("unlocked"/non-blocking random source) which reuses the internal pool to produce more pseudo-random bits. This means that the call will not block, but the output may contain less entropy than the corresponding read from /dev/random. While [/dev/urandom] is still intended as a pseudorandom number generator suitable for most cryptographic purposes, it is not recommended for the generation of long-term cryptographic keys. [...]

That, to me, sounds as if one should not use urandom if random is at all feasible.

So what's the deal here?

Re:/dev/urandom vs /dev/random? (4, Informative)

Junta (36770) | more than 2 years ago | (#39044337)

The problem with /dev/random is that it frequently is not feasible. The amount of usable entropy in /dev/random is rather low considering some needs. The OpenSSL project itself defaults to urandom, in fact. Frequently seeding /dev/urandom with /dev/random is a compromise.
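In practice, the sane pattern on a modern system is to read the kernel CSPRNG and never block; for example, in Python:

    import os

    # os.urandom() reads the kernel CSPRNG (/dev/urandom on Linux); it never
    # blocks once the pool has been seeded at boot
    seed = os.urandom(32)      # 256 bits of seed material
    print(seed.hex())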

Re:/dev/urandom vs /dev/random? (2)

Man Eating Duck (534479) | more than 2 years ago | (#39045093)

Frequently seeding /dev/urandom with /dev/random is a compromise.

That is interesting, how often is "frequently"? Can you increase randomness significantly by seeding from urandom, say, every 100 reads from random? Does this question even make sense? :)

I tried to google for more details, but my search terms returned a lot of noise.

Re:Where is this finger pointing? (2)

viperidaenz (2515578) | more than 2 years ago | (#39040081)

Is that the one along the lines of
int rand() {
    return 3; // completely random seed, chosen by a roll of a dice
}

Re:Where is this finger pointing? (5, Funny)

hawguy (1600213) | more than 2 years ago | (#39040105)

Pretty sure this is the debian bug where the relevant packages for whatever reason decided to use a static number for the rng seed. There is an xkcd which pokes fun at this.

Oh good, it's not too late to post the obligatory XKCD link:

http://xkcd.com/221/ [xkcd.com]

Re:Where is this finger pointing? (3, Informative)

Anonymous Coward | more than 2 years ago | (#39040211)

First, if this were the Debian bug, I feel like they would have said so. I assume there is some other issue. I could be wrong though.

My understanding of the Debian bug was that the OpenSSL key generation code had a bug where /dev/random (or /dev/urandom, whichever it actually used) was being read incorrectly but in a way that happened to work. The line that read the random seed appeared to be dead code despite happening to accidentally read in the random seed, so the Debian maintainer for the package deleted the line. The randomizer was also seeded with the PID of the process, so there was still some randomness (i.e. the bug was not made obvious by the exact same key getting generated every time), but little enough that brute forcing all of the keys became trivial.

See this blog article [fsfe.org] for a description of the events. Another blog post linked to this bug report [openssl.org] in which the OpenSSL team claims the bug is in Valgrind/Purify, not in OpenSSL. I have not recently tried to read the code in detail, so I do not remember if there is actually a right way to fix the Valgrind/Purify warnings.

Re:Where is this finger pointing? (1)

Anonymous Coward | more than 2 years ago | (#39042093)

Actually, we do say that it is NOT the Debian bug. In the paper.

Re:Where is this finger pointing? (1)

compro01 (777531) | more than 2 years ago | (#39040297)

They didn't "decide" per se. What they did was read from uninitialized memory. Quick, simple, and cheap method to get a random value, and it works pretty much everywhere.

Then some helpful person noticed they were reading from an uninitialized variable and not understanding why, went ahead and "fixed" it and no one noticed for a few years.

Re:Where is this finger pointing? (1)

LordLucless (582312) | more than 2 years ago | (#39041541)

At the risk of taking a side-track - this is the perfect example of why I hate the concept of "self-documenting" code. A quick couple of words, indicating why the programmer was doing something weird, and this probably wouldn't have happened.

Re:Where is this finger pointing? (1)

AvitarX (172628) | more than 2 years ago | (#39042001)

I don't really know enough to comment, but it's /.

Couldn't a function name or some such be used to describe it?

Presumably that read is being done somehow, whether through an uninitialized variable, a function, a pointer, or whatnot (not knowing enough to comment); I would think it's totally reasonable to explain what's going on with a name like uninitializedForSakeOfRandomSeed.

Re:Where is this finger pointing? (1)

Mr. Slippery (47854) | more than 2 years ago | (#39044263)

What they did was read from uninitialized memory. Quick, simple, and cheap method to get a random value, and it works pretty much everywhere.

Um, no. Uninitialized memory is not random.

Re:Where is this finger pointing? (1)

19thNervousBreakdown (768619) | more than 2 years ago | (#39044813)

Uninitialized memory is not random and whoever thinks that shouldn't be allowed near a compiler until they not only understand that they're wrong, but why they're wrong.

In brief, when memory is released by a program, its contents aren't (usually) cleared. For anyone looking for random data, uninitialized memory is like a garbage bag full of old newspapers and post-its with passwords written on them. Not only is it just plain not random in that someone else might have a copy of the story you used for your "random" data, but there's private data in there (the password post-its), and a determined hacker might do some bag stuffing--fill as much RAM as possible with a predetermined pattern. If you manage to fill 80% of the computer's RAM (not even difficult to achieve, a 4GB computer would still have 819MB to run on), well, there's a pretty good chance you know what seed they used for the RNG.

You don't even need any special access to the computer to do that last one; get somebody to visit a huge page that's just your pattern--guess what's in their RAM now?

There are a ridiculous number of other ways that using uninitialized variables as a source of randomness is a terrible idea, these are just the first that came to mind.

Re:Where is this finger pointing? (0)

Anonymous Coward | more than 2 years ago | (#39040609)

No, we've made rather sure to publicly release massive blacklists of every such key, and people know how to duplicate the bug and generate the blacklist for any key size. It is standard procedure to weed out such keys.

Something else is afoot.

Re:Where is this finger pointing? (1)

micheas (231635) | more than 2 years ago | (#39040803)

Although Debian/Ubuntu-generated keys could be living on RHEL/FreeBSD/SuSE machines that might not be checking for the bad keys. Considering that at one point I would guesstimate 8% or so of keys were bad (based on Debian and its children's market share), the Debian disaster could still have some overhang.

But you're right that 0.2% seems a little high for the overhang.

Re:Where is this finger pointing? (-1, Offtopic)

dbIII (701233) | more than 2 years ago | (#39041907)

Could you please read the replies to your comment on "Ontario Teachers' Union Calls For Health-Related Classroom Wi-Fi Ban" and address the questions asked.

Re:Where is this finger pointing? (1)

fatphil (181876) | more than 2 years ago | (#39043281)

If you RTFA, you'll see that they deliberately exclude the Debian fuckups from their sample set.

Pluralization is your friend (-1)

Anonymous Coward | more than 2 years ago | (#39039893)

"are bad" is what you meant to say. Way to annoy those of us who like decent grammar, you insensitive clod.

Maybe... (0)

Anonymous Coward | more than 2 years ago | (#39039977)

Since this is /., I of course did not RTFA, but...
It should be noted that RSA's security depends on factoring a very large number m (~10^300 or more) which is the product of 2 prime numbers -- let's say a and b.
It should be obvious that not all choices for "a" and "b" are good ones. For example, if a=3, it's likely the very first thing an attacker would try when attempting to factor the number. In number theory there are various theorems for factoring when the factors of a number satisfy very special conditions (e.g. one factor is much smaller than the other, or whatever). I would guess that the article is simply an audit of existing crypto software to make sure that these conditions on a and b were checked before generating m. Sounds like not all of them correctly verified that the primes they were choosing were good ones.
I chalk this up to coding error (more precisely, a requirements omission), rather than a flaw in the actual cryptosystem.

This is pretty bad (3, Informative)

Omnifarious (11933) | more than 2 years ago | (#39040139)

It doesn't affect the security of RSA overall, but it strongly affects the security of certain keys, rendering them totally compromised.

Think about it. A flaw in random number generation may well result in several people independently picking the same factor for their public keys. Just run Euclid's GCD algorithm on all pairs of public keys, which is O(n^2 * m) where n is the number of keys and m is their average length. Poof, all the keys that managed to 'accidentally' share a factor with another one pop out with their factors, since an RSA modulus is just two big primes multiplied together. Game over for those keys.

Steps to exploit:

1. You scoop up all the public keys you can find. People generally publish them. They're public keys.
2. You run GCD on each pair.
3. You find they share a common factor and you win! Both keys are now completely and totally compromised. You know the secret key for both of them.
4. Or... you find their only common factor is 1. Oh, well, on to the next pair.

Re:This is pretty bad (4, Informative)

Omnifarious (11933) | more than 2 years ago | (#39040173)

As an addendum, this means that anything that was encrypted with this public key may now be decrypted. It also means that any signature it's ever made is now suspect as anybody who knew about this problem could've made that signature.

Of course, if someone is using a protocol that implements forward secrecy, just using the RSA key to sign a Diffie-Hellman exchange and then encrypting the communications with a block cipher under the resulting key, they might be safe. Of course, the same bug might result in predictable Diffie-Hellman keys too.

But any of those conversations may still have had a man-in-the-middle.

Re:This is pretty bad (1)

chihowa (366380) | more than 2 years ago | (#39040983)

It doesn't affect the security of RSA overall, but it strongly affects the security of certain keys, rendering them totally compromised.

...Poof, all the ones that managed to 'accidentally' share a factor with another one pop out with their factors since a public key is just two big prime numbers multiplied together.

Does this mean that every key generated has a chance of rendering a previously existing key totally compromised? If that's the case, RSA is actually broken. There are only so many prime numbers, so as more keys are created, more keys will potentially be compromised. Please, tell me I'm wrong (using a car analogy if possible).

Re:This is pretty bad (1)

WuphonsReach (684551) | more than 2 years ago | (#39041149)

Does this mean that every key generated has a chance of rendering a previously existing key totally compromised? If that's the case, RSA is actually broken. There are only so many prime numbers, so as more keys are created, more keys will potentially be compromised. Please, tell me I'm wrong (using a car analogy if possible).

Just plan on using larger RSA keys (instead of 1024 bit), use the 3072 or 3200 or 4096 options when generating your RSA keys.

From a cursory glance, the 0.2% number is only for RSA 1024. They indicate that RSA 2048 also suffers the same issue, but since the numeric space is much larger the odds go way down. Look for the "12720" number and you'll see the mentions of this being a 1024-bit key issue.

The other half of the issue is bad PRNGs weakening the generated keys. Moving up to a larger key size doesn't fix this issue, so that's the more worrisome side - but it can be fixed by better initialization and seeding of the PRNGs.

So unless you are guarding millions of dollars of secrets (maybe tens of millions), moving up to the larger key sizes (3000 bits or larger) is a very good idea. If you have stricter needs, then a code review of your PRNG and implementation is probably needed.

(And the general recommendation in the past few years is that it was time to move up to or past 2048 bit RSA anyway. The older 1024 bit RSA keys were very long in the tooth and could potentially be brute forced soon.)

Re:This is pretty bad (1)

Omnifarious (11933) | more than 2 years ago | (#39041235)

This is all because the random numbers were bad random numbers. Not very random. The chances of properly generated 1024-bit RSA keys colliding is extremely tiny. Much, much smaller than 0.2%.

Re:This is pretty bad (1)

russotto (537200) | more than 2 years ago | (#39041349)

Just plan on using larger RSA keys (instead of 1024 bit), use the 3072 or 3200 or 4096 options when generating your RSA keys.

From a cursory glance, the 0.2% number is only for RSA 1024. They indicate that RSA 2048 also suffers the same issue, but since the numeric space is much larger the odds go way down. Look for the "12720" number and you'll see the mentions of this being a 1024bit key issue.

No.

If your random number generator is bad, going from 1024 to 2048 doesn't get you better keys. It just gets you longer bad keys. Chances are if your RNG is messed up in such a way that it produces 512 bits of decidedly non-random data some of the time, it'll also produce 1024 bits of similarly non-random data. That's the nature of a PRNG, after all.

The 2048-bit moduli do seem safer, but since we don't know where these bad keys are coming from, there's no way to tell why. It has nothing to do with the inherent strength of 2048-bit keys, though.

Re:This is pretty bad (5, Informative)

Omnifarious (11933) | more than 2 years ago | (#39041223)

Does this mean that every key generated has a chance of rendering a previously existing key totally compromised? If that's the case, RSA is actually broken. There are only so many prime numbers, so as more keys are created, more keys will potentially be compromised. Please, tell me I'm wrong (using a car analogy if possible).

It does indeed mean this. But if the keys chosen are really and truly random, the chances of this ever happening are astronomically tiny.

But there are an infinite number of prime numbers. There's even a mathematical proof of this fact. :-)

More practically speaking, there are about x/ln(x) primes below x, so near x roughly one number in every ln(x) is prime. For numbers that are 1024 bits long, ln(2^1024) is about 710, so roughly one in every 710 of them is prime. This effectively means that the space of possible 1024-bit primes is about 2^1023 (the top bit must always be one) / 710, or roughly 2^1013. The chance that any two randomly generated picks from a space that size are exactly the same is extremely small. So small that you'd have to generate one such number for every proton or neutron in the solar system before the percentage chance even came close to being writable in non-exponent notation.

So, no, this doesn't break RSA, even though what you say is true.
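A quick back-of-the-envelope check of those figures in Python:

    import math

    bits = 1024
    gap = bits * math.log(2)                  # ln(2^1024): average prime spacing
    print(round(gap))                         # ~710, so ~1 in 710 numbers is prime
    print(int((bits - 1) - math.log2(gap)))   # ~1013: log2 of the number of
                                              # 1024-bit primes, 2^1023 / 710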

Re:This is pretty bad (1)

WillerZ (814133) | more than 2 years ago | (#39042727)

There is another consideration.

When generating a 1024-bit random number which you hope is prime, you take 1022 bits of random data and then put those bits in a 1-sandwich. For a 512-bit random probable prime you use 510 bits of random data. Having generated the candidate you then test whether it is very likely prime, and if not you repeatedly add (or subtract, as long as your algorithm is consistent) two and retest until the test says you've found a winner. Because prime numbers are not perfectly evenly distributed (if they were, we could find them very easily), there are sometimes large gaps between primes, and random generation using this method will (over many attempts) pick primes following unusually large inter-prime gaps much more commonly than primes following unusually small ones.

So far as I know, no one knows what the distribution of 512- and 1024-bit primes really looks like; but supposing there were only 2^30 very large gaps in the 1024-bit prime space, it would be very worthwhile looking for common factors in 2048-bit RSA public keys.
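A sketch of the search procedure being described, in Python (with a toy base-2 Fermat test standing in for a real primality test); note that any starting point inside a prime gap resolves to the prime at the top of that gap, which is exactly the bias in question:

    import random

    def probable_prime_sequential(bits):
        # the "1-sandwich": top bit set (right length), bottom bit set (odd),
        # bits-2 random bits in between; then scan upward by 2
        n = (1 << (bits - 1)) | (random.getrandbits(bits - 2) << 1) | 1
        while pow(2, n - 1, n) != 1:   # toy Fermat test; real code uses Miller-Rabin
            n += 2
        return n

    print(probable_prime_sequential(64))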

Re:This is pretty bad (0)

Anonymous Coward | more than 2 years ago | (#39043089)

Why wouldn't you use the initial random data to seed a SPRNG and keep picking random potential primes instead of this bizarre sequential method? Problem solved?

Re:This is pretty bad (0)

TheRealMindChild (743925) | more than 2 years ago | (#39041419)

Seriously, you are going to make the same post twice? http://it.slashdot.org/comments.pl?sid=2671717&cid=39040039 [slashdot.org]

Re:This is pretty bad (1)

fatphil (181876) | more than 2 years ago | (#39043473)

And his algorithm's completely unworkable too, as it's O(n^2) in the number of keys. Fortunately there exist O(n log n) approaches which would be a hundred thousand times quicker (and which Lenstra probably used).

Re:This is pretty bad (2)

fatphil (181876) | more than 2 years ago | (#39043315)

This is the third time you've posted your totally useless (or "naive" as Lenstra calls it in his paper) algorithm. Your technique would take 10 years (I trust Lenstra's estimate, I've not verified it myself) on his data set.

Re:This is pretty bad (1)

Omnifarious (11933) | more than 2 years ago | (#39044989)

I didn't read the paper, just guessed at the contents. And it doesn't surprise me that it's not optimal. I was posting it because it seemed there was a lot of disinformation that was being highly rated. I feel that what I posted was significantly less wrong, and gave people a better idea of the problem than the stuff I responded to.

I believe though that I got the basic idea of what Lenstra is up to.

I often find posts about cryptography extremely frustrating to read because of the complete lack of any basic understanding of what the articles say or of the consequences, despite the fact that it's often quite clear.

Re:This is pretty bad (0)

Anonymous Coward | more than 2 years ago | (#39045421)

It may be naive but the core idea is there. And while the O(n log n) technique is not disclosed in the paper to make things harder for script kiddies, it should be obvious to anyone with some common sense.

ultimate solution.. (0)

Anonymous Coward | more than 2 years ago | (#39040253)

One-time pad... just commit the other half of your encoded file to memory, and you're sweet!

so one in 500 keys is bad... (0)

Anonymous Coward | more than 2 years ago | (#39040499)

how hard would it be to modify the generation algorithm to perform the same tests they ran and redo the generation until a key passes the test?

Re:so one in 500 keys is bad... (0)

Anonymous Coward | more than 2 years ago | (#39040901)

The issue is not that a single key is bad; it's that a family of keys is related in a predictable way (undoubtedly due to one or more tools with a combined 0.2% market share generating them with bad randomness) -- each key alone is OK, there just shouldn't be a bunch alike. So you can't run the check on one key, you have to run it on a bunch.

If you use a correct algorithm with a correct RNG, there's a negligible chance of such a relationship with any other key, so there's no check needed anyway.

Can someone explain this to me? (1)

jonwil (467024) | more than 2 years ago | (#39040935)

From what I know about RSA, the security comes from having 2 large secret prime numbers (call them P and Q).

From reading this new paper, what these guys have been able to do is to take a pair of public keys and identify if one or both of the large secret prime numbers is the same between both keys. Am I reading things right, is this what they have found?

And just how dangerous is it in the real world? I would assume that unless you can find (using this new discovery) a public key where one or both of P and Q match a key you already have the private half of, you can't decrypt.

Re:Can someone explain this to me? (3, Informative)

YoungHack (36385) | more than 2 years ago | (#39041041)

The process of finding that one of P or Q matches actually tells you the value of that P or Q. That means both public keys can instantly be factored. Hence both private keys can be determined. You don't need prior knowledge of either private key, just the public keys.

Re:Can someone explain this to me? (1)

jonwil (467024) | more than 2 years ago | (#39041119)

So are these keys a sign of weaknesses in specific implementations of RSA key generation or could they have arisen by pure chance due to 2 random number generators picking the exact same number (or is it a combination of both)?

Re:Can someone explain this to me? (3, Informative)

russotto (537200) | more than 2 years ago | (#39041259)

So are these keys a sign of weaknesses in specific implementations of RSA key generation or could they have arisen by pure chance due to 2 random number generators picking the exact same number (or is it a combination of both)?

The primes for RSA-1024 should be 512 bits long. There are about 2^502 primes 512 bits long. By birthday-paradox statistics, we should expect a collision only after approximately 2^251 keys are generated, so the chance of an accidental collision is vastly smaller than 2 in 1000.

So no, it's not chance.

(all numbers extremely approximate)
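Those numbers check out; a quick check in Python:

    import math

    bits = 512
    # pi(x) ~ x / ln(x): approximate count of 512-bit primes
    count = 2**bits / math.log(2**bits) - 2**(bits - 1) / math.log(2**(bits - 1))
    print(round(math.log2(count)))              # ~502
    print(round(math.log2(math.sqrt(count))))   # ~251: birthday-bound draw count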

Re:Can someone explain this to me? (1)

WuphonsReach (684551) | more than 2 years ago | (#39041231)

And just how dangerous is it in the real world?

Probably dangerous enough that, if you were still using 1024-bit RSA keys, you should stop and move up to key sizes above 3000 bits (and make sure that your PRNG isn't broken).

But I'm pretty sure that 1024-bit RSA has been "not recommended" for at least a few years. The ssh-keygen in modern Linux systems uses a default of 2048 bits, and I think GPG has also used 2048 bits as the default for a while now. The OpenSSL documents on their website also recommend 2048 bits as the minimum.

The 0.2% number only applied to the 1024 bit keys that the researchers looked at. I didn't find anywhere that they gave numbers for 2048 bit keys.

Re:Can someone explain this to me? (1)

fatphil (181876) | more than 2 years ago | (#39043537)

The problem is entirely due to poor PRNGs (poorly seeded, more like). You get precisely no more security from using 3072-bit keys than from using 1024-bit keys. If the prime generation process is deterministic and reproducible due to bad seeding, then generating identical 1536-bit primes is exactly as likely as generating identical 512-bit primes.

Looking at their figures and charts, and assuming all other things were equal, I would expect about 300-350 2048-bit keys to have been found.

Re:Can someone explain this to me? (1)

betterunixthanunix (980855) | more than 2 years ago | (#39043311)

From reading this new paper, what these guys have been able to do is to take a pair of public keys and identify if one or both of the large secret prime numbers is the same between both keys. Am I reading things right, is this what they have found?

Correct.

I would assume that unless you can find (using this new discovery) a public key where one or both of P and Q match a key you already have the private half of, you cant decrypt.

Not correct; if you have two RSA keys with a common prime factor, you can use the GCD to determine what that common prime is (normally, the GCD would be 1, because the two moduli would have no common factors), and then simply divide by that prime to determine both of the secret keys. The keys with common prime factors are effectively worthless, and worse still, someone happening to generate one of the same primes that you generated will leak your secret key to the entire world.

Now, to be fair, DSA and ElGamal have a similar problem when it comes to nonces; see, for example, the Debian bug, the Sony bug, etc. The real lesson here is that good random number generation is vital to public key cryptography (but we knew that already) and that you should be extra cautious when generating your RSA keys (or when using your DSA and ElGamal keys).
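Putting the arithmetic together, a hypothetical sketch in Python of recovering a private key from a shared prime (e = 65537 is the usual public exponent; pow(e, -1, phi) needs Python 3.8+):

    from math import gcd

    def recover_private_exponent(n1, n2, e=65537):
        # if two RSA moduli share exactly one prime, factor n1 and return d
        p = gcd(n1, n2)
        if p == 1 or p == n1:
            return None                  # no usable shared factor
        q = n1 // p
        phi = (p - 1) * (q - 1)
        return pow(e, -1, phi)           # private exponent d = e^-1 mod phi(n1)

    # toy demo: both "moduli" share the prime 101
    print(recover_private_exponent(101 * 103, 101 * 107))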

Re:Can someone explain this to me? (1)

jonwil (467024) | more than 2 years ago | (#39043467)

OK, so it's not as bad as it sounds, because you need to find a public key where P, Q, or both match the key you want to attack.

E.g., it's not like someone is going to be able to take the signing key for the XBOX 360 game disks and magically crack it open, unless by some magic someone posted another public key out there that shares a prime with it, or Microsoft REALLY screwed up their cryptography.

Re:Can someone explain this to me? (1)

betterunixthanunix (980855) | more than 2 years ago | (#39043515)

On the other hand, China might not care which American corporations they can read the emails of; they might just take whatever they can get. For a targeted attack, this is going to be hard to exploit -- it is like trying to guess one of the primes in an RSA number.

Worse still, it may be the case that some particular software product or setup is causing these keys to be generated; this could make it easier to attack people using that software. Those weak keys might have been generated by a cloud service provider's VM images, or by a particular OS installer or software installation package, or by some common IT practice involving the management of large numbers of systems.

Self-signed certificates (1)

g1zmo (315166) | more than 2 years ago | (#39041269)

the number of certificates that had no reason to be trusted in the first place (being self signed)

I trust my self-signed certificate.

Re:Self-signed certificates (0)

Anonymous Coward | more than 2 years ago | (#39041611)

CA certificates are self-signed; that's why they're required to be installed in the browser before shipping.

I also trust my self-signed certs. The poster probably just worded it badly.

No reason to be trusted in the first place (2)

rdebath (884132) | more than 2 years ago | (#39042315)

"No reason to be trusted in the first place (being self signed)."

At first this struck me as wrong, mostly because all CA certificates are self-signed.

Then I found a question: "why don't the CAs sign each other's certificates?"
It's possible, they just never do it.
Is it perhaps that they don't trust each other?
If they don't trust each other, why should we trust them?

Re:No reason to be trusted in the first place (1)

TheLink (130905) | more than 2 years ago | (#39042435)

Huh, they do. That's WHY you should trust CA certs less.
See: http://groups.google.com/group/mozilla.dev.security.policy/browse_thread/thread/7ba51ca49de0f6cf [google.com]
Summary: At least one CNNIC CA cert (think Chinese Gov) is signed by Entrust. So by default most browsers that trust Entrust will also automatically trust CNNIC.

Which is not so good if one day the Chinese Gov decides to MITM you.

Re:No reason to be trusted in the first place (1)

rdebath (884132) | more than 2 years ago | (#39042479)

From your link, it wasn't a root certificate when it was signed, i.e., by the "in the browser" definition.

Basically you've given me proof that they could sign each other's certificates. But that they don't because they're untrustworthy.

Hmmm.

Re:No reason to be trusted in the first place (1)

TheLink (130905) | more than 2 years ago | (#39042637)

1) The complaint in the link stated that a CNNIC cert was in the Mozilla browser store, AND it was signed by Entrust.

2) I've given you proof that CAs _do_ sign each other's certificates. Whether they are included in the browser depends on the browser vendors and has nothing to do with whether the CAs trust each other or not. In this case the CNNIC cert was likely to have been included by Mozilla.

So I don't see how you can say that I've given you proof "that they could sign each other's certificates. But that they don't because they're untrustworthy." But you can keep sticking your head that deep in the sand if you want.

FWIW I use Certificate Patrol.

It does not matter the strength of your public key (4, Insightful)

KlaymenDK (713149) | more than 2 years ago | (#39042901)

It does not matter the strength of your public key if nobody knows to demand it.

THIS! This is the core problem! Everybody knows email, and most people know that you shouldn't share your password with others, but nooobody knows about proper signatures and how to work with them.

If each and every digital signature out there was useless, how much of our total bandwidth would be compromised?
The painful answer is, at most the percentage that is signed in the first place, which is a drop in the proverbial ocean.

Cory Doctorow has a statement about obscurity being a far bigger threat to authors than piracy, and I posit that an analogous statement holds for obscurity of security practices among the population at large and its effect on privacy/security.

It's hopeless to encrypt all your email unless your peers (including granny and junior) know how to process such email, and know to be suspicious of unsigned communications. If only some of the globally popular communications services would have the guts to enable, and indeed enforce, this. (Google and Facebook, I'm looking at you.) Yeah I know, they wouldn't be able to stay in *business* if they did that (which nicely highlights what, or rather who, the "product" really is).

Determine if a key is weak? (0)

Anonymous Coward | more than 2 years ago | (#39043269)

Is there currently an easy way to determine if a/our key(s) are weak in this way and should be replaced?

Are all those keys real? (1)

svick (1158077) | more than 2 years ago | (#39043303)

I know about at least one public key that was intentionally created very weak, that is, one that is easily crackable.

Why was it made? As part of a "hacking" competition.

I wonder how many of the keys they found to be bad were never used for anything serious, like the one I mentioned.

encryption is not identification (0)

Anonymous Coward | more than 2 years ago | (#39043345)

The whole issue of trust conferred by certificates is wrong. It assumes that because I have some certificate, I am who I claim to be, and further that I am trustworthy. A rogue agent can use the certificate without authority because they have access to it internally. It certainly doesn't imply that the agent is trustworthy. The whole model relies on people who have something to gain by deception certifying each other as trustworthy.

Another explanation (0)

Anonymous Coward | more than 2 years ago | (#39043357)

"I would like to float one possible explanation that involves a simple, but subtle way to attack computer security: using malware injection to subvert a computer system's random number generation process. All that is necessary is to insert code that intercepts calls to the system's random number generator and replaces the proper returned value with a number generated by an algorithm crafted by the attacker. This algorithm would be designed to have a relatively small state space, producing say only a few million or even billion possible results. These would appear random enough to simple inspection, but the attacker could test the limited number of possible keys generated using the cooked algorithm to find the correct key, thus defeating all security. Note that this attack does not require the infected computer transmit stolen keys, or logged passwords. Indeed the malware need not communicate with the outside world in any way, making detection very difficult.

I propose that the Lenstra research may have found a signature of this attack in operation. That is, the weakness in the substituted random number generator on infected machines may have caused the duplications." (from Diceware blog, with permission)
