
Linux RNG May Be Insecure After All

Unknown Lamer posted about 6 months ago | from the proofs-ahead dept.

Encryption

Okian Warrior writes "As a follow-up to Linus's opinion about people skeptical of the Linux random number generator, a new paper analyzes the robustness of /dev/urandom and /dev/random. From the paper: 'From a practical side, we also give a precise assessment of the security of the two Linux PRNGs, /dev/random and /dev/urandom. In particular, we show several attacks proving that these PRNGs are not robust according to our definition, and do not accumulate entropy properly. These attacks are due to the vulnerabilities of the entropy estimator and the internal mixing function of the Linux PRNGs. These attacks against the Linux PRNG show that it does not satisfy the "robustness" notion of security, but it remains unclear if these attacks lead to actual exploitable vulnerabilities in practice.'" Of course, you might not even be able to trust hardware RNGs. Rather than simply proving that the Linux PRNGs are not robust thanks to their run-time entropy estimator, the authors provide a new property for proving the robustness of the entropy accumulation stage of a PRNG, and offer an alternative PRNG model and proof that is both robust and more efficient than the current Linux PRNGs.

240 comments

Random Post Number (-1)

Anonymous Coward | about 6 months ago | (#45128385)

I can tell Slashdot is using Linux because I've got post number 2.

Random number generators are hard (2, Insightful)

moonwatcher2001 (2710261) | about 6 months ago | (#45128411)

Linus always says Linux is perfect. Linus can be wrong.

Re:Random number generators are hard (0)

TheloniousToady (3343045) | about 6 months ago | (#45128493)

Linus always says Linux is perfect. Linus can be wrong.

Clearly, Nomad should sterilize him. (And BTW, Nomad prefers that you call it the "GNU/Linux System".)

Re: Random number generators are hard (4, Informative)

Anonymous Coward | about 6 months ago | (#45129187)

Not all Linux systems use the GNU userspace.

many times a day he says Linux needs changes (5, Informative)

raymorris (2726007) | about 6 months ago | (#45128877)

Linus signs off on many changes every day. He does expect you to read the code before trying to change it. That was the problem before - someone put up a change.org petition that made it clear they had no idea how it worked.

Re:Random number generators are hard (5, Interesting)

WaywardGeek (1480513) | about 6 months ago | (#45129061)

No, RNGs are easy. Super easy. Just take a trustworthy source of noise, such as zener diode noise, and accumulate it with XOR operations. I built a 1/2 megabyte/second RNG that exposed a flaw in the Diehard RNG test suite. All it took was a 40 MHz 8-bit A/D conversion of amplified zener noise XORed into an 8-bit circular shift register. The Diehard tests took 10 megabytes of data and reported whether they found a problem. My data passed several times, so I ran the tests thousands of times and found that one test sometimes failed on my RNG data. It turns out the Diehard suite had a bug in that test. Sometimes the problem is in the test, not the hardware.
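
A minimal sketch of that accumulation step in Python, with a hypothetical read_adc_byte() standing in for the 40 MHz A/D (stubbed here with os.urandom, so this is illustrative only, not a real noise source):

    import os

    def read_adc_byte():
        # Hypothetical stand-in for one 8-bit sample of amplified zener noise.
        return os.urandom(1)[0]

    def rng_bytes(n):
        reg = 0  # the 8-bit circular shift register
        out = bytearray()
        for _ in range(n):
            reg = ((reg << 1) | (reg >> 7)) & 0xFF  # rotate left one bit
            reg ^= read_adc_byte()                  # XOR in a fresh noise sample
            out.append(reg)
        return bytes(out)

    print(rng_bytes(16).hex())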

Re: Random number generators are hard (5, Insightful)

Anonymous Coward | about 6 months ago | (#45129219)

No, RNGs are easy. Super easy. Just take a trustworthy source of noise

Therein lies the tricky part. Getting a trustworthy source of noise is harder than you may think. Especially when you're writing software with no control over the hardware it runs on.

Re:Random number generators are hard (4, Informative)

Anonymous Coward | about 6 months ago | (#45129301)

Good for you. That is still not a viable solution for generating cryptographic keys, IVs, salts, and so on. Two drawbacks with your idea:

1. Too slow. You need far more random data than a zener diode can generate. You could combine many of them, but then you need to combine them in the right way.

2. Unreliable. Zener diodes are easily affected by temperature, and you need to make sure that hardware flaws don't make them produce 1 more often than 0 (or the other way around).

This is why we need software RNGs. We take a good hardware-based seed from multiple sources (combined using SHA256 or something), and then use that to seed the CSPRNG (not just a PRNG). The CSPRNG then generates a very long stream of secure random data which can then be used.

I'm not too pleased about the design of Fortuna, but it seems like one of the better choices for how to combine HW input and generate an output stream.
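
A rough sketch of that seed-then-expand idea, with the hypothetical hardware sources stubbed out with os.urandom and a simple hash-counter expander standing in for a real CSPRNG such as Fortuna:

    import hashlib, os

    def gather_seed():
        source_a = os.urandom(32)  # stand-in for e.g. zener diode samples
        source_b = os.urandom(32)  # stand-in for e.g. interrupt timings
        return hashlib.sha256(source_a + source_b).digest()

    def expand(seed, n):
        # Hash-counter expansion: illustrative only, not a vetted CSPRNG.
        out = bytearray()
        counter = 0
        while len(out) < n:
            out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
            counter += 1
        return bytes(out[:n])

    print(expand(gather_seed(), 48).hex())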

Re:Dilbert RNG (1)

Anonymous Coward | about 6 months ago | (#45128459)

Came here for the "Nine Nine Nine Nine Nine Nine". Left happy.

Re:Dilbert RNG (0)

Anonymous Coward | about 6 months ago | (#45129383)

I do what I can..

Re:Dilbert RNG (4, Funny)

The MAZZTer (911996) | about 6 months ago | (#45128479)

Re:Dilbert RNG (-1)

Anonymous Coward | about 6 months ago | (#45128899)

I didn't even click on the link and knew it was some fag linking xkcd. It's not clever. It's not funny. Just the subject containing something about RNG, with a link under it and not even a short, useless, one-sentence post, shows the kind of unoriginal, uninspired idiot making the post.

It was funny to read when it came out. It's even funny when clicking on the Random button on the site and seeing it. It's NOT funny when someone links to it from a one-sentence post and thinks they're so fucking clever to have discovered xkcd.

You probably still use lmgtfy and think you're so damn clever.

It means in real life, you're an unoriginal hipster doofus.

Got anything to do with sanitizing inputs to a SQL database, etc.? Link to Bobby Tables. Got a nerd-project slow-ass turing machine? Like a minecraft logic circuit from redstone? Link to the one where it's some guy alone in the world making a computer out of rocks. Got a story about password security or encryption? Link to the one where they beat the password out of the guy with a wrench.

Fuck off. You're not clever.

Re:Dilbert RNG (5, Funny)

AlphaWoIf_HK (3042365) | about 6 months ago | (#45128945)

I didn't even click on the link and knew it was some fag linking xkcd.

Well, it is a link that leads to xkcd.com, so it's not exactly difficult to figure out that that's where the link leads.

Re:Dilbert RNG (0)

Anonymous Coward | about 6 months ago | (#45129039)

another copy-paste response, lame.

Re:Dilbert RNG (5, Insightful)

jmhobrien (2750125) | about 6 months ago | (#45129319)

I think you need to re-assess your attitude. Perhaps some people have not seen those links? Did you consider the possibility that the comment was not for you? Like rhetorical questions?

Lighten up or Fuck off. You are taking this way too seriously.

Re:Dilbert RNG (0)

Anonymous Coward | about 6 months ago | (#45129377)

http://xkcd.com/1/

Dupe! (5, Funny)

VanessaE (970834) | about 6 months ago | (#45128423)

.....wait! it's not what you think.

"a new paper analyzes the robustness of /dev/urandom and /dev/urandom."

So now we're putting the dupes together into the same summary? Jeez, can't we at least wait a few hours first?

Re:Dupe! (3, Funny)

Anonymous Coward | about 6 months ago | (#45128715)

Coincidence. They were chosen at random.

Re:Dupe! (1)

Anonymous Coward | about 6 months ago | (#45128759)

Slashdot's random summary-inconsistency generator is still pretty robust.

At what scope of time or size of output data? (4, Insightful)

fishnuts (414425) | about 6 months ago | (#45128427)

At what scope/scale of time or range of values does it really matter if a PRNG is robust?
If a PRNG seeded by a computer's interrupt count, process activity, and sampled I/O traffic (such as audio input, fan speed sensors, and keyboard/mouse input, which I believe is a common seeding method) is deemed sufficiently robust when polled only once a second, or for only 8 bits of resolution, exactly how much less robust does it get if you poll the PRNG, say, a million times per second, or in a tight loop? Does it get more or less robust when the PRNG is asked for a larger or smaller bit field?

Unless I'm mistaken, the point is moot when the only cost of having a sufficiently robust PRNG is to wait for more entropy to be provided by its seeds or to use a larger modulus for its output, both rather trivial in the practical world of computing.

Re:At what scope of time or size of output data? (-1)

Anonymous Coward | about 6 months ago | (#45128593)

Linux still sucks a dick and you're sucking it's nut sack. What the fuck does that make you, bitchtard?

Re:At what scope of time or size of output data? (0)

Anonymous Coward | about 6 months ago | (#45128969)

it's means it is. What the fuck does that make you, apostrotard?

Re:At what scope of time or size of output data? (4, Insightful)

jythie (914043) | about 6 months ago | (#45128713)

The headline is somewhat sensational. There is a pretty wide gulf between an abstract, rather arbitrary metric and a practical vulnerability. This is kinda the security equivalent of pixel peeping: a fun mathematical exercise at best and a pissing contest at worst, but ultimately not all that important.

Re:At what scope of time or size of output data? (2, Interesting)

Anonymous Coward | about 6 months ago | (#45128911)

Right up until someone figures out how to put those "ultimately not all that important" technical weaknesses to good use. And then suddenly every single deployed system might turn out to be vulnerable. Which might already be the case, but has been kept carefully secret for a rainy day.

Of course, usually it's a lot of "theoretical" shadow dancing, and given the nature of the field some things will indubitably remain unclear forever (unless another Snowden stands up, who just happens to give us a clear-enough answer), but there are an awful lot of people working exactly on finding this sort of entirely theoretical stuff and turning it into practical usability, while keeping their findings secret.

So, unfortunately, this sort of mathematical tin foil hattery is necessary, because all we know is that we're behind the curve (we only know what's been made public), not how far behind the curve, and we have the disadvantage of the cost of fixing a large installed base.

We more or less can't afford not to peep pixels, because we simply don't know how much is too much, and how much is too little. This despite a foul-mouthed "looking ahead is hard!" populist figure with a noted lack of technical innovation skill.

Re:At what scope of time or size of output data? (2)

fuzzyfuzzyfungus (1223518) | about 6 months ago | (#45128943)

The headline is somewhat sensational. There is a pretty wide gulf between an abstract, rather arbitrary metric and a practical vulnerability. This is kinda the security equivalent of pixel peeping: a fun mathematical exercise at best and a pissing contest at worst, but ultimately not all that important.

I am definitely not a statistician; but there may be applications other than crypto and security-related randomization that break (or, rather worse, provide beautifully plausible wrong results) in the presence of a flawed RNG.

Re:At what scope of time or size of output data? (5, Interesting)

steelfood (895457) | about 6 months ago | (#45129193)

Useless for you. But the NSA might disagree. The math is what keeps them at bay. If the math shows cracks, you can be certain the NSA has figured out some kind of exploit. Keep in mind that the NSA doesn't rely on just one technique, but can aggregate multiple data sources. So those interrupts that the RNG relies on can be tracked, and the resulting number can be narrowed to a searchable space. Keep in mind that 2^32, which is big by any human standard, is minuscule for a GPU.

Re:At what scope of time or size of output data? (5, Insightful)

jhol13 (1087781) | about 6 months ago | (#45129419)

Your attitude is exactly what is wrong with security. Quite a few people still use MD5 because "it is not that broken". Linus really should take a look at this new, provably better method and adopt it ASAP, not wait until it bites hard.

Re:At what scope of time or size of output data? (-1)

Anonymous Coward | about 6 months ago | (#45129535)

Your argument is very informed and persuasive. You surely know as much about cryptography and PRNGs as you know about the structure of the Linux kernel dev group (hint: Linus doesn't maintain the kernel PRNG; Theodore Ts'o does. You can find his opinion on this matter on Schneier's blog, linked elsewhere on this page).

They begin by making wrong assumptions about the Linux PRNG's behavior, and the attack involves pretty much direct access to feed the PRNG entropy streams controlled by the adversary. It is clearly comparable with the MD5 situation.

Re:At what scope of time or size of output data? (5, Interesting)

Anonymous Coward | about 6 months ago | (#45128955)

First of all, not all computers are PCs. A server running in a VM has no audio input, fan speed sensors, keyboard, mouse, or other similar devices that are good sources of entropy. A household router appliance running Linux not only has no audio input, fan, keyboard, or mouse -- it doesn't even have a clock it can use as a last-resort source of entropy.

Second, there are many services that require entropy during system startup. At that point there have been very few interrupts, no mouse or keyboard input yet, and some of the entropy sources may not even be initialized yet.

One problematic situation is initializing a household router. On startup it needs to generate random keys for its encryption, TCP sequence numbers, and so on. Without a clock, a disk, a fan, or any peripherals, the only good source of entropy it has is network traffic, and there hasn't been any yet. A router with very little traffic on its network may take ages to see enough packets to gather a decent amount of entropy.

dom

Re:At what scope of time or size of output data? (0)

Anonymous Coward | about 6 months ago | (#45129269)

"A household router appliance running Linux not only has no audio input, fan, keyboard, or mouse -- it doesn't even have a clock it can use as a last resort source of entropy."

It probably has a radio with signals of varying strengths and packet losses. It also probably has a multitude of routed, non-routed, and broadcast packets on its various network interfaces. And on top of that, it's connected to a global network of nodes with varying packet delivery times, where at any time it can ask for a multitude of continuously changing and at least partially stochastic metrics (e.g. exchange rates, the 4th word of news headlines, YouTube +1 counts, etc., etc.).

Re:At what scope of time or size of output data? (1)

Anonymous Coward | about 6 months ago | (#45129313)

But it's made by D-Link et al. and thus unlikely to put any of that in the entropy pool.

They're the sort who would use a user agent string for a backdoor so that they can reuse their web app code to do stuff.

Re:At what scope of time or size of output data? (3)

kscguru (551278) | about 6 months ago | (#45129471)

VMs do have good sources of entropy ... while a server indeed has no audio / fan / keyboard / mouse inputs (whether physical or virtual), a server most certainly does have a clock (several clocks: on x86, TSC + APIC + HPET). You can't use clock skew (as low-res clocks are implemented in terms of high-res clocks), but you can still use the absolute value at interrupt time (and servers get a lot of NIC interrupts) for a few bits of entropy. Time is a pretty good entropy source, even in VMs: non-jittery time is just too expensive to emulate; the only people who would try are the blue-pill hide-that-you-are-in-a-VM security researchers.

The real security concern with VMs is duplication ... if you clone a bunch of VMs that start with the same entropy pool and then generate an SSL cert after the clone, the certs will be easy to predict. (Feeling good about your EC2 instances, eh?) This isn't a new concern - cloning via Ghost has the same problem - but it's easier to get yourself into trouble with virtualization.

Re:At what scope of time or size of output data? (0)

Anonymous Coward | about 6 months ago | (#45129489)

Every scope, every range. It's supposed to be a CSPRNG. It isn't. So it needs replacing.

I'm not sure about their suggestion, offhand, but I'd suggest maybe talking to Schneier. Skein-based Fortuna, maybe?

urandom uber alles (1)

Anonymous Coward | about 6 months ago | (#45128433)

"....robustness of /dev/urandom and /dev/urandom"

i will fight to the death any one who says /dev/urandom is less/more robust than /dev/urandom

It's confirmed! (1)

Anonymous Coward | about 6 months ago | (#45128445)

"As a followup to Linus's opinion people skeptical of the Linux random number generator, a new paper analyzes the robustness of /dev/urandom and /dev/urandom .

I just checked /dev/urandom and /dev/urandom. The bitstreams are identical! Damn you, NSA!

Yawn (5, Interesting)

Anonymous Coward | about 6 months ago | (#45128453)

The output of a software RNG, aka PRNG (pseudorandom number generator), is completely determined by a seed. In other words, to a computer (or an attacker), what looks like a random sequence of numbers is no more random than, let's say,

(2002, 2004, 2006, 2008, 2010, 2012...)

However, the PRNG sequence is often sufficiently hashed up for many applications such as Monte Carlo simulations.

When it comes to secure applications such as cryptography and Internet gambling, things are different. Now a single PRNG sequence is pathetically vulnerable, and one needs to combine multiple PRNG sequences, using seeds that are somehow independently produced, to provide a combined stream that hopefully has acceptable security. But a COTS PC or phone doesn't allow developers to create an arbitrary stream of independent RNG seeds, so various latency tricks are used. In general, these tricks can be defeated by sufficient research, so a secure service often relies partly on "security through obscurity", i.e. not revealing the precise techniques for generating the seeds.
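
As a toy illustration of XOR-combining several independently seeded streams (Python's Mersenne Twister here, which is emphatically not cryptographically secure; the combined stream is only as unpredictable as the independence of its seeds):

    import functools, random

    def combined_bytes(seeds, n):
        gens = [random.Random(s) for s in seeds]  # one PRNG per seed
        out = bytearray()
        for _ in range(n):
            # XOR one byte from each generator into a single output byte.
            out.append(functools.reduce(lambda acc, g: acc ^ g.getrandbits(8), gens, 0))
        return bytes(out)

    print(combined_bytes([2002, 2004, 2006], 16).hex())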

This is hardly news. For real security you need specialized hardware devices.

Re:Yawn (4, Informative)

Anonymous Coward | about 6 months ago | (#45129031)

"For real security you need specialized hardware devices."

Indeed. And it's worth considering that the Raspberry Pi has a hardware RNG built in. Also, the tools to make it functional are in the latest updates to Raspbian...

Did I mention that an RPi B is actually cheaper than the least expensive dedicated HWRNG device (the Simtec Entropy Key - which is waaaayyyy backordered at the moment) - and about three times faster?

Re:Yawn (0)

Anonymous Coward | about 6 months ago | (#45129279)

Frankie!!! I told you to stop posting my pin codes to the Internet!!!

Re:Yawn (0)

Anonymous Coward | about 6 months ago | (#45129325)

PRNG != CSPRNG

It is deterministic, but not predictable. Also, while I definitely agree you need good hardware devices, you also need a CSPRNG (e.g. Fortuna) to make good use of the hardware; otherwise you are susceptible to hardware flaws that damage your security.

Unpossible (1)

Anonymous Coward | about 6 months ago | (#45128455)

"a new paper analyzes the robustness of /dev/urandom and /dev/urandom"

cool, they really should make random numbers redundant

next, we should make Slashdot moderators redundant.

and then, Slashdot articles redundant.

yay!

But But But (0)

Anonymous Coward | about 6 months ago | (#45128461)

It'z teh Linux!!!!!1111oneoneoneeleventyone!!!!!

Hear me out: Locally Generated Entropy Pool (4, Interesting)

Anonymous Coward | about 6 months ago | (#45128509)

So, with all the 'revelations' and discussion surrounding this and encryption over the past several weeks, I've been wondering if a local hardware-based entropy solution could be developed. By 'solution', I mean an actual piece of hardware that takes various static noise from your immediate area, ranging from 0-40+ kHz (or into MHz or greater?), both internal and external to the case, and with that noise builds a pool for what we use as /dev/random and /dev/urandom. Perhaps each user would decide what frequencies to use, with varying percentage contributions to the pool, etc., etc.

It just seems that with so much 'noise' going on around us in our everyday environments, we have an opportunity to use some of it as an entropy source. Is anyone doing this? Because it seems like a fairly obvious implementation.

Re:Hear me out: Locally Generated Entropy Pool (2)

mrchaotica (681592) | about 6 months ago | (#45128585)

I linked to LavaRnd in a reply to an earlier post, but at the risk of being redundant, I'll mention it again [lavarnd.org].

Re:Hear me out: Locally Generated Entropy Pool (1)

Anonymous Coward | about 6 months ago | (#45128807)

It's not a risk of being redundant. It's a certainty. Unless you are from England. Then what you have done is not redundant, it was useful though repeated.

Just sayin'

Re:Hear me out: Locally Generated Entropy Pool (1)

Aighearach (97333) | about 6 months ago | (#45128951)

Unless this abstract can be translated into an actual attack of some sort, the chances are that changes will mostly create new holes, not really improve security.

Re:Hear me out: Locally Generated Entropy Pool (3, Informative)

WaywardGeek (1480513) | about 6 months ago | (#45129113)

Yes. Hardware high-speed super-random number generators are trivial. I did it with amplified zener noise through an 8-bit 40 MHz A/D XORed onto an 8-bit ring shift register, generating 0.5 MB/second of random noise that the Diehard tests could not differentiate from truly random numbers. XOR that with an ARC4 stream, just in case there's some slight non-randomness, and you're good to go. This is not rocket science.
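
For reference, the whitening step is only a few lines. A sketch with a textbook RC4 (ARC4) keystream XORed over stand-in hardware bytes; the key and the hardware source here are placeholders:

    import os

    def rc4_keystream(key, n):
        S = list(range(256))
        j = 0
        for i in range(256):  # key-scheduling algorithm
            j = (j + S[i] + key[i % len(key)]) % 256
            S[i], S[j] = S[j], S[i]
        i = j = 0
        out = bytearray()
        for _ in range(n):  # pseudo-random generation algorithm
            i = (i + 1) % 256
            j = (j + S[i]) % 256
            S[i], S[j] = S[j], S[i]
            out.append(S[(S[i] + S[j]) % 256])
        return bytes(out)

    hw = os.urandom(16)  # stand-in for the raw zener/ADC bytes
    ks = rc4_keystream(b"placeholder key", len(hw))
    print(bytes(a ^ b for a, b in zip(hw, ks)).hex())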

Re:Hear me out: Locally Generated Entropy Pool (2, Insightful)

Anonymous Coward | about 6 months ago | (#45129515)

RC4 is cracked, and a 120 MHz radio signal can bias your generator.

(That wiped the smile off your face, didn't it!)

Two unstable oscillators at different frequencies work well, with an appropriate whitener and shielding. You need to determine when the measurable entropy of the raw stream falls below a certain level and cease output. Use that to seed a pure CSPRNG - the Intel generator uses this construct with DRBG_AES_CTR (which seems to be OK, if you trust AES, unlike the harebrained and obviously backdoored DRBG_Dual_EC). You also need a mechanism for software to read the pre-whitened seed so it can estimate the entropy, or so you can replace the CSPRNG with another (like Fortuna).

So write a better one (1)

Gothmolly (148874) | about 6 months ago | (#45128535)

These guys had the time, brains and resources available to do a full breakdown of how /dev/random might not be so random.... why not go all the way, submit a patch and fix it?

Re:So write a better one (-1)

Anonymous Coward | about 6 months ago | (#45128787)

Because they work for Microsoft.

Re:So write a better one (0)

Frosty Piss (770223) | about 6 months ago | (#45128797)

Hey, if *LINUS* says the RNG is fine and all you idiots know NOTHING, hey, that's good enough for me.

Re:So write a better one (1)

interval1066 (668936) | about 6 months ago | (#45128983)

It might have been a scholarly F-U to Linus; if so, why would they fix it?

Re:So write a better one (0)

Anonymous Coward | about 6 months ago | (#45129303)

It might have been just business as usual with a click-bait title slapped on by editors.

A lot of research papers go like this: "Here's a problem we've thought up, here's a formal definition, here's a method to identify it, here's the application of this method to existing implementations showing how awful they are, and (optionally) here's our take on the problem and a benchmark on our arbitrary criterion against those implementations, showing how everyone could fix it if they'd just listen to us. PS: We didn't research how practical the requirements for a successful exploit are, or how practical our proposed implementation is, but dang are our formulas neat!" They need to prove they did ground-breaking, novel, and important research, after all, don't they?

And then news sites go and write articles reading "Wereallgonnadie!!11 (If all the planets and stars align juuuust right)"

Re:So write a better one (5, Informative)

Z00L00K (682162) | about 6 months ago | (#45129537)

"Not so random" means that you can mathematically calculate how likely it is that you can predict the next number over a long time. If you can predict the next number with an accuracy of 1 in 250 while the random generator provides 1 in 1000 then the random generator isn't that random.

Many random generators pick the previous value as the seed for the next value, but that is definitely predictable. Introduce some additional factors into the equation and you lower the predictability. One real problem with random generators that use the previous value as a seed without adding a second factor is that they can't generate the same number twice or three times in a row (which actually is allowed under the rules of randomness).

It's a completely different thing to create a true random number. For a 32-bit number you essentially need one generator source for each bit that doesn't care how the bit was set previously. It is a bit tricky to create that in a computer in a way that also allows fast access to a large number of random numbers and prevents others from reading the same random number as you do.

For computers it's more a question of "good enough": making prediction an infeasible attack vector.
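
A toy example of the "previous value as seed" pattern described above: once an attacker observes any single output of this LCG, every later output follows, and it can never emit the same value twice in a row:

    # Toy linear congruential generator: each output seeds the next.
    def lcg(x):
        return (1103515245 * x + 12345) % 2**31  # glibc-style constants

    x = 42
    outputs = []
    for _ in range(5):
        x = lcg(x)
        outputs.append(x)
    print(outputs)
    # Observing one output is enough to predict the rest:
    print(lcg(outputs[0]) == outputs[1])  # True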

I knew the day would come (2)

OhANameWhatName (2688401) | about 6 months ago | (#45128549)

From a practical side, we also give a precise assessment of the security of the two Linux PRNGs, /dev/random and /dev/urandom

The internet must have finally run out of porn.

It's Not That Hard, Folks (-1)

Anonymous Coward | about 6 months ago | (#45128581)

Pick two cryptographically secure hashes, like Skein and Keccak. Seed them (and periodically re-seed them with entropy as you go, just like /dev/urandom does), and then on each hash cycle, feed half of each hash's output back into its input and XOR the other half with the output half of the other hash to form the CSPRNG stream. It's functionally equivalent to XORing the output of two /dev/urandoms, each based on a different hash.
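
A sketch of that construction, with SHA-512 and SHA3-512 (the standardized form of Keccak) standing in for Skein and Keccak, and the periodic reseeding omitted for brevity:

    import hashlib, os

    state1 = os.urandom(32)  # would come from the entropy pool in practice
    state2 = os.urandom(32)

    def next_block():
        global state1, state2
        d1 = hashlib.sha512(state1).digest()
        d2 = hashlib.sha3_512(state2).digest()
        state1, out1 = d1[:32], d1[32:]  # half back into state, half to output
        state2, out2 = d2[:32], d2[32:]
        return bytes(a ^ b for a, b in zip(out1, out2))  # XOR the output halves

    print(next_block().hex())
    print(next_block().hex())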

Re:It's Not That Hard, Folks (2)

queazocotal (915608) | about 6 months ago | (#45128637)

That's not quite the point.
The two random devices are there not to be alternate sources of randomness: one is meant to be a good source of provably random numbers gained from hardware randomness mixed into the entropy pool, and the other a source of cryptographically random numbers seeded from the first - but with perhaps orders of magnitude less entropy per output bit compared to input bits.
The first source will stop outputting when it runs low on entropy.

Incorrect and irresponsible headline (5, Interesting)

Anonymous Coward | about 6 months ago | (#45128605)

I swear, if I worked for the NSA I'd be pushing out headlines like this to make people ignore real security issues...

The article is a highly academic piece that analyzes the security of the Linux RNG against a bizarre and probably pointless criterion: an attacker's ability to predict the future output of the RNG, assuming he knows the entire state of your memory at arbitrary, attacker-selected points in time and can add inputs to the RNG. Their analysis that the Linux RNG is insecure under this (rather contrived) model rests on an _incorrect_ assumption: that Linux stops adding to the entropy pool when the estimator concludes that the pool is full. Instead they offer the laughable suggestion of using AES in counter mode as a "provably secure" alternative.

(presumably they couldn't get a paper published that said "don't stop adding entropy just because you think the pool is at maximum entropy", either because it was too obviously good a solution or because their reviewers might have noticed that Linux already does that)
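
For what it's worth, an AES-CTR generator of the kind the paper proposes is short. A sketch using the third-party 'cryptography' package; the key and nonce here are illustrative stand-ins for properly gathered seed material:

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(32)    # stand-in: would be derived from the entropy pool
    nonce = os.urandom(16)  # initial counter block for CTR mode

    _enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()

    def ctr_stream(n):
        # Keystream = AES encryption of an incrementing counter block,
        # obtained here by encrypting zero bytes; calls continue the stream.
        return _enc.update(b"\x00" * n)

    print(ctr_stream(32).hex())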

Re:Incorrect and irresponsible headline (5, Informative)

FrangoAssado (561740) | about 6 months ago | (#45128861)

Their analysis that the Linux RNG is insecure under this (rather contrived) model rests on an _incorrect_ assumption: that Linux stops adding to the entropy pool when the estimator concludes that the pool is full.

Exactly. The maintainer of the /dev/random driver explained this and a lot more about this paper here [ycombinator.com].

Re:Incorrect and irresponsible headline (0)

Anonymous Coward | about 6 months ago | (#45129475)

So can you put in a different estimation value, or make it gather 10x, 100x, or 1000x more entropy than the default?

Re:Incorrect and irresponsible headline (0)

AHuxley (892839) | about 6 months ago | (#45128939)

The NSA and GCHQ did all they could to stay out of the press from the 1960s-80s. In the 1990s they thought they had the best cover ever: the internet is just too "big", and crypto offered for export too complex and open.
Add in law reform, projected data storage costs, and wise sock puppets - everything was looking great for ongoing domestic surveillance infrastructure.
The political leadership, color of law, telcos, software brands, press, academics, hardware brands, and lawyers were all tamed as needed.
The problem with the headlines is the lack of fear from the public at this point in time.
People are LOLing, interested, creative, getting involved, rethinking their education, thinking about code, looking back at their pasts for that moment of total crypto failure.
An East German view would be: coders are out in front of a church with lines of source code, talking, compiling, sharing, fixing, and educating in public.
They can see the cameras, and know they will be in a domestic database.
They don't care about academic 'trouble', no-fly lists, or that any one of them could be a corporate or government "agent provocateur" - they are getting on with fixing code and examining hardware, line by line this time.
See the reaction of the AC sockpuppets vs. years ago. The trance of "citation needed" no longer works.
The AC sockpuppets have to show their true traits: personal attacks, anger, the right to rule, and the classic "I saw something" and now I want a police state.

Re:Incorrect and irresponsible headline (2)

Aighearach (97333) | about 6 months ago | (#45128971)

Yeah, because random number generators are like pr0n for the masses, and will distract them from boring stuff like the government monitoring and storing that really stupid thing they said about their boss, right next to their cybersex session with their mistress.

Whatever happened to (4, Interesting)

mark_reh (2015546) | about 6 months ago | (#45128629)

zener diodes biased into avalanche mode to generate random noise? I don't think even the NSA has figured out how to hack the laws of thermodynamics.

Re:Whatever happened to (0)

sanitycrumbling (956413) | about 6 months ago | (#45128723)

So, as somebody who has no idea what that means (sorry) - how can that be truly random? Aren't there some laws of physics that would govern how it would work?

Re:Whatever happened to (2)

Nemyst (1383049) | about 6 months ago | (#45128851)

Laws don't mean you will know for sure how a measurement will turn out. One of the fundamental tenets of quantum mechanics is that you cannot determine in advance in which state a particle will be. If it is in a superposition of 0 and 1, for instance, there will be a likelihood for it to, once measured, be 0 or 1, and that likelihood may be biased, but you cannot know any of this unless you created the bias in the first place. Even then, even if you know the exact likelihood, unless the particle is entirely, 100% in one state, you cannot predict which state it will be in when you measure it. Despite this randomness, quantum mechanics has many clear laws, and some of these define the randomness, like Heisenberg's uncertainty principle.

Now, of course, using quantum effects to produce random numbers is a bit impractical. The GP's suggestion might be more practical; I don't know that particular method, but the general point still stands.

Re:Whatever happened to (3, Informative)

OneAhead (1495535) | about 6 months ago | (#45129011)

There are quantum effects involved in this process [wikipedia.org]. Quantum effects (more specifically wave function collapse [wikipedia.org]) are thought to be a source of true, inherent and perfectly unpredictable randomness. Throw that into a massive (from an atomistic point of view) chaotic system and you get a gigantic mess that is impossible to simulate with sufficient precision to predict the noise that comes out (and far, far beyond our computational means even if you don't care about precision).

I would say "read more about it here [wikipedia.org], but a lot of what's written there is inaccurate. Both resistor noise and avalanche noise have important quantum nature and classifying them under "physical phenomena without quantum-random properties" is factually incorrect. The second comment by user "agr" in the discussion [wikipedia.org] nails it. Pretty much anything involving electrons is quantum at the microscopic level.

Re:Whatever happened to (4, Insightful)

mark_reh (2015546) | about 6 months ago | (#45129019)

Avalanche diodes conduct bursts of current at random times. A true random number generator simply measures the time between those bursts of current, then scales that value to whatever numerical range you need.

You can also time the clicks produced by a Geiger-Mueller tube detecting beta radiation from a radioactive source, but that requires hardware that is a lot more difficult to integrate.

Even if you base the final random number on a truly random source, you have to ensure that the scaling routine doesn't introduce any sort of bias into the final value. THAT is the tricky part.
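
One classic answer to that tricky part is Von Neumann debiasing, which trades throughput for unbiased output. A sketch, with a simulated biased source standing in for the diode timing bits:

    import random

    def biased_bits(n, p_one=0.7):
        # Stand-in for raw, biased-but-independent bits from an avalanche diode.
        return [1 if random.random() < p_one else 0 for _ in range(n)]

    def von_neumann(bits):
        # Examine bit pairs: 01 -> 0, 10 -> 1, discard 00 and 11.
        return [a for a, b in zip(bits[::2], bits[1::2]) if a != b]

    raw = biased_bits(10000)
    clean = von_neumann(raw)
    print(sum(raw) / len(raw), sum(clean) / len(clean))  # ~0.7 vs. ~0.5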

Re:Whatever happened to (1)

Z00L00K (682162) | about 6 months ago | (#45129563)

Use one GM tube for each bit, and let each detected decay toggle that bit on or off. To be really sure, use a different radioactive sample for each GM tube, isolated from the others.

If you feel unsure - use two different isotopes, double the number of bits, then XOR the halves together and use the result as the random number.

Scale up to the number of bits you need.

If that's not random enough, I don't know what would be considered random enough.

To know a god's mind: First become your own god... (1)

VortexCortex (1117377) | about 6 months ago | (#45129069)

Yes, there are apparently laws that approximate how it works; however, to know the outputs one would need information about all the energy field states that make up the device's matter (and dark matter) at a given Planck time.

In the past, when I've needed a random number generator, I've used a single infrared LED and IR photodiode connected to a serial port. Poll the time it takes a photon to build up, be emitted, and be detected, take the differences between multiple such measurements, and use the lowest bits as the random values. It's a lot slower, but it's basically the same principle -- using quantum uncertainty, by observing quantum phenomena, to generate randomness. In a real pinch, I just have the user move their mouse around like a loon...

Even if we had perfect equations to predict physical phenomena (we do not), we'd still need to know the initial state of the Universe and have 13.7 billion years of clock time (running as fast as absolutely possible) in a separate isolated universe to predict the random numbers with certainty... Any transfer of information between the isolated universe and our own would screw up the future calculations of randomness, due to the cascade of information entanglement.

Predicting the random numbers would be like trying to figure out what hash to include within this message right here [cf372106a91cd957d5a1046b57534485ddbdd4c0] so that when the message is hashed, the result matches that prior value. You could run the hash and insert it, but that would change the message's hash -- true, you could brute-force the hash, but the message is the entire Universe.

Let's see, we'll try everything before this paragraph using all zeros first, and see if that's the matching hash... nope. So we'll insert the value we just got and hash it again: [211c7d73a0c66355921ef0dfb99019a2cce71754]? Nope... this could take a while; maybe we should start counting up from 0000000000000000000000000000000000000000 toward ffffffffffffffffffffffffffffffffffffffff to keep track. This is a SHA-1 hash, so it'll take 2^159 iterations on average (or about 730,750,820,000,000,000,000,000,000,000,000,000,000,000,000,000 tries). You feeling lucky? Further, we could run through all the combinations and never get the message to be self-hashing, since the answer might not be in ASCII hex, or it might not exist in this message's problem space -- i.e., no solution may exist for the above universe. In other words: our tool / our assumptions about the universe's laws / our representation of reality might not accurately reflect the true reality that the hash (or the diodes) operates within, or our assumption that the problem is solvable may be wrong... So, you see, it's not perfectly random, but since we can't simulate the entire universe within itself, thermodynamics and quantum effects are random enough for our purposes -- otherwise all lotteries could be considered rigged.

Re:Whatever happened to (1)

steelfood (895457) | about 6 months ago | (#45129215)

Somebody's gotta implement it in hardware. Do you trust Intel or AMD? I don't. If I can run an OSS analyzer on it and the results come out positive, I might be convinced. But I'm not sure this feature even exists for any consumer chips.

Re:Whatever happened to (4, Interesting)

gman003 (1693318) | about 6 months ago | (#45129237)

Well, Intel and VIA have such things integrated into their processors now. Unfortunately, they (at least Intel - not sure how VIA's implementation works) decided to whiten the data in firmware - you run a certain instruction that gives you a "random" number, instead of just polling the diode. With all the current furor over the NSA stuff, many people are claiming that it *is* hacked.

The Article And Paper Titles Are Nonsense (1)

Anonymous Coward | about 6 months ago | (#45128693)

/dev/random and /dev/urandom are simply devices that are supposed to return a cryptographically secure, continuously reseeded pseudorandom number stream. /dev/random blocks when the entropy pool is depleted, while /dev/urandom continues to produce a pseudorandom pad using the same underlying secure hash algorithm. There are, and have always been, "problems" with how the entropy remaining in /dev/random's pool is calculated, insofar as it's the subjective call of whatever entropy provider is restocking the pool. Anyone who cares about that is using dedicated hardware random number generators. Everybody else uses /dev/urandom. The people who use /dev/random are usually those who don't realize that the only difference between it and /dev/urandom is that it blocks.

It's a waste of time to focus on quantifying the entropy metric that /dev/random uses to decide when it's time to block and wait for more entropy. The security of /dev/random and /dev/urandom comes from the hash being used to munge the entropy. Various hashes have been and are being used, including SHA-1 and, God forbid, the NSA's favorite, Dual EC DRBG. That's where any weakness in /dev/random and /dev/urandom springs from, not any of that theoretical shit.

If you want to know whether or not your application is getting exploitable correlation-possible randomness from /dev/urandom, examine your kernel's drivers/char/random.c and see which hash it's using.
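
Reading the devices themselves is trivial, e.g. in Python (Linux only; on kernels of this era /dev/random may block until the estimator is satisfied, while /dev/urandom never blocks):

    with open("/dev/urandom", "rb") as f:
        print("urandom:", f.read(16).hex())
    with open("/dev/random", "rb") as f:  # may block waiting for entropy
        print("random: ", f.read(16).hex())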

And what about other open source PRNGs? (0)

Anonymous Coward | about 6 months ago | (#45128705)

If the Linux one has questionable randomness, then what does that say about the PRNGs in other operating systems, be they open or otherwise?

This is my contention: encryption on the Internet isn't necessarily weak because of the algorithms but because of key selection.

Thus even if we have AES256, maybe the real strength is AES100 or less.

Linus = on NSA payroll? (2, Interesting)

Anonymous Coward | about 6 months ago | (#45128799)

He "confirmed" he'd been asked to backdoor linux, he never confirmed whether or not he agreed... :)

Concerned and wondering! (1)

EzInKy (115248) | about 6 months ago | (#45128805)

Can anyone provide a link to a site that reviewed the source code and vetted the security of all RNGs?

So just build your own (1)

Anonymous Coward | about 6 months ago | (#45128863)

It isn't hard, and it works best when used to continuously reseed the kernel's entropy pool.

https://www.binarymagi.com/drupal/node/18

Randomness is an illusion (0)

Anonymous Coward | about 6 months ago | (#45128979)

The sequence "1 2 3 4 5 6 7 8 ..." belongs to the set of randomly generated numbers. But there are so many fools out there who would exclude such a sequence from that set. Which would then make that resulting set of random numbers biased.

Haven't these people got anything better to do with their time?

This is only for recovery after state compromise (4, Informative)

gweihir (88907) | about 6 months ago | (#45129021)

If the CSPRNG state is not compromised, the Linux random generators are secure. In fact, the property required for robustness is quite strong: recovery from compromise even if entropy is only added slowly. For example, the OpenSSL generator also does not fulfill the property. Fortuna does, as it is explicitly designed for this scenario.

I also have to say that the paper is not well written, as the authors seem to believe that the more complicated the formalism, the better. This may also explain why there is no analysis of the practical impact: the authors seem not to understand what "practical" means and why it is important.

Achtung morons ! (0)

Anonymous Coward | about 6 months ago | (#45129033)

Only a fool believes in perfect security.

Insecure? (0)

Anonymous Coward | about 6 months ago | (#45129157)

Quick, someone validate RNG's usefulness before its feeling of self-worth is affected! I would, but I don't care.

Huh (-1, Flamebait)

Taantric (2587965) | about 6 months ago | (#45129293)

So I guess it is quite likely that the living avatar of all that is good and pure in the software development world might have sold out to The Man after all? Where is your God now, Linux fangirls?

Re:Huh (1)

Anonymous Coward | about 6 months ago | (#45129517)

It's good you posted this, because it just shows how desperate and ridiculous you are as a person that you'd rather cling to this sensationalist news article than do any research into the topic.

Now everyone knows who to ignore in the future.

Random enough? (2)

duke_cheetah2003 (862933) | about 6 months ago | (#45129381)

I think the only question on my mind is: what exactly is it deemed insecure for? Generating public/private key pairs? Doing encryption for SSL/TLS?

I've been around computers for a good number of years, and I know no computer can be truly random, but isn't there a point where we say, "It's random enough"? Is the OP saying Linux's RNG isn't "random enough"? And my question is: what isn't it random enough for?
