Ruby Bug: SecureRandom should try /dev/urandom first (ruby-lang.org)
161 points by JBiserkov on May 3, 2016 | 135 comments



This is sad and embarrassing for every party involved: the people leaving rude, entitled, hyperbolic comments in the thread, the Ruby developers who refuse to look into the best practice suggested by experts in the field, and the man page maintainer who refuses to update the man page in accordance with the same information.


Some of this is certainly my own fault (context: I'm the rude guy in the thread). Two of my comments were considered "rude": one early on, where I actually didn't even mean to be, and my last reply, which was basically a rage-quit.

But I put all the relevant information, both academic and engineering, into the thread to try to convince Ruby-core to change their opinion, and I replied to false assumptions and comments as best I could.

I'm also only human. This bug has been open for two years, and I've used SecureRandom extensively in the past, so this was a very frustrating experience for me and for all the commenters involved. I certainly do not have the most "diplomatic" approach (as a friend put it). I know that. But I'm not really sorry about that either; it's just who I am. People tell me I'm a nice guy IRL, but I can get obnoxious when people don't listen on severe security issues and always defer to upstream; I've been that way in quite a few projects and standards processes.

I'll work on that, promise ;)

Aaron


Every interaction I've had with the Ruby core team has involved rudeness on their part.

I understand that Japanese culture is really different, and that as a country they've had horrible things done to them over the last hundred years that are, to say the very least, inexcusable.

But just like any country, there are people with great people skills and people with no people skills. The Ruby core team lacks people skills. Whether that stems from cultural problems caused by the terrible things that happened there, I can't say, but it really does Ruby a disservice.

All they need to do is listen and consider, but they don't, really.


I don't think it's a culture thing at all. Japanese people are usually extremely polite and sincere. I also don't think this has anything to do with their history.

If you look at the replies I got from Ruby-core, some people would consider them rude as well; I'm constantly told I do not understand what I'm doing, and I've been in engineering for more than 12 years and into crypto for more than five (and have been reading cypherpunk lists since I was 15). I'm certainly not an academic cryptographer, nor among the best engineers in the field, but I think I know a fair bit about the topic by now. I've contributed to many security projects, academic publications, and standards processes -- this was certainly among the worst experiences I've had so far. (You'd think the IETF would be worse; no. Heated discussions all the time, but people stay focused and technical, listen to comments made by domain experts, et cetera, and act on them.)

The Ruby community even has its own acronym for being nice to other developers: MINASWAN (https://en.wikipedia.org/wiki/Yukihiro_Matsumoto). I'm puzzled by the outcome of this discussion, but am assured by other security engineers and cryptographers that bugs they opened were treated equally badly, and often ignored, even when they were undisclosed, serious security issues.

No idea. I'm not part of Ruby-core, nor Japanese. In my travels I've encountered many cultures and peoples, and the Japanese are amongst the most polite and friendly people I've met. Often very shy in that regard, like many Asians (this is indeed a culture thing, and certainly not a bad one). Some are xenophobic, but I wouldn't say they all are; that's just false. I've met so many open-minded Japanese that I'd never generalise in that regard.

Aaron


>I'm puzzled by the outcome of this discussion, but am assured by other security engineers and cryptographers that bugs they opened were treated equally badly, often ignored, even if they were non-disclosed, heavy security issues.

If you have an organization with a strict good-manners policy and problems arise, this kind of passive-aggressive behavior is exactly what you should expect. "Company policy" and PC rules can't make anyone a better person.

NOTE: This is not my opinion about Ruby-core team. I just want to point out that being polite does not mean that you interact well with others.


Insofar as Japanese culture has anything to do with the way these people replied to you, I suspect that it might be a combination of (1) deference to authority and (2) relative isolation from the Western CS scene.

Absolute deference to man pages and insistence on getting things fixed upstream are textbook examples of, well, following the textbook. And they won't accept blog posts and presentations as authoritative, because they are not familiar with the authors and presenters. Had they been even casual readers of HN, they wouldn't dismiss names like tptacek so easily. They simply have no idea who the heck he is, so they stick to TFM as they were taught to do.

So it's neither malice nor xenophobia. I think they're just following rules, and maybe a little annoyed that everyone is telling them to ignore their rules.


This debate has virtually nothing to do with me. If you want to put names to it, use Thomas Pornin and Daniel J. Bernstein.


Precisely.

"The Ruby community even has their own acronym for being nice to other developers: MINASWAN"

No, that is not true. That was coined, if I remember correctly, by the pickaxe.

Matz is nice, but how does that translate into accepting every suggestion out there, bad ones included? I don't understand that logic.

It's also not as if it is ... impossible to make suggestions to ruby core that are accepted?

Like hundreds of other people manage to? Why does the dude above fail?

Here is the issue tracker:

https://bugs.ruby-lang.org/projects/ruby-trunk/issues?set_fi...

You'll see a lot of issues assigned to matz, nobu, koichi, etc. I mean, they don't have 50 arms each and infinite time, so they have to prioritize what they work on.

"Some are xenophobic"

That is so totally rubbish.

Just go to the Japanese bboy scene. They are not xenophobic AT ALL.

https://www.youtube.com/watch?v=f5Y75Rjl6UU

They are people like YOU AND ME. Assuming that there is a huge, insurmountable cultural difference is just c-r-a-p.

Or do you think that every Japanese person loves video games? Or loves ninjas and samurai? Or knows karate?


I'd usually not even consider replying to such a post, as you clearly have neither read nor tried to understand my comment.

I replied to a post made above and expressed the deepest sympathy for the Japanese people. While I was born in central Europe, I'd rather not be a citizen of any nation; I spend more time abroad than in the country I was born in. Speaking of which: central Europe is currently seeing a huge resurgence of fascist ideologies and xenophobia over migrants from war-torn countries, something utterly inconceivable to most people able to read a history book, given Europe's not-so-distant past of genocide. Unfortunately, xenophobia and right-wing sentiment are something every democracy, and thus every nation, faces; Japan isn't exempt from that [0] [1] [2].

(I currently live in Asia and spend a lot of time in Arab countries; you may want to reconsider educating me on the subject with references to the bboy scene.)

Aaron

[0] https://en.wikipedia.org/wiki/Category:Far-right_politics_in...

[1] https://en.wikipedia.org/wiki/Political_extremism_in_Japan#R...

[2] https://en.wikipedia.org/wiki/Uyoku_dantai


[flagged]


As a language, Ruby is awesome. As a host for cryptographic applications, Ruby is hobbled by a very serious error that they refuse to correct.


Yeah, I always liked Ruby's implicitness and style of writing code. For crypto, though, I'd prefer Sage or Python's cryptography.io framework (with a lot of other good options for performance-critical code, of course).


Agreed that tensions are running high. But I think the Ruby devs are in the right here. They are just following the man page. What else is the authoritative source?

How is one supposed to know who wrote those blog posts? Just because the site is linuxexpert.com does not mean its authors are Linux experts. It's funny: if people changed things at random based on blogs, people would claim it's some NSA conspiracy :-) How does one verify the person behind the blogs?

Node has a similar issue: https://github.com/nodejs/node/issues/5798


> They are just following the man page.

I think the Ruby devs' position is more than this. By keeping the faulty man page, the Linux maintainers are implicitly communicating that they intend `/dev/urandom` to be a limited and less recommended way of doing things. The intention is important: even though `/dev/urandom` is actually better in the current kernel, it may not be in a future one. Theirs is not the only answer, as other languages made a different choice, but it is perfectly reasonable to be conservative like this.

We badly need to change that intention, really.


Exactly what do you imagine Linux could do to make urandom less secure? I'm sorry, but the word for that concern is "nonsensical". Linus Torvalds has world-shattering conniption fits when developers make changes that hurt performance. Can you imagine breaking every one of the many security applications --- for instance, every Go program ever written --- that depend on urandom?

No, that is not a legitimate concern.


> What else is the authoritative source?

Basically everything else. If you look at how man pages are written, or just at how often they are out of date, you quickly arrive at the conclusion that they are not an authoritative source at all. Add in the political games played in this area. Being stubborn about a wrong man page is very frightening, especially in a security context. This thread shakes my belief in Ruby as a language.


The authoritative source in Linux is the code and always has been.


Code is useless if no one knows how to use it properly and its behavior is not communicated clearly. Users certainly can't be expected to read every line of code. That's like shipping a car with no user manual and saying "take the engine apart and see how it works." Arrogance.

Clearly document system behavior, or the code is essentially useless.


Users of languages can't be expected to. Therefore the language designers and maintainers themselves, especially if they're working on the stdlib, should do so, IMO. It's not only education for a proficient programmer; it helps to understand the underlying system you're building on and its security assumptions.

The random char device code isn't that hard to understand, and if you're not a strong C programmer (the Ruby-core people are good C programmers, I suppose), there's a paper explaining how it works: https://eprint.iacr.org/2012/251.pdf

Aaron


Then I have a little quiz for you:

According to this documentation:

http://ruby-doc.org/core-2.1.2/Float.html#method-i-round

How do you explain this?

  2.1.2 :011 > 15.round(-1)
   => 20


I think that's hardly the case. If Linux users were reading the source code, we wouldn't have such an embarrassing track record when it comes to security. Major security issues like Heartbleed existed in code for years.


Agreed. The problem is the man page. Suggesting that one should read the source code instead of the man page is ridiculous. Might as well write your own kernel while you're at it if that's the case.


So once again, the man page for urandom creates more problems than solutions. (https://bugzilla.kernel.org/show_bug.cgi?id=71211)


Obligatory "How To Safely Generate A Random Number" link[1] that I always wind up posting in threads like these. There's also the getrandom syscall[2] which uses the /dev/urandom pool.

[1] http://sockpuppet.org/blog/2014/02/25/safely-generate-random...

[2] http://man7.org/linux/man-pages/man2/getrandom.2.html
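For the Ruby side of this thread, the advice in [1] amounts to something like the sketch below (`urandom_bytes` is just an illustrative helper name, not an existing API):

  # Read n bytes straight from the kernel CSPRNG. Ruby's IO#read(n) already
  # retries short reads internally, so the loop is purely defensive.
  def urandom_bytes(n)
    File.open("/dev/urandom", "rb") do |f|
      buf = "".b
      buf << f.read(n - buf.bytesize) until buf.bytesize == n
      buf
    end
  end

  key = urandom_bytes(32) # e.g. a 256-bit session key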


Why are the man pages not clear about this, then?


I'm not sure. I think part of the problem is that Linux still uses entropy estimation. This is also one of the issues I have with the proposed replacement[1][2] for Linux's current CSPRNG implementation since it still retains entropy estimations.

[1] https://lwn.net/Articles/684568/

[2] https://news.ycombinator.com/item?id=11561340 (HN thread for [1])



> There's also the getrandom syscall[2] which uses the /dev/urandom pool.

Sadly, they just had to include GRND_RANDOM, and then compound it with GRND_NONBLOCK.


Why is there such a discrepancy between the prevailing sentiment on HN and the actions of whoever controls the manual? What is preventing one side from convincing the other, apart from stubbornness?

Also, why are the Ruby devs so dead set on the manual page?


It's not the prevailing sentiment "on HN". It's the prevailing sentiment among virtually all experts everywhere. I could make a list of those experts, but I'll leave it to someone else, so as not to bogart a lay-up, high-quality, upvote-magnet comment.

The reason this is a problem is simply that the Linux maintainer is obstinately wrong. Theodore Ts'o was on HN a year or so ago and, in defending the current design (and man pages), made some incoherent arguments, such as that urandom might be OK for nonces but not for key generation (a head-scratcher for anyone familiar with cryptography engineering).


Cryptographers really are part of the problem. Do you ever read the discussions on the metzdowd crypto list? Nobody agrees on anything.


The metzdowd list is a running joke among practitioners. The direct answer to your question is "no."


I stopped reading those 'cypherpunk'-epilogue lists a few years back. I'm still subscribed because of cross-postings. The amount of tinfoil-hattery is just unsustainable, and there's barely any interesting information for people working on real cryptography, systems, or engineering.


I didn't mean to dismiss the word of the experts. I only said "on HN" because that's where I'm familiar with your arguments, which I already put into practice by reading /dev/urandom when I need pseudorandom bytes.


I have no answer to the first question. Since urandom is periodically reseeded, I believe tptacek and the others; at least they provide arguments that justify their position. The other side just seems to stay silent.

As for "Also, why are the Ruby devs so dead set on the manual page?": because it makes sense. If I didn't know much about random number generation, I'd rely on the manuals and the most widely adopted software. That means: urandom's man page says it's not good for this usage, while OpenSSL is used by almost every system. If you used a dependency that documents that it does X, and someone tells you it actually does Y, but you have no ability to prove it either way, would you listen to the docs or to some random internet person?


    would you listen to the docs or to some random internet person?
I mean, given that scenario as you describe it, yes, obviously you go with the docs. But these aren't random internet people. These are the acknowledged experts in the field. If you are writing crypto-related code and aren't familiar with these people, then there is far more wrong here than just this bug.


I think the problem is that this person is not writing crypto-related code (or they don't think they are). They write an interface to N underlying interfaces providing RNGs, and don't necessarily understand how those work internally or the theory behind them. And that's kind of OK: they made it work correctly, but it could be better.

We don't expect a person writing a Ruby binding for sqlite to know the theory behind database indexes or who the acknowledged experts in that field are. Yet we seem to expect that from a person writing a binding for RNGs.


> We don't expect a person writing a Ruby binding for sqlite to know the theory behind database indexes or who the acknowledged experts in that field are.

We don't? More to the point, do we expect them to close a bug related to those things without educating themselves on the subject enough to make a rational decision on it?


I don't think it's reasonable to expect them to already know that. Whether they should educate themselves... that's up for discussion, I guess. The maintainer here effectively says: I read the docs and don't see a reason for a change; if the docs are wrong, prove it by changing them; if there's a bug in a dependency, handle the issue there.

It's not the best solution, but if I ignored everything I know about this issue, I'd say it's a reasonable maintainer's approach.


The probability that an average coder can understand a piece of documentation decays exponentially with its subject-matter expertise, in most cases. Furthermore, laying landmines and obscuring functionality is often a semi-rational, usually unconscious form of "job securitization", in both non-profit and for-profit settings: it reinforces in-group, elite hoarding of "arcane knowledge" and makes one seem "expert."

A startup founder often wants to work themselves out of a job to move on to more useful things; most people, though, will do all they can to insulate themselves into indispensability (a lot of technical people are super insecure, I'm probably one of them, and a lot of corporate gigs are cut-throat fiefdoms).


I think it would be entirely reasonable for the person maintaining a Ruby binding for sqlite to follow the sqlite documentation and close any bugs telling them to go against that documentation, yes.


It goes against the Linux kernel documentation, yes. As has been pointed out to them, it doesn't go against the documentation on OS X or any of the BSDs. But they engage in the same behavior on those systems.


Well, it's perfectly reasonable to take the intersection of capabilities rather than special-case each platform. AIUI the behaviour of /dev/urandom and /dev/random is the same on OS X and the BSDs, so there's no problem with their behaviour there.


Except they totally are writing crypto-related code. It's called SecureRandom for a reason. If I, as a user of Ruby, see that code, I have certain expectations of it. If the Ruby coder maintaining that code is not capable of doing the work required to satisfy those expectations, then, as I said, there is much more wrong than just this bug.

It's reasonable to make this mistake initially, sure. But when it's pointed out to you that the documentation is flat-out wrong for reasons outside your and others' control, and you are pointed at articles and research by the experts in this field, and you elect to ignore them, then the security flaws that result are very much on you, just as much as on the Linux man page maintainer.


> these people

Which people? Could you please name some of them? Honest question - the only name I can think of right now is tptacek's, but I'm sure there are more.


djb, Adam Langley, David Wagner, Thomas Pornin, among others.


Well, to be honest, this isn't the man page creating problems; this is the Ruby core team stubbornly refusing to fix a known problem. Using the man page as justification for their refusal is both stupid and weird. Who uses a Linux man page to determine their security?


Well, the man page has always been the source of information; the only better one is the kernel source. I think the majority would choose it over what some random people wrote on a forum on the Web (an average user doesn't know the experts in the crypto field by name).


I'd hope the Ruby core team is somewhat above average users, and even if not, they could easily verify the claims. They chose not to, purely out of stubbornness and an unwillingness to admit they were wrong.


I really don't like the sentiment of "We shouldn't change it unless the man page says we can". That's exactly the kind of senseless bureaucracy that the open source community should be avoiding.


The manpages for syscalls and device nodes are basically their API specifications. They tell you what behaviours can be guaranteed (i.e. what properties are a part of the interface contract) and what behaviours cannot (i.e. what properties are implementation details of the current implementation, and could change at any time.)

Just because the implementation of /dev/urandom has good properties that make it usable as a system-wide CSPRNG doesn't mean that it's supposed to have those good properties. Only the docs can make that assertion. Without such an assertion, the kernel is free to give /dev/urandom less-sound properties in a future release, because there's no spec saying it shouldn't. (They try not to break userland code, yes, but userland code only relies on things the docs say it can, so...)


> They try not to break userland code, yes, but userland code only relies on things the docs say it can, so...

Judging from the odd Torvalds-yelling-at-people mail getting linked on HN, it doesn't sound like "it's ok to break userland here because clearly they didn't read the docs" would fly.


The proposed change --- unify urandom/random and permanently lose blocking behavior --- can't possibly break userland. There is no way to write a program that depends on /dev/random blocking, because it does so at random.


I've seen you post that a few times, so two minor nitpicks.

In the mental model in which the /dev/random man page lives, it makes (some) sense to read /dev/random until you get a short read, then make a userland pool from all the data you've read, "to get as much entropy as possible". That reasoning doesn't actually make sense, of course, but I would be surprised if nobody ever thought such a construction was a good idea.

Separately, there's quite a lot of code - and I've written some - that reads a byte from /dev/random at daemon start, specifically to block until the system has built up enough entropy. FreeBSD blocks reads, even from /dev/urandom, until the [EDIT: estimate of how much entropy has ever gone into the pool] gets high enough, which makes some sense; but there needs to be some way to block-until-random.
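(In Ruby, a sketch of that start-up idiom, assuming the Linux device semantics described above:)

  # Block once at daemon start until the pool has built up entropy,
  # then do all real reads from the never-blocking /dev/urandom.
  File.open("/dev/random", "rb") { |f| f.read(1) }
  URANDOM = File.open("/dev/urandom", "rb")
  session_key = URANDOM.read(32)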


FreeBSD blocks until the generator is seeded. I think Linux urandom should do the same thing. But once it's seeded, there is no reason for the generator ever to block again.


Yes, that's what I meant. Edited for clarity, thanks!


On 11.x, random routinely unblocks before userland is even started, so from an application standpoint random really never blocks.


It would be nice if Linux man pages were on some level authoritative, but they aren't being maintained by the kernel developers and it's well known that they don't really live up to traditional Unix/BSD standards.


The part of the man page that can be considered a spec is the part that describes the behaviour, and the arguments in favor of always using /dev/urandom over /dev/random are based on the described blocking behaviour.

I'm fairly sure they would have to change the nature of /dev/urandom in such a way as to invalidate the other parts of the documentation in order to make the erroneous parts of the man page contain valid concerns.


Huh? So you want them to make code changes that go against the man page's recommendation? Why have man pages at all, then? Do you expect Ruby developers to dig into kernel source code and, more importantly, actually understand what is in there?

If you think they should take the word of 'experts', how does one qualify as an expert? How do you verify that the expert is really the one who wrote the blog post or mail, etc.?

The process is already there. Get the change into the kernel man page and user space will follow suit.


> Do you expect ruby developers to dig into kernel source code and more importantly actually understand what is in there?

Yes. Anyone maintaining and distributing the security related code of a language runtime should understand as much as humanly possible about what is actually going on from the application level all the way down to the hardware.


To make this discussion really useful: what's the easiest way for a Ruby application to make SecureRandom use /dev/urandom, without waiting for Ruby-core to be convinced?


  require "securerandom"

  module SecureRandom
    F = File.open("/dev/urandom")

    def self.random_bytes(n = nil)
      F.read(n || 16)
    end
  end
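With a patch like that loaded, the stock helpers all draw from /dev/urandom, since they are built on random_bytes (a quick sanity check, assuming the snippet above):

  SecureRandom.random_bytes(32).bytesize # => 32
  SecureRandom.hex(16)                   # hex/uuid/base64 wrap random_bytes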


Why not just make the argument default to 16? Is something non-obvious going on?


The caller may pass nil explicitly.


I don't understand. You may still pass nil explicitly if the argument defaults to 16?


Yes:

  def f(n = 16)
    n
  end

  p f      # => 16  (the default applies when the argument is omitted)
  p f(nil) # => nil (an explicit nil bypasses the default, hence `n || 16`)


Why haven't the urandom man pages been fixed yet? Is there, like, no process for fixing man pages at all? Do they just wind up set in stone forever?


If you look at the bug filed against the man page, it doesn't exactly make a strong case. It just suggests some weasel words about qualifying "large amount of data", etc. (https://bugzilla.kernel.org/show_bug.cgi?id=71211)

So the bug report understates the issue, nobody has gotten around to writing a good-quality patch, and everyone is just loudly complaining elsewhere.


Actually, I'd say that the issue filed two years ago _does_ make a strong case that the current man page is insufficient, and that it's actually the current man page that's full of ambiguous language and "weasel words".

But you may have a good point that perhaps nobody has submitted a good patch yet; you're right that the issue isn't a patch, just an invitation to a discussion toward one (an invitation that doesn't seem to have been taken up, at least on the issue tracker).

One would hope that the actual maintainers of the kernel subsystems would feel a responsibility to improve a clearly insufficient man page. Which makes me suspect there are underlying politics or personality conflicts going on. (Or just burnout?)

But if all it needs is someone to write some good text: is it really the case that none of the cryptographers who have been writing extensive blog posts about this for years care to submit a doc patch? If so, I wonder why.

Perhaps, just guessing, another potential issue is that nobody really wants to _take responsibility_ for such text, in case they make a mistake. So the existing, clearly insufficient text remains. A tragedy of the open-source commons?


> it's actually the current man page that's full of ambiguous language and "weasel words".

The report suggests "clarifying" what the man page means by the (completely incorrect) statement that "Users should be very economical in the amount of seed material that they read from /dev/urandom"

Probably the reporter was just being polite, but in the absence of other comments on the bug or any kernel developers weighing in, it just sounds like an editorial suggestion from a single Linux user. Remember that the man-pages project is separate from kernel development.


Yes, it reads to me like they were just being polite. The first couple of sentences make it clear they think /dev/urandom is the right tool for "daily tasks" and that the man page is misleading people into thinking otherwise.

They began with the most indisputable ways the man page is insufficient, as a way of starting the conversation -- which was never taken up.

Perhaps an actual patch would simply have been accepted? Maybe the problem is that the issue submitter assumed there was someone on the other end who understood the kernel features and was interested in discussing how to make the man pages better.


You know your language is bad when PHP -- possibly the most hated web scripting language on the planet -- does it better.

Come on Ruby, get your act together. You can do better, right?

Also, https://github.com/openssl/openssl/issues/898


"securerandom.rb will consume too much entoropy if /dev/urandom is used directry."

This is a quote for the ages.


Regardless, whether it be OpenSSL first or /dev/urandom first, "haveged" entropy generation is still needed for virtual machines, which are the most common use case. In my experience both ways are a no-go without special measures like haveged being put in place to supercharge the available entropy. I've had processes hang for MINUTES, or once for just under TWO HOURS, waiting for more entropy to become available.


No, it's not. Don't use haveged. Virtual machines should simply get their initial entropy from the already-seeded urandom pool of their hypervisor host. Processes don't "hang waiting for more entropy"; they hang because /dev/random inexplicably goes on strike. The answer to that problem is "never use /dev/random".


True, but even if we don't want to use /dev/random, there's still software using it all over the place that we don't necessarily want to patch.

I end up installing haveged just because I don't want the system mysteriously locking up because some random daemon wants to create a 4096-bit key on first startup.


Replacing random with urandom for one app is just one LD_PRELOAD away. Similar to https://rafalcieslak.wordpress.com/2013/04/02/dynamic-linker..., you can replace open("/dev/random") with open("/dev/urandom").


It's much easier and less error-prone to just replace the device node.


I'd rather go for a limited scope. But yes, one way or another, you don't have to suffer just because someone hardcoded /dev/random in the app.


That's one of the solutions, but not the only one. QEMU's solution is another virtio device: http://wiki.qemu.org/Features-Done/VirtIORNG. I expect Xen and others to follow soon.

Also, RDRAND can be executed in a VM, but I'm not sure whether it is handled properly yet.

And as tptacek mentioned: why did you use blocking random in the app?


Actually it's still kind of an issue. The QEMU virtio device reads only from the host's /dev/random device (not urandom), so you can still starve the hypervisor. Also the guest, even though it is properly seeded, can entropy-starve, because who knows what is installed on it. And clearly there's still confusion about using /dev/random.

I don't even think the solution is to point everything at /dev/urandom. Why maintain two devices? Why constantly have to explain the difference to people? The BSD developers merged both devices into /dev/urandom, and I think that's the right approach.


It's configurable. You can even forward the host's /dev/urandom as the guest's /dev/random.


Actually it's not supported with virtio-rng. Here[0] is a proposed patch to add support.

[0] http://www.redhat.com/archives/libvir-list/2016-March/msg010...


I knew you could change the source (it accepts EGD after all), but I didn't know they actually prevent you from choosing urandom. This is sad :(


That VirtIORNG documentation page is really sad, though, with all the warnings and quirks about the guest starving the host of entropy. The source end of that device should be fed by something like Fortuna, or attached directly to urandom if available.

The way this currently looks to me, it all but ensures poor adoption rates.

bhyve on FreeBSD has virtio_random, and it simply hooks up to /dev/random (which on FreeBSD is non-blocking and can't starve).


This whole situation has annoyed me enough to start writing this gem:

https://github.com/technion/use_urandom

Note I said "start", so I'm aware it needs work. Feedback is appreciated, however.


I'd suggest you just use the `rescue` definition from SecureRandom.

Then you don't have to do any work yourself.

    # This mirrors securerandom.rb's own fallback: Random.raw_seed pulls
    # bytes from the OS entropy source directly, bypassing OpenSSL.
    def SecureRandom.gen_random(n)
      ret = Random.raw_seed(n)
      unless ret
        raise NotImplementedError, "No random device"
      end
      unless ret.length == n
        raise NotImplementedError, "Unexpected partial read from random device: only #{ret.length} for #{n} bytes"
      end
      ret
    end


Is there a good random library that's not the giant ball of death that's OpenSSL?


Depends on why you want to use a random number generating library. What are your requirements, and specifically which of them aren't satisfied by the system's CSPRNG? Once you have that defined, you can start looking at which algorithms you want and, from there, which libraries implement them.

Edit: or to put it another way, first define what it is that Linux's getrandom() and /dev/urandom don't provide you.


This talk about C++ and why rand() is considered harmful is quite educational: https://channel9.msdn.com/Events/GoingNative/2013/rand-Consi...

What it points out is that even a good random number generator can be used incorrectly, and that without the right tools your efforts to produce truly random numbers are doomed from the start.

C++ has an embarrassing wealth of random number generators; the Ruby core has almost nothing that can measure up to that, which seems like a huge oversight.

I like that C++ has a generator for many different use cases; they all have their reason for being there. Ruby, by contrast, has a single one with unknown properties. Porting over what C++ has and making a proper Random library for Ruby would make a lot of sense here.


None of those C++ RNGs are suitable for cryptography. The answer for C++ CSPRNGs is, like in every other language, to use /dev/urandom. Multiple CSPRNGs just mean multiple single points of failure.


Not every random number generator has to be cryptographically secure. Sometimes they just need to be properly random.


ISAAC (http://burtleburtle.net/bob/rand/isaacafa.html) is one option for a CSPRNG. But what is your use case for something non-standard? Just use /dev/urandom.


ISAAC is a DRBG. It's about a third of a complete CSPRNG, and it's the easiest third to build. To use ISAAC securely on Linux, if for some reason you actually wanted to do that, you'd swap out the hash DRBG inside the kernel LRNG for ISAAC, and then continue using /dev/urandom.

Simply replacing the OpenSSL RNG or /dev/urandom in userland code with the ISAAC routines is likely to blow your app up.


How about backing the equivalent of /dev/urandom for a unikernel OS?


No, no, no, don't use ISAAC.


Do you have a citation you can provide? I don't know of any research that has shown a significant weakness in ISAAC.


Why would you use a PRNG with unknown cryptographic properties, not designed by a cryptographer, as opposed to one of NIST's DRBGs or a good stream cipher such as ChaCha?

Weakness: https://eprint.iacr.org/2006/438 — "huge subsets of internal states which induce a strongly non-uniform distribution in the 8192 first bits produced"

Finally, why is a deterministic PRNG being suggested as a replacement for OpenSSL's random number generator? In general, the advice to write your own userspace PRNG replacement for OpenSSL is not good advice, because many people are not competent enough to do it.


> as opposed to one of the NIST's DRBG

I certainly wouldn't go anywhere near another NIST DRBG..

> https://eprint.iacr.org/2006/438

From my brief understanding, Aumasson's paper uses a different seeding routine from the example provided in the C implementation, which allows the weaker states to be produced; indeed, it's mentioned on the author's website.

> Finally, why is deterministic PRNG suggested as a replacement for OpenSSL's random number generator? In general, the advice to write your own userspace PRNG replacement for OpenSSL is not a good advice, because many people are not competent enough to do it.

If you read my above post I clearly do not suggest this.


> I certainly wouldn't go anywhere near another NIST DRBG..

OK, this is an instant red-flag to me to get out of the conversation.


> OK, this is an instant red-flag to me to get out of the conversation.

Perhaps you are unfamiliar with this? https://en.wikipedia.org/wiki/Dual_EC_DRBG


I'm familiar with it. CTR DRBG, Hash DRBG, HMAC DRBG are all fairly solid designs.


I've used Mersenne Twister[0] in the past.

[0] http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/emt.html

If you're talking about an RNG library for cryptographic purposes, then it depends on your use case.


It does not depend on your use case. The answer, in all cases, is just to use /dev/urandom.


I had a rather specialized case where it was the pragmatic choice (note: not technically required): running a lottery with potentially litigious losers.

If you used a CSPRNG with a seed space smaller than the set of possible lottery outcomes, losers could argue (misleadingly, since we still couldn't feasibly bias the result) that not all outcomes were equally probable, and try to get the results thrown out. That is, the fact that there are widespread misconceptions about /dev/random can, very rarely, be a reason to use it :P

However, I agree that the rule is that you should just use /dev/urandom.


Why not use a hardware rng in that case? Seems a lot safer if you have to deal with litigious people.


We did, in a way. One of the sources used was random.org (uses radio receivers tuned to static from atmospheric noise: hardware RNG as a service). I also had less than 3 weeks to take it from proposal to production.

Combining two independent sources obtained by different people and using a cryptographic commitment scheme ensured that 1) no one person could fix the results or make it nonrandom (protection against Eddie Tipton-style attacks), 2) if at least one of the independent sources was random, the result would be.


Then what CSPRNG do you use? Any that have seeds larger than 256 bits?


Anything that reseeds during operation can qualify. In fact, if the CSPRNG's internal state isn't large enough, you need to reseed periodically or face the same objection.

But a CSPRNG which you need to explicitly seed with as many random bits as your output isn't providing much value (it's simply whitening), since generating the seed is the same problem you had before adding the CSPRNG. So you end up looking at a TRNG.


Are there not cases where blocking if there is insufficient entropy is the correct thing to do? Particularly if we're talking about important random numbers like long-lived private keys.



It must be possible for a computer to simply not have an adequate supply of randomness though, no? Sure, urandom will use hardware sources of randomness if they're available - but what if they're not?


No, this is a misconception. Once the LRNG is seeded, it can generate a more or less unbounded amount of high-quality random bytes from that seed. Entropy can't be "depleted".


Then how can a 4096-bit key be meaningfully safer than a 1024-bit key? Couldn't you just use the 1024-bit key to seed an RNG, generate a 4096-bit key from that, and have 4096-bit safety?

(Also what if the linux RNG isn't seeded e.g. in a VM scenario? Or is the answer just "don't do that"?)


mt19937 (or preferably SFMT¹) is a great library (and fine for cryptographic purposes if you have a sufficiently random entropy source with which to seed it, and you reseed it frequently from that source), but you still have to seed it somehow, which is where /dev/(u)random comes in. So it really doesn't solve anything.

--

¹ http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/SFMT/index.h...
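Incidentally, Ruby's built-in Random class is MT19937 under the hood, so the seeding question is visible right in the stdlib. A sketch (for non-cryptographic use, per the caveats above):

  # Fold 16 bytes of /dev/urandom into a 128-bit integer seed for MT19937.
  raw  = File.open("/dev/urandom", "rb") { |f| f.read(16) }
  seed = raw.unpack("Q>2").inject(0) { |acc, w| (acc << 64) | w }
  prng = Random.new(seed)
  prng.rand(10**6) # some non-security-critical random draw

(Calling Random.new with no argument effectively does the same thing for you via Random.new_seed.)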


Have you seen the PHP mt_rand cracker?

http://www.openwall.com/php_mt_seed/


That one needs manual seeding too, which may lead to more issues.


As others have said, it depends on your needs[6] and whether or not it has to be a CSPRNG (cryptographically secure). Since you mentioned OpenSSL, I'll assume that in this context we are talking about a CSPRNG.

The short answer is to just use the OS-provided one if available. Linux has /dev/urandom[1] and the getrandom[2] syscall, and Windows has RtlGenRandom[3][4] (there's also CryptGenRandom on Windows, which is the "official" version but is more awkward to use since it requires a CSP context). On Windows this uses AES-256 in CTR mode, as specified in NIST SP 800-90.

Outside of those (if for whatever reason[6] you're not satisfied with what the OS provides), you can also take a look at how BoringSSL does things[5] (and LibreSSL, although I'm less familiar with it). BoringSSL uses a ChaCha20 instance to filter rdrand output (if supported; otherwise it just uses the system CSPRNG directly). The system CSPRNG is used to key the ChaCha20 instance, and then for every call to RAND_bytes it takes rdrand output equal to however many bytes you requested and essentially filters it through the ChaCha20 instance. It's fast and pretty simple; I found the code easy to understand[5]. I believe LibreSSL does something similar with ChaCha20, although I'm not sure it uses rdrand.

I think there should be a pretty good reason if you're choosing not to use what the OS provides for you.

tl;dr Just use /dev/urandom (or the getrandom syscall) on Linux and RtlGenRandom on Windows.

[1] http://sockpuppet.org/blog/2014/02/25/safely-generate-random...

[2] http://man7.org/linux/man-pages/man2/getrandom.2.html

[3] https://msdn.microsoft.com/en-us/library/windows/desktop/aa3...

[4] https://boringssl.googlesource.com/boringssl/+/master/crypto... (RtlGenRandom requires slightly special definitions to work).

[5] https://boringssl.googlesource.com/boringssl/+/master/crypto...

[6] There really isn't a "depending on your use case" decision to make. /dev/urandom (or RtlGenRandom) is the correct choice in all cases.

With that being said, there may be two small caveats: whether or not it has to be a CSPRNG, and whether or not /dev/urandom (or RtlGenRandom) can provide the throughput you require. In certain cases you may have to expand /dev/urandom with something like ChaCha20 if it is too slow (the BoringSSL devs mentioned that for AES-CBC IV generation on servers this can sometimes be the case; see [5] for the BoringSSL implementation).
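For a rough idea of what that expansion trick looks like in Ruby (a sketch only, assuming your OpenSSL build exposes the "chacha20" EVP cipher, whose key is 32 bytes and IV is 16 bytes):

  require "openssl"

  # Key a ChaCha20 instance once from the OS CSPRNG, then use its keystream
  # (the encryption of zero bytes) as a fast local stream of random bytes.
  key, iv = File.open("/dev/urandom", "rb") { |f| [f.read(32), f.read(16)] }
  chacha = OpenSSL::Cipher.new("chacha20")
  chacha.encrypt
  chacha.key = key
  chacha.iv  = iv
  fast_random_bytes = chacha.update("\0" * 4096) # 4 KiB of output per call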


/dev/urandom is a shared resource across all processes, which implies locking/synchronization that can run you into scalability issues if you are trying to generate a large volume of random numbers in parallel on multiple cores.


> trying to generate a large volume of random numbers in parallel on multiple cores

You may be better off just using rdrand directly in this case, depending on the throughput required. AFAIK you should be able to saturate all logical threads generating random numbers with rdrand, and the DRNG (digital RNG) still won't run out of entropy.

Or, as I also suggested, look into expanding /dev/urandom output with a ChaCha20 instance like BoringSSL does (which also combines it with rdrand, since it's fast).


I'm surprised there's no talk of making reads from /dev/urandom go via a vDSO that grabs a seed for each process from the common pool and from then on runs the generator in the process's address space.


You really want a separate instance of the CSPRNG per thread, not per process.
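To illustrate the structure (only the structure: Ruby's built-in Random is MT19937, not a CSPRNG, so for cryptographic use each thread would hold something like its own ChaCha20 state seeded from /dev/urandom instead), a hypothetical per-thread generator in Ruby:

  # Hypothetical helper: one generator per thread, seeded once from the OS,
  # so subsequent draws need no cross-thread synchronization.
  def thread_rng
    Thread.current[:rng] ||= Random.new(Random.new_seed)
  end

  threads = 4.times.map do
    Thread.new { 1_000.times { thread_rng.rand } }
  end
  threads.each(&:join)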


You can also use the hardware RNG if supported (e.g. rdrand) for fork/VM-duplication safety. /dev/urandom should be fork-safe, though, so long as you're not buffering data from it.


JRuby uses Java's java.security.SecureRandom, which (by default on OpenJDK 8) uses /dev/random, mixed with its own SHA1PRNG (this is so that setSeed() can be guaranteed to actually do something).

Rubinius copies MRI circa 2014 via rubysl - https://github.com/rubysl/rubysl-securerandom


Does it really? From https://bugs.openjdk.java.net/browse/JDK-4705093 :

> If you call: new SecureRandom() on Linux and the default values are used, it will read from /dev/urandom and not block. (By default on Solaris, the PKCS11 SecureRandom is used, and also calls into /dev/urandom.)


I ran strace on both OpenJDK 7 and 8. It appears to open both /dev/random and /dev/urandom, but it only reads from /dev/urandom. From reading the source code, it appears that /dev/random is used if you call SecureRandom::generateSeed, but /dev/urandom is used for SecureRandom::next.

Of course, you can change a whole heck of a lot of this by setting properties on the command line or editing $JAVA_HOME/lib/security/java.security.


Yep, I shouldn't have second-guessed myself. That'll teach me to comment at 4am.

http://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/s...


I was recently told about a method for generating random numbers using something called "quadratic residues". There was too much math for me to understand it. Is anyone familiar with this?


[deleted/retracted]


Yes, it's just you.

The core Ruby team is mostly Japanese-speaking, and Japanese developers are less likely to speak English well than people in many other countries where English is not a common first language. I think that does serve as a barrier to communication and transparency (with and towards English speakers!). I think that's all there is; it's nothing "about Asian culture", or "decisions are made as a group rather than on an individual basis", or cliquishness.

We English speakers have the luxury of seeing almost all open source development happen in our first language. Most of the world doesn't. Ruby is an exception, with a mostly Japanese core team. That's just how it is.


Thanks for the enlightenment, I didn't know this aspect of the Ruby core team.


Yeah, I too find it hard to figure out exactly who the Ruby committers are, or what goes on amongst them. I believe this is simply because of that language barrier. From what I do know, the way MRI is developed is not at all atypical or unusual compared to most other large, primarily English-language open source projects.

Here's a useful interview from 2013 with 15 Ruby committers, for those curious to learn more; note that the questions and answers have been translated between English and Japanese. http://www.sitepoint.com/meet-fifteen-ruby-core-committers/

My impression is that a lot of MRI discussion happens in Japanese on Japanese listservs. Of course, it doesn't take a language barrier to make things seem not so transparent; for comparison, a few years ago the Rails core team started taking the majority of their discussion to private, core-team-only listservs. (I gather because they found that dealing with the peanut gallery made it too hard to get things done or decided.) At least there's still lots of open (and English-language) communication on the GitHub issues and the open listserv, but there are often times when I'm not really sure who or how or on what basis architectural decisions are made for Rails either; a lot is done in private, is my impression.

But I suspect that, to curious Japanese speakers, MRI development may even be _more_ transparent than Rails is to English speakers! I think (though I'm not entirely sure) all of the listservs MRI devs use to collaborate are actually open; you just have to read and write Japanese to understand what's there or engage effectively. :)


Well, I dunno what the man pages say or what some dudes on the internet say, but I know that on CentOS 6.7 (Final) I need to run with -Djava.security.egd=file:///dev/urandom or else I can't connect to Oracle half the time, because Java runs out of entropy. So there must still be some difference between /dev/random and /dev/urandom.


Nobody's claiming there's no difference between /dev/random and /dev/urandom. They're just saying you should always use /dev/urandom.


Not sure what's up with all the downvotes. I'm just saying that Java defaults to /dev/random, and Java is kind of big and important and all that, so I'm not sure why everyone is giving poor little Ruby crap about this.



