Hacker News
BoringSSL (imperialviolet.org)
303 points by tptacek on Oct 20, 2015 | 112 comments



"Random number generation in OpenSSL suffers because entropy used to be really difficult. There were entropy files on disk that applications would read and write, timestamps and PIDs would be mixed into entropy pools and applications would try other tricks to gather entropy and mix it into the pool. That has all made OpenSSL complicated.

BoringSSL just uses urandom—it's the right answer. (Although we'll probably do it via getrandom rather than /dev/urandom in the future.) There are no return values that you can forget to check: if anything goes wrong, it crashes the address space."
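
Concretely, the "use urandom, crash on failure" approach described there looks something like this (a rough sketch of the idea, not BoringSSL's actual code; the helper name is made up):

    #include <stdio.h>
    #include <stdlib.h>

    /* Fill buf with len random bytes from /dev/urandom, or abort.
       There is no error code for the caller to forget to check. */
    static void must_get_random(unsigned char *buf, size_t len) {
        FILE *f = fopen("/dev/urandom", "rb");
        if (f == NULL)
            abort();
        if (fread(buf, 1, len, f) != len)
            abort();
        fclose(f);
    }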

There is a meme that pops up from time to time on HN, that "/dev/urandom is okay for stretching out random data but not sufficient for random seeds", that I've never understood. tptacek seems to be consistent in suggesting, "Just use /dev/urandom. Period." And it looks like people who also have a lot on the line concur.

If this is the case, I'm wondering if we'll see a day when gpg might migrate from /dev/random to /dev/urandom?


It's just "because entropy used to be really difficult!" We now have relatively simple and proven algorithms (Yarrow, Fortuna) which can take 256-bits of entropy, and deliver effectively infinite streams of high quality entropy from that core without leaking any of the seed, or skewing the output. In such a system, pulling bits out of the core entropy bucket is counter-productive.

As djb says:

  Cryptographers are certainly not responsible for this superstitious
  nonsense. Think about this for a moment: whoever wrote the /dev/random
  manual page seems to simultaneously believe that

    (1) we can't figure out how to deterministically expand one 256-bit
    /dev/random output into an endless stream of unpredictable keys (this
    is what we need from urandom), but

    (2) we _can_ figure out how to use a single key to safely encrypt many
    messages (this is what we need from ssl, pgp, etc.).

  For a cryptographer this doesn't even pass the laugh test.

BSD and OS X are more modern than Linux in their choice of CS-PRNG; they used to use Yarrow, and FreeBSD added Fortuna support in 2014 (behind a config flag) and will make it the default in 11.0. Linux has used largely the same CS-PRNG for /dev/urandom since 1994, with some upgrades to its mixing function, changes to prevent DoS against /dev/random, and a recent addition of random bytes extracted from the RDRAND instruction on Intel CPUs.

I'm not sure why we haven't seen the underlying CS-PRNG switched wholesale to Fortuna in Linux. For example, they are still using a SHA-1-based output hash, and it seems like it might be time for that to go.

The higher level take-away is that trying to avoid the modest code complexity of the CS-PRNG by going directly to /dev/random is more likely to lead to an exploitable catastrophic failure than sticking with /dev/urandom.
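
To make the 256-bit-seed point above concrete, here is a toy sketch of deterministic expansion (hash of seed plus counter). It is purely illustrative - not Fortuna, no reseeding, not for real use - but it shows why drawing output never needs to "drain" the seed:

    #include <openssl/sha.h>
    #include <stdint.h>
    #include <string.h>

    /* Toy expander: block i of output is SHA-256(seed || counter).
       The seed itself never appears in the output (assuming SHA-256
       preimage resistance), and the stream can continue indefinitely. */
    struct toy_drbg {
        uint8_t  seed[32];   /* assume: filled once from a real entropy source */
        uint64_t counter;
    };

    static void toy_drbg_next(struct toy_drbg *d, uint8_t out[32]) {
        uint8_t buf[40];
        memcpy(buf, d->seed, 32);
        memcpy(buf + 32, &d->counter, sizeof(d->counter));
        d->counter++;
        SHA256(buf, sizeof(buf), out);
    }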


The advice of /dev/urandom vs. /dev/random is case-specific and doesn't apply to all operating systems. For Linux systems, in general, it would appear it doesn't matter and /dev/urandom is sufficient.

However, on systems such as Solaris, the cryptography group (whom I know personally) specifically recommends the use of /dev/random in some cases. Those cases are largely limited to specific cryptography requirements such as a hard requirement for high entropy sources.

Regardless, all of this is largely moot at this point, as all *NIX-like systems (including Solaris) seem to be standardising on getrandom() (Linux first) and getentropy() (OpenBSD first), which sidestep many of the more subtle potential issues.


Can you personally ask the Solaris cryptography group why, in detail, they think this is the case? No explanation I've seen for this makes sense.

In particular: the Solaris documentation suggests that urandom is fine for nonces, and /dev/random for long-term keys. But that's the opposite of the normal threat model for randomness! You care most about high-quality continuous-availability randomness for the nonces, where biases can be devastating to security, and attackers get repeated continual bites at the apple.

If someone can explain the logic here, I'll update the page that AGL linked to in this article to account for that.

I think the Solaris people are wrong. I'm not sure, but I'd be willing to bet a small amount of money on it.


This is a very interesting point. An RNG that fixes the high bit of every word to zero still makes reasonably secure private keys, but start signing stuff with DSA and that RNG and you're screwed.


Can you personally ask the Solaris cryptography group why, in detail, they think this is the case? No explanation I've seen for this makes sense.

I have, and again, most of the reasoning is Solaris-specific.

The primary difference between urandom and random is currently higher quality re-keying for /dev/random, but there are a few internal implementation differences as well.

Fundamentally, it's about constraints placed on the implementation that currently make /dev/random more "secure". In addition, for Solaris /dev/random (and getrandom(GRND_RANDOM)) when the administrator explicitly configures a specific validation mode then the bits you get come from a validated DRBG (Deterministic Random Bit Generator). /dev/urandom is not affected by that validation mode.

I think the Solaris people are wrong. I'm not sure, but I'd be willing to bet a small amount of money on it.

They're really not in this case, but in fairness, you'd have no way of knowing about this without source code access and a few decades of knowledge about Solaris crypto.

You can find a summary from two years ago written by one of the primary Solaris cryptography engineers and architects here:

https://blogs.oracle.com/darren/en_GB/entry/solaris_random_n...

...they intend to update it soon for the next Solaris release.


Yeah, I've read that page. It doesn't explain what makes /dev/random better than /dev/urandom. I'm not trying to be obtuse: I'm telling you, the Linux kernel maintainer and architect for random/urandom also believes that urandom is inappropriate for long-term keys on Linux, and he is also wrong. So that particular argument from authority is not compelling to me.

From what I can tell, the situation on Solaris is very similar to that on Linux:

* there are two separate pools that service random and urandom,

* but both use very similar generators (in particular, both use CSPRNG DRBG constructions);

* some additional care is taken in random's case to ensure initialization (a good thing)

* but that care doesn't matter once this system is fully booted,

* and random is more aggressive about reseeding,

* but that only matters for post-compromise forward-secrecy, the need for which in this case implies you are completely boned anyways.

I'm standing by what I said before: use urandom, to the exclusion of all other generators, very much including Solaris.


both use very similar generators (in particular, both use CSPRNG DRBG constructions)

Except /dev/random, which can use a different generator when the administrator configures a specific validation mode. Again, specific cryptographic requirements.

that only matters for post-compromise forward-secrecy

Not all cryptographic requirements are purely technical or even "ideal"; some reflect specific policy and/or customer requirements based on their specific cryptographic needs.

I'm standing by what I said before: use urandom, to the exclusion of all other generators, very much including Solaris.

To you, this advice is based on "authority"; to me, it's based on the long-term friendship and working relationship I've established with them over the years and my respect for them as engineers/developers/crypto experts. These are people who would run circles around most "geeks"; they have been doing crypto for a decade or more, and they have both the pedigree and the track record to show for it.

So in the end, you're of course free to believe what you'd like, and while I agree that your advice is generally reasonable, I continue to assert that it is wrong in specific cases for Solaris. As I already pointed out, and you seem to have ignored, the results you get from /dev/urandom are not the same as /dev/random in certain configurations.

As I said before, the Solaris crypto team intends to update the blog post I linked before with more details at a later date. Perhaps that will have the elusive information you're looking for.


I'm suggesting that there are no such cryptographic requirements. I'm not alone in making that suggestion; you can, for instance, look up Thomas Pornin's comments on Crypto Stack Overflow for a similar discussion.

I'm sticking with this argument because it is a common one. There's a widespread belief that there are low-quality and high-quality random numbers (or, if you must, "random numbers suitable for one kind of cryptographic application" and "those suitable for another"). I'm pretty sure this is an urban myth.

To me, in this case, the smoking gun is the Solaris team blog post that suggests urandom is appropriate for ephemeral and short-term secrets and nonces, and that random is appropriate for long-term secrets.

Unless they're trying to communicate that random is worse than urandom, and so it's safer to use it in offline scenarios, but not in demanding online scenarios, they have the threat model exactly backwards.


I'm suggesting that there are no such cryptographic requirements.

There are indeed such requirements; look up the requirements for FIPS validation. As I mentioned before, some requirements may not be purely technical in nature. You're free to feel they're not necessary, but some organisations clearly do.

Unless they're trying to communicate that random is worse than urandom, and so it's safer to use it in offline scenarios, but not in demanding online scenarios, they have the threat model exactly backwards.

They're not; I think it's just how you're personally interpreting the text. We're just going to have to agree to disagree.


I've spent 20 minutes reading this thread. You have never actually said anything other than "I have friends who tell me this is the way it is, so I trust them" and "there are Solaris-specific requirements that make Solaris different."

tptacek has responded with specificity, actually detailing how the Solaris team is, in fact, wrong in their guidance.

From my perspective we have a two-way conversation in which one person is arguing about what their friends say, and the other is trying to talk detailed cryptographic requirements.

It might help if your Solaris friends could identify what those FIPS requirements are, and how /dev/random fulfills them but /dev/urandom does not. That would be interesting.

After all - the Linux team, who presumably also consists of pretty smart people, also believed they were right regarding /dev/random versus /dev/urandom, and it turns out it wasn't the case with them either.


tptacek has responded with specificity, actually detailing how the Solaris team is, in fact, wrong in their guidance.

No, tptacek has responded with a view that suggests beliefs about how they are wrong in their guidance, but has done so without 1) access to the actual implementation or 2) the decades of experience of implementing and architecting it.

tptacek is certainly free to express opinions; but respectfully, I will trust the individuals who have implemented and architected it, who are considered experts in their field, and who have maintained it for the last decade or longer, over someone who has not.

After all - the Linux team, who presumably also consists of pretty smart people, also believed they were right regarding /dev/random versus /dev/urandom, and it turns out it wasn't the case with them either.

I pointed out very specifically why my assertions about Solaris are correct. When you configure the system in a specific way, the implementation for urandom vs. random can produce different results. There are also other subtle differences in the implementations.

Any further details that can be shared will likely be placed in that blog post I linked to when it is updated for a forthcoming Solaris release.


"However, on systems such as Solaris, the cryptography group (whom I know personally) specifically recommends the use of /dev/random in some cases. Those cases are largely limited to specific cryptography requirements such as a hard requirement for high entropy sources."

I'd also like to know what their specific arguments are on Solaris and for what use cases.


Linux is one of the cases where the difference matters a lot, because /dev/random is blocking. I think it's BSD where the two devices are the same.


The difference matters, but on Linux, the difference means that you need to avoid /dev/random, because the blocking behavior virtually never helps security but always reduces reliability.


"because the blocking behavior virtually never helps security but always reduces reliability."

I like how you put that. Best way to put it, because infrastructure reliability is way too important to sacrifice over questionable security gains. Puts a QED on the whole discussion.


Of course you probably already know this, but I think the root problem that OP is driving at here is that if you use /dev/urandom then you risk getting predictable values from /dev/urandom at startup[1], e.g. when initializing your server's SSH keys (or whatever). I seem to recall this being the root cause of thousands upon thousands of home routers having weak keys. As such, it's important to point out.

[1] That is, before enough external entropy has been gathered.


I think the new `getrandom` interface fixes that, by blocking until /dev/urandom is initialized.


Yes. Yes, it does... but it's not transparent to userspace, which I think is what tptacek was alluding to. (And, frankly, is how it should be. However, the Linux kernel maintainers are absolutely fanatical about backward compatibility for userspace, so here we are with a new syscall.)


If your cryptography requirements require high entropy for the random numbers generated, then blocking is the right answer.

In practice, that's rare, but some consumers have strict, hard requirements that can't be ignored.


Once your CSPRNG has enough entropy in its internal state it has got this entropy.

Retrieving output bytes doesn't change anything.

The view "take three bytes output, lose three bytes entropy" is simply not correct.


That doesn't match the repeated assertions of many posters here. I can only speak for Solaris knowledge-wise.


Here is a rough diagram which demonstrates it http://www.2uo.de/myths-about-urandom/structure-yes.png (from the detailed page http://www.2uo.de/myths-about-urandom/). /dev/random will block when the entropy estimate is low, but in a practical sense, the outputs of /dev/urandom and /dev/random are both random.


"That doesn't match the repeated assertions of many posters here" - I'm curious, what doesn't match?


Better to block than spit out low entropy data.


That's true the way you say it, and that's definitely the problem that `getrandom()` solves by blocking if the random pool isn't initialized yet. But a lot of people take that to mean "you should never get more than N random bytes from a pool that has N bytes of entropy", and that part is wrong.

All of crypto relies on being able to generate an arbitrary amount of good random bytes from a single 256-or-whatever-byte seed. Otherwise it wouldn't be safe to encrypt a long message with a short key.


libgcrypt (GPG) does the same thing OpenSSL did. It uses /dev/random to seed an internal PRNG. But you can switch to using just the system PRNG:

    gcry_control(GCRYCTL_SET_PREFERRED_RNG_TYPE, GCRY_RNG_TYPE_SYSTEM);

I don't know if they would be convinced to use /dev/urandom though. They already rank /dev/random as having lower "security" than their own PRNG.
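
For context, a rough sketch of where that control call might sit in an application's libgcrypt initialisation (this is one plausible ordering; the libgcrypt manual is the authority on exactly when the preferred-RNG control must be issued):

    #include <gcrypt.h>
    #include <stdlib.h>

    static void init_gcrypt_with_system_rng(void) {
        if (!gcry_check_version(GCRYPT_VERSION))
            abort();
        /* Ask for the system RNG instead of libgcrypt's own CSPRNG;
           this has to happen before the RNG is first used. */
        gcry_control(GCRYCTL_SET_PREFERRED_RNG_TYPE, GCRY_RNG_TYPE_SYSTEM);
        gcry_control(GCRYCTL_INITIALIZATION_FINISHED, 0);
    }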


It comes from all the posts by smart people and even the man page saying to use /dev/random if security matters. Without digging into the code of the two, I also took their word for it the few times I used it in my apps many years ago. Mostly I just used CRNGs while dealing with entropy myself, though, so it was limited to a few designs & countering blocking. (Wasted effort in retrospect...) It took quite a while before someone said, "Hey, they go through the same CRNG." I'm like, WTF? Why have so many sources that should be knowledgeable about this been giving bad advice?

Anyway, a number of write-ups (esp Huhn's) and people like Ptacek have been clearing up the issue well. Meme should weaken over time. Might help to straight-up delete the bad claims from whatever sources people are seeing them, though, where possible. Man page comes to mind if it hasn't been updated.


Thomas Pornin is relentlessly hunting down that meme on stackexchange.

I guess Wikipedia would be the highest-profile source for most people.

The English-language one mirrors the Linux man page; the German-language Wikipedia goes further and outright advises against /dev/urandom for "high requirements". This used to be even worse, though.


Yeah, the Wikipedia articles are a problem. I'll see what I can do when back in town on my main system.


The problem is immediately after boot before anything has seeded the entropy pool.

Like say when sshd generates host keys!


> The problem is immediately after boot before anything has seeded the entropy pool.

With getrandom() on Linux it will block if you don't set GRND_NONBLOCK and the pool isn't initialized - http://man7.org/linux/man-pages/man2/getrandom.2.html - the desired behaviour.


Which is why the getrandom() syscall is preferred; it draws from the urandom pool, but will block until a quantity of initial entropy has been fed into the pool (ie, at early boot).
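
A minimal sketch of using it (assumes Linux 3.17+ and a libc that exposes <sys/random.h>, e.g. glibc 2.25+):

    #include <errno.h>
    #include <stdlib.h>
    #include <sys/random.h>
    #include <sys/types.h>

    /* Draws from the urandom pool; without GRND_NONBLOCK this blocks at
       early boot until the pool has been initialised, then never again. */
    static void fill_random(unsigned char *buf, size_t len) {
        while (len > 0) {
            ssize_t n = getrandom(buf, len, 0);
            if (n < 0) {
                if (errno == EINTR)
                    continue;      /* interrupted while blocking; retry */
                abort();           /* nothing sensible to do on failure */
            }
            buf += n;
            len -= (size_t)n;
        }
    }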


Is there any actual reason why /dev/random doesn't do that?


Stubbornness of Linux kernel maintainers. My understanding is that your proposed behavior is the behavior on FreeBSD and OS X, and people have proposed bringing Linux in line.


getrandom() has a mode where it acts fully like /dev/random - and it's for some (misguided but very important to some big businesses) security certifications


I'm not sure why this is supposed to be a tricky problem. Seed urandom from /dev/random, just for the side-effect of blocking, before you generate host keys. Never use /dev/random again.


AFAICT, to do that reliably[1] for all applications using /dev/urandom, you'd have to insert a step blocking all applications at startup until /dev/urandom was seeded. Even applications that have no need for anything from /dev/{u,}random.

(EDIT: I suppose one might try to replace /dev/urandom with some pipe-like thing running in userspace, but that seems error prone and rather contrary to /dev just being "devices".)

[1] Without just doing it at the kernel level, which the Linux kernel developers seemingly still stubbornly refuse to do.


But you keep explicitly telling people not to do this?


I understand what you are saying - so I hope nobody downvotes you. I've spent hours reading the HN, and Stack Exchange stuff on this - and I think the conclusion is, "/dev/random is really bad, because it blocks, breaking all sorts of programs, and 99.99% of people who use it instead of /dev/urandom aren't getting anything that they couldn't get from /dev/urandom, except the blocking behavior. The one possible exception is on a system in which /dev/urandom hasn't yet been seeded, but there are many ways to fix that, the most straightforward of which is to seed /dev/urandom from /dev/random, and then never look at /dev/random again. Or, better yet, use http://man7.org/linux/man-pages/man2/getrandom.2.html which provides a guarantee of entropy, as well as non-blocking nature once it's been properly seeded. The only time it would ever block would be on boot-up prior to it collecting the very few (approx 256?) bits required to generate effectively endless bytes."


Geht yourself a real OS. In OpenBSD the bootloader takes care oft this. Thats good enough to geht random data right wenn the kernel starts, and by the time init runs it's gathered way more entropie.


Umm, is there something wrong with your keyboard?


But large amounts of OpenSSL could simply be discarded given our more limited scope. All the following were simply never copied into the main BoringSSL: Blowfish, Camellia, CMS, compression, the ENGINE code, IDEA, JPAKE, Kerberos, MD2, MDC2, OCSP, PKCS#7, RC5, RIPE-MD, SEED, SRP, timestamping and Whirlpool. The OpenSSL that we started from has about 468,000 lines of code but, today, even with the things that we've added (including tests) BoringSSL is just 200,000.


Of those, Camellia isn't altogether terrible, and still somewhat important for the box checkers in Japan.


That's one of the most heartening things about BoringSSL: they've ditched a couple things that have credible real-world use cases, because the risk/reward simply wasn't worth it.

If you leave things in to appease the box-checkers, you end up with, well, OpenSSL.


I solidly agree with that in general. Camellia actually has a funny backstory in LibreSSL. It was for many years disabled in OpenBSD because it's patented and there wasn't a completely free license grant. When we went looking for things to delete, it seemed like an obvious choice, but then Theo took another look at the patent situation and decided we could enable it. And so we ended up turning on a feature in the midst of trying to delete them.


"If you leave things in to appease the box-checkers, you end up with, well, OpenSSL."

Whoa, whoa, let's not be so cruel to other crypto developers. You can appease the box-checkers quite a bit without being an utter, insecure mess like OpenSSL. There are libraries that do so while making the box-checking stuff optional.

It would be true had you said: If the only thing you do is leave things in to appease the box-checkers...


OCSP? Surely Chromium and Google servers use this. Edit: Ohhhh, just the protocol; stapling and parsing are fine.


<strike>I don't know if that line means they've removed all OCSP support from Chromium (though I hope they did) but worth noting that AGL isn't an OCSP supporter:

https://www.imperialviolet.org/2014/04/19/revchecking.html

Revocation doesn't really work yet. Is there deployed stapling-required yet?</strike>

See AGL's comment below.


I think it just became an RFC.


200,000 lines of code doesn't sound like a whole lot. Couldn't someone like Google rewrite it in Rust?


I am not like Google, but I'm already doing something like that: https://github.com/briansmith/ring and https://github.com/briansmith/webpki.


I think Rust has to mature some more. As far as I know, it is currently not possible to implement constant-time functions using stable Rust.


> As far as I know, it is currently not possible to implement constant-time functions using stable Rust.

For those wondering, that's because asm! is not stabilised and the stabilisation path for it is unclear[0]; there's also been a proposal for an alternative asm![1]. Rust otherwise relies on LLVM, and Brian Anderson reported[2] having

> been told by LLVM folks that getting LLVM to do constant time code generation is essentially hopeless, and it should just be written in asm.

Nadeko[3] aims to do exactly that, but because of the above only works on nightly.

[0] an RFC was created, then retracted[4] in favour of discussions on internals[5]

[1] https://github.com/rust-lang/rfcs/pull/129/files

[2] http://rust-dev.1092773.n5.nabble.com/Rust-crypto-highlights...

[3] https://github.com/klutzy/nadeko

[4] https://github.com/rust-lang/rfcs/issues/1274

[5] https://internals.rust-lang.org/t/stabilization-path-for-asm...


Why do you need asm! for it? .s files exist, and as far as I understand, are easy to link to.


I'm not sure composition would work, so you'd have to code every constant-time function entirely in assembler.

And now I wonder why there isn't already such a project; surely the number of necessarily constant-time functions is relatively low? Constant-time equality (of byte sequences), constant-time conditional (`a if b else c`), constant-time comparison (<=) and maybe constant-time byte copy?


That's been the consensus I've seen from the group working on Octavo, a rust crypto library. http://lukasz.niemier.pl/octavo/octavo/index.html

It's a possible future goal of the project to end up API compatible to OpenSSL to do just what the GP was saying. I think the constant time functions are the biggest reason it can't happen yet, though there's probably more I'm just not aware of.


Someone at Google (the author of the blog post, Adam Langley) rewrote a lot of what OpenSSL does in Go: https://golang.org/pkg/crypto/


Reviewing and improving literally every function in OpenSSL one at a time is a truly heroic effort.

A part of me regrets that so much time and skill is being sunk into a swamp like OpenSSL though. Surely by now it'd have been easier for Google to produce a much more modern C++ based SSL toolkit that doesn't have this ridiculous litany of problems to resolve? An OpenSSL API emulation could then have been layered on top. I realise a lot of C/C++ apps rely on the OpenSSL API, but I hope one day the industry finds a way to rip off the sticking plaster.


> An OpenSSL API emulation could then have been layered on top.

Not really. OpenSSL exposes "internal" data structures in its API, leaking e.g. X.509 data structures through[0]. The only way to expose an OpenSSL API emulation is to be OpenSSL.

That's why the libressl project started libtls[1] (née ReSSL) as a clean-slate abstracted API.

[0] http://www.tedunangst.com/flak/post/goreSSL

[1] http://www.openbsd.org/cgi-bin/man.cgi/OpenBSD-current/man3/...


Oh thank goodness someone is making a saner API.


  An OpenSSL API emulation could then have been layered on top.
I feel like a full OpenSSL API emulator (with ABI!) would be about as big as OpenSSL itself.


Not necessarily. A complete rewrite can often introduce more bugs than are solved. There's lots wrong with OpenSSL, but there's also a lot it does correctly.

Refactoring code like this is actually a good way of doing things!


> An OpenSSL API emulation could then have been layered on top.

A major problem with OpenSSL is the API. It's hard to use correctly and encourages bad code. Effort would be much better spent transitioning applications away from that API than trying to emulate it.


I feel like this would actually be a good project for a Rust rewrite. I like to think of Rust as C or C++ except with memory "static type-checking". That seems like a huge incentive.


It's difficult/impossible to write constant-time functions in Rust in its current state, AFAIK.


Could you link something relevant? I find it hard to believe - seems you could do it the same way every other language does. For example OR all the bytes XORed between two arrays, then compare to 0.

Maidsafe seems to be implementing it somehow for example: http://maidsafe.net/sodiumoxide/master/maidsafe_sodiumoxide/... (see comments about PartialEq)
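
For what it's worth, the bitwise pattern described above looks like this in C (with the caveat from upthread that neither a C nor a Rust compiler hard-guarantees it stays constant-time after optimisation, hence the interest in asm):

    #include <stddef.h>
    #include <stdint.h>

    /* Returns 0 iff a and b are equal over len bytes.  The loop always
       touches every byte, so timing depends only on len, not on where
       (or whether) the buffers differ. */
    static uint8_t ct_compare(const uint8_t *a, const uint8_t *b, size_t len) {
        uint8_t acc = 0;
        for (size_t i = 0; i < len; i++)
            acc |= a[i] ^ b[i];
        return acc;
    }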


I did not realize that; looking things up I can verify your claim. There is an RFC out there for the ability to write constant time functions. That would be great and I hope it makes it in. Thanks for informing me.


Not more difficult than C - you write the crypto functions in asm. You could use a C compiler to handle the ABI but the code isn't really C code.


How much code needs to be constant time? Compared to all the management code and parsing and so on?


How does BoringSSL compare to LibreSSL? (http://www.libressl.org/)

Both projects seem to have a similar goal, that is, throwing the crap out of OpenSSL for security reasons.


BoringSSL caters first and foremost to Google's interests; LibreSSL caters to OpenBSD's interests. Either choice affected the final result in terms of what-can-run-where.


LibreSSL is mostly a drop-in replacement for OpenSSL, while BoringSSL has removed things that some applications will depend on. OSes shouldn't/couldn't replace OpenSSL with BoringSSL (the article says as much) but could replace OpenSSL with LibreSSL (some already have).


BoringSSL seems like the more ambitious project, LibreSSL the more pragmatic.


> For much of the code, lengths were converted from ints to size_ts and functions that returned one, zero or minus one were converted to just returning one or zero.

Doesn't google's C code style require using signed ints for lengths?

https://google-styleguide.googlecode.com/svn/trunk/cppguide....

As it should - it's inappropriate to use unsigned types for lengths: the overflow and underflow behaviors are defined, but they're not defined to be what you want. When you have UBSan around, adding defined behavior you don't need just adds places to hide bugs.


Overflow can happen and we test for it wherever possible (at least that's the hope). By using unsigned lengths, the overflow tests are simple: x + y < x. We also don't have to worry about negative values which, in my experience, are at risk of being forgotten more often than excessively large ones.
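
A minimal illustration of that check (a hypothetical helper, using the one-or-zero return convention from the quoted passage above):

    #include <stddef.h>

    /* Returns 1 and writes x + y to *out, or returns 0 if the unsigned
       addition wrapped around (the "x + y < x" test mentioned above). */
    static int add_lengths(size_t x, size_t y, size_t *out) {
        if (x + y < x)
            return 0;
        *out = x + y;
        return 1;
    }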


In this case, and many others, Google's code style guidelines are controversial...


Forgive my ignorance, but am I right in thinking that removing OCSP won't affect OCSP stapling?


That's correct. OCSP stapling is still in the TLS layer and it returns the OCSP information as an opaque blob.


Great project; I think that BoringSSL is more geared towards server applications. I think the major improvements here are:

- improvements in how they handle locks and effort to avoid locks + use of thread local storage.

- better protocol test suites (adopted from the Go SSL implementation)

- very good that they threw out support for TLS renegotiation; nobody understands it, and it is always a big source of bugs.

I am not sure that a good-quality, non-blocking native random number generator is available on every client platform (other than Linux); it's interesting that they also use BoringSSL in Chrome (maybe here it would have been better to use the old crypto number generator, where you only have to worry about good random seed values).

(BTW, do they have the good /dev/urandom in Android? Here they say Android has just /dev/urandom - no /dev/random at all - so this seems to be general policy at Google.

http://security.stackexchange.com/questions/14669/how-to-sel... )


Now apparently this paper http://accepted.100871.net/jni.pdf ("Android Low Entropy Demystified" by Yu Ding, Zhuo Peng and Chao Zhang, from 2014) says that the Android random number generator does not have a lot of entropy, which means it is not very secure (note to self: don't buy anything online from an Android phone).

It's an interesting question whether this insecurity of the phone OS is a feature or a design requirement ...

However, for use with Google's servers it is all pretty much secure; that is certainly a design requirement.


I thought about it a bit: by putting the random number generator into /dev/urandom they can claim plausible deniability if the random numbers can be guessed (and you end up with a compromised session key). However, that might also be unintended rather than on purpose.


That's forward progress. How did OpenSSL ever get to be so complicated, anyway? They still have 200,000 lines of code. It doesn't do that much.


It helps if you understand that OpenSSL is itself a fork of an earlier SSL library, SSLeay, which Eric Young has said was a project he used to learn C with; SSLeay goes all the way back to the days when you needed to pay for a special license for Apache-SSL. It's very, very old code.


That's not correct; it was to learn bignum arithmetic, not C.



It actually started as a wrapper around Young's much earlier DES implementation, written as a drop-in replacement for Solaris's DES tools which were unavailable outside the US because of export restrictions.

I know this because I've dealt with a company that distributes files in Solaris-compatible DES format from an unencrypted FTP server and helpfully supplies Young's libdes to interface with it.

Well, at least it isn't EBCDIC...


It does a lot of things it really shouldn't be doing. I recommend you watch Bob Beck's talk about LibreSSL[1] (Ted's there too).

[1]: https://www.youtube.com/watch?v=GnBbhXBDmwU


The discussion of Big Endian amd64 support at 38:50 in that video is hilarious and scary.


That whole discussion on the slide from 38:50 is hilarious and scary.

"You can't turn off the debugging malloc but you can turn off sockets"

"If the size of socklen_t changes while your program is running, OpenSSL will cope"


Big Endian AMD64? Cisco support? iOS compiled big endian to little endian? WTF!? Lmao crazy on so many levels...

EDIT to add:

Epic observation: "The good news is: if the size of socklen_t changes while your program is running, then... (other guy) OpenSSL will cope." Lol.


If I remember correctly, Cisco was using PowerPC, which is big endian. I do believe that might be a factor in the madness.


Probably thought it would be easier to write a converter than figure out how to change their code without breaking it. That's my guess. Wouldn't have even been a problem if Intel/AMD had chosen the best endianness. ;)


I love how OpenBSD is consistent about using Comic Sans in their material.

They clearly haven't lost their sense of humour despite working with this terrible material. Lesser men would turn cynical over half the things they have to endure :)


Did they take out those "debug" environment variables which turned on key logging and crypto bypass?


What are you talking about?


Set SSLKEYLOG to a filename, invoke something that uses OpenSSL, and take a look at the file contents. [1]

[1] https://isc.sans.edu/forums/diary/Psst+Your+Browser+Knows+Al...


So this is yet another openssl fork, similar to libressl?


They literally have a section called "'Forking'" in the article just to address what BoringSSL is in relation to OpenSSL.


I read through it and was wondering how it's different from LibreSSL. Maybe they can join forces to give us another option that is more modern and clean than OpenSSL itself, but BoringSSL looks more like a version for Google's own needs, while LibreSSL is more general-purpose.


Got quite a few downvotes here, just wondering why. I read the blog and asked a general question - what's wrong with that? Anyway, it's getting boring.


I think people disliked the comment because the question sounded dismissive of the project rather than inquisitive — primarily due to the "yet another," I think. Sort of like how "Who invited you?" is technically a question, but usually functions more like a statement that somebody's presence is unwelcome.


Non-native speaker here, and that explains it. Thanks for replying.


Why is everyone pretending that LibreSSL doesn't exist? Why fork something known to be broken, rather than strip down LibreSSL further? Does this idea make sense to anyone?


BoringSSL has been pretty helpful to LibreSSL too, as a source of code and ideas.


BoringSSL started prior to LibreSSL.


"is now powering Chromium (on nearly all platforms), Android M and Google's production services"

Does it mean you can more easily exploit Google's backend, since the code is open source?


OpenSSL, as you can guess, is also open source. Having BoringSSL open-sourced does not change anything.


Google hadn't been using OpenSSL for most of their services; in fact, they discovered Heartbleed when studying the possible transition to OpenSSL from NSS; they ended up transitioning directly to BoringSSL.

This doesn't detract from your point, since NSS is also open source, it's just a factoid.


Who knew that Google was using OpenSSL?


The security of a system should never rely on an attacker not knowing the details of the algorithm or implementation.

https://en.wikipedia.org/wiki/Security_through_obscurity


It is possible to fingerprint a TLS stack using behaviour that a sysadmin can't change. People knew it was OpenSSL.



