The Squid codebase had a custom malloc in it for a while for some operating systems, but on systems with competent malloc it used the OS implementation. I doubt that malloc remains today (I believe Squid has been somewhat rewritten in C++, with a more modern approach to memory allocation, since I was working on it).
There was a time, about 15 years ago, when many UNIX variants had a sub-optimal malloc or other core libraries, and custom implementations were common in large Open Source projects (as well as commercial products). Those operating systems are mostly gone (Solaris remains, kinda, and there's still some big iron running those old beasts), but I haven't seen a newish project using a custom malloc or other core library in years. I couldn't name the projects that made that choice back then, but I remember seeing it very frequently, when I used to read a lot more C source than I do today.
According to https://www.webkit.org/quality/leakhunting.html, it may even have multiple ones: "Get a fresh WebKit build, ideally a Debug build as this turns off the various custom allocators in WebKit."
It wouldn't surprise me to find custom allocators in many other large projects that may be memory constrained (postgres, open office, gcc, clang, you name it). At some size, the extra burden of a custom allocator isn't that large relative to the size of the project.
Yes, exactly this. Many large programs have custom allocators for performance and most of them are simple primitives built on top of chunked up pages returned by mmap/VirtualAlloc. The examples of this are all over. This obsession with shaming the OpenSSL developers for wrapping malloc is approaching unhealthy levels.
I'm not sure why this misunderstanding of the code continues to persist. It is not a custom malloc. It is a custom API for reusing chunks of a specific size previously returned by malloc.
Wow, OpenSSL smells so bad, I don't even really trust LibreSSL, at least not for a few years. This is no fault of the great devs trying to clean it up, but more a reflection of how I feel about anything security essential being written in C with a day 1 focus on security.
I personally will stay on the path of using the Go built-in SSL library. Sure there are theoretical issues, but no real ones, I trust that real ones will be handled promptly as they appear and everything is well reviewed and unit tested (and of course is written in a memory safe language). I realize this isn’t an option for the non-gophers, but for gophers... yeah. Maybe when Go (hopefully soon) can be compiled to shared libraries this will be an option for non-Go projects.
These guys are doing heroic work, kudos to them. This is not garbage I would want to have to take out.
I'm a little surprised at the downvotes, as I don't think anything I said was factually incorrect. Are people offended that I think writing something like an SSL library in C is a bad idea, perhaps?
I'm a professional systems programmer, I understand C has been the lingua franca and why.
If I were creating OpenSSL from scratch today, there would be other options. If I needed it NOW, I could use D. If I were willing to wait a few months, or to contribute Go shared library support, I could use Go. If I were willing to wait until Rust was mature enough to release, and to deal with its churn in development, I could use Rust.
I guess I'm just suffering from sour grapes over the fact HN continues to suffer more and more of down-voting for sake of disagreement rather than for factual inaccuracy.
Go or D is a very large set of dependencies for a security-focused library and hence unlikely to be suitable for a project like OpenSSL that is designed for use everywhere and by everyone in every language. Security code that needs to be audited should be as minimal as possible.
Go and D are also not yet available for every platform and would force a set of design constraints on consumers that would likely decrease its attractiveness as a solution.
Again, C remains the best choice given the goals I mentioned. It's also why I said OS systems programming and not just systems programming.
Between the golang.org compiler and gccgo, Go runs on a pretty vast array of architectures. I think for a security related project, I would be happy to best serve the 99% and leave some minor architectures to find another solution, if that meant being able to implement an SSL/TLS library in a fast, memory safe language. After all, one of the root problems of OpenSSL was that it tried to be all things to all people.
As for Go's large dependencies: are you talking about the Go runtime as a build-time dependency? Because Go is notable for its pronounced lack of runtime dependencies.
In any case, I think you and I both have an understanding of the issues at hand, I just think we would be willing to make different trade-offs.
You're right; I don't think your tradeoffs are as reasonable as you believe, and I don't think they're tradeoffs that most would be willing to make.
The reality is that those "minor" platforms you refer to are ones where a lot of big iron and expensive systems are. While Go / LLVM will eventually be supported on them, Go is still a very large dependency, and is currently not suitable for OS systems programming.
If you did choose to write a new security library in Go, and supposing that the shared library issue was addressed, you'd still have to severely constrain the design of the library to ensure maximum interoperability.
I'm just not sure it's worth the compromise.
Rust, if it was ready, and its runtime dependencies were also better supported, would be far more appropriate than Go.
A very good point; I'm considering using LLVM for some projects, but I'm only interested in 32 and 64 bit x86 and ARM targets.
What big iron targets would you say are also required? Here are some possibilities I can think of: SPARC, IBM's z/Architecture (what about ESA/390 or earlier?) and POWER (the latter of which historically includes more than one ISA), Intel's IA-64 (Itanium, used by HP).
Which doesn't mean production level quality, of course.
More critically, just because LLVM has back end support for an architecture doesn't mean any particular language using LLVM will support it; at least as I recall, the language front end has to know and use details about the back ends it's targeting.
LLVM is currently not suitable for SPARC at all; its existing SPARC backend was, to put it kindly, not written by someone intimately familiar with the architecture. There are problems on other architectures as well. ARM and x86 are really the best maintained (unsurprisingly).
Regardless, one of LLVM's biggest problems internally is the number of unaligned accesses, which hurts its performance on many architectures (except x86 of course, where the penalty is very minor). That's not specific to SPARC, but it does hurt performance there quite a bit as well as cause other problems.
Rust unfortunately has a dependency on LLVM, which is not yet available for some hardware and OS platforms. I realize that's improving, but it does make Rust adoption slower for systems programming where languages that only require a standard C or C++ compiler thrive.
Also, while I think it will likely prove a great alternative (especially for OS level systems programming), it's clearly early days and the language is still undergoing significant churn.
> and not make it take an age to open in a browser
The whole page is ~484kb, which in this day is not much at all (many pages serve more css than that). I do like to complain about big pages as well, but I can't say this is one of those.
But I agree, a PDF would be sweet. My biggest complaint with the site is that it doesn't work great on a mobile device (very small links, you can't press on a slide to go to the next, and Firefox mobile opens the page in "full size" mode with only half a slide visible without zooming back).