
I've been working on a project of mine which makes web requests via libCURL, and the number of memory allocations OpenSSL makes for a single TLS connection is astonishing - running it through `heaptrack` was a real eye-opener.
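For reference, the repro I use is roughly this (a minimal sketch; the URL comes from argv and nothing beyond libcurl itself is assumed). Build with `cc fetch.c -lcurl`, then run it as `heaptrack ./a.out https://example.com` and inspect the result with heaptrack_print:

    #include <curl/curl.h>
    #include <stdio.h>

    /* Minimal libcurl fetch to profile under heaptrack. Most of the
     * allocations show up inside curl_easy_perform(), where the TLS
     * handshake happens. */
    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s <url>\n", argv[0]);
            return 1;
        }
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL *h = curl_easy_init();
        curl_easy_setopt(h, CURLOPT_URL, argv[1]);
        CURLcode rc = curl_easy_perform(h); /* connect + TLS + request */
        if (rc != CURLE_OK)
            fprintf(stderr, "curl: %s\n", curl_easy_strerror(rc));
        curl_easy_cleanup(h);
        curl_global_cleanup();
        return rc == CURLE_OK ? 0 : 1;
    }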

Here's a discussion I found where other people mention the same thing:

https://github.com/openssl/openssl/discussions/26659

For some of my workloads (not all), `perf record` traces show that the allocation/deallocation overhead is quite significant, especially in a multi-threaded setup, where contention on the system allocator starts to become a problem.
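For anyone who wants counts rather than heaptrack flamegraphs: OpenSSL (1.1.0+) lets you install your own allocation hooks with CRYPTO_set_mem_functions(), as long as you call it before the library allocates anything. A counting sketch (the TLS connection itself is elided):

    #include <openssl/crypto.h>
    #include <stdatomic.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Atomic counters so the hooks stay safe under multiple threads. */
    static atomic_ulong n_malloc, n_realloc, n_free;

    static void *count_malloc(size_t n, const char *file, int line)
    {
        (void)file; (void)line;
        atomic_fetch_add(&n_malloc, 1);
        return malloc(n);
    }

    static void *count_realloc(void *p, size_t n, const char *file, int line)
    {
        (void)file; (void)line;
        atomic_fetch_add(&n_realloc, 1);
        return realloc(p, n);
    }

    static void count_free(void *p, const char *file, int line)
    {
        (void)file; (void)line;
        atomic_fetch_add(&n_free, 1);
        free(p);
    }

    int main(void)
    {
        /* Fails if OpenSSL has already allocated something. */
        if (!CRYPTO_set_mem_functions(count_malloc, count_realloc, count_free)) {
            fprintf(stderr, "too late to install hooks\n");
            return 1;
        }
        /* ... open a single TLS connection here ... */
        printf("malloc=%lu realloc=%lu free=%lu\n",
               (unsigned long)atomic_load(&n_malloc),
               (unsigned long)atomic_load(&n_realloc),
               (unsigned long)atomic_load(&n_free));
        return 0;
    }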

Absolutely. In performance tests with OpenSSL, you sometimes notice that performance varies significantly just by switching to a different memory allocator, which is frankly scary.

I hadn't seen the discussion above, thanks for the pointer. It's surreal. I don't see how supporting multiple file formats requires so many allocations. In the worst case you read the file into a buffer (one malloc and occasionally a few reallocs) and try to parse it into a struct with a few different decoders. I hope they're not allocating one byte at a time when reading a file...
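Concretely, the pattern I have in mind (just a sketch; `slurp` is a name I made up): read the whole file with one malloc, doubling with realloc only when the initial guess is too small, then hand the same buffer to each candidate decoder.

    #include <stdio.h>
    #include <stdlib.h>

    /* Slurp a whole file into one buffer: one malloc up front,
     * the occasional realloc if the file outgrows the guess. */
    static unsigned char *slurp(const char *path, size_t *len)
    {
        FILE *fp = fopen(path, "rb");
        if (!fp)
            return NULL;
        size_t cap = 64 * 1024, used = 0;
        unsigned char *buf = malloc(cap);        /* the one malloc */
        while (buf) {
            used += fread(buf + used, 1, cap - used, fp);
            if (used < cap)
                break;                           /* EOF (or read error) */
            unsigned char *tmp = realloc(buf, cap *= 2);
            if (!tmp) {
                free(buf);
                buf = NULL;
            } else {
                buf = tmp;
            }
        }
        fclose(fp);
        if (buf)
            *len = used;
        return buf;
    }

Two heap calls for a typical file, no matter how many decoders then try to parse it.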


I've noticed a similar issue in a different crypto library (mbedTLS): IIRC its MPI implementation makes and frees _a lot_ of tiny allocations during ECC operations.
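mbedTLS makes this easy to observe, actually: if the library is built with MBEDTLS_PLATFORM_MEMORY, you can swap in counting wrappers via mbedtls_platform_set_calloc_free() and diff the counters around a single ECC operation (the operation itself is elided here):

    #include <mbedtls/platform.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Plain counters: fine for a single-threaded experiment. */
    static unsigned long n_calloc, n_free;

    static void *count_calloc(size_t n, size_t size)
    {
        n_calloc++;
        return calloc(n, size);
    }

    static void count_free(void *p)
    {
        if (p)
            n_free++;
        free(p);
    }

    int main(void)
    {
        mbedtls_platform_set_calloc_free(count_calloc, count_free);
        /* ... one ECDSA sign/verify here; every MPI grow/free
           shows up in the counters ... */
        printf("calloc=%lu free=%lu\n", n_calloc, n_free);
        return 0;
    }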
