
It's not out of the question; just use your mind. If your financial upfuckery is implemented as a wide array of (say) 1000+ instances of a Go program, each of which is capable of saying "stop sending me traffic, I'm about to stop for a full GC" a few milliseconds before actually stopping, then latency will not be impacted, and throughput will only be degraded in proportion to the time spent corking and uncorking the input relative to the length of a GC round.
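A minimal sketch of what that handshake could look like in Go, assuming a load balancer that polls a health endpoint and stops routing to instances that report unhealthy. The endpoint names, interval, and drain delay here are illustrative, not anything the parent specified:

    // Cork/uncork sketch: report unhealthy, wait for the load balancer
    // to drain traffic, run a full GC, then report healthy again.
    package main

    import (
        "net/http"
        "runtime"
        "runtime/debug"
        "sync/atomic"
        "time"
    )

    var draining atomic.Bool

    func main() {
        // Disable heap-growth-triggered GC so collections (mostly) run
        // only at the corked points we choose below.
        debug.SetGCPercent(-1)

        http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
            if draining.Load() {
                w.WriteHeader(http.StatusServiceUnavailable) // cork: LB stops sending traffic
                return
            }
            w.WriteHeader(http.StatusOK)
        })
        http.HandleFunc("/work", func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("ok")) // stand-in for the actual request handling
        })

        go func() {
            for range time.Tick(30 * time.Second) {
                draining.Store(true)              // announce the pause
                time.Sleep(50 * time.Millisecond) // let the LB notice and in-flight work drain
                runtime.GC()                      // full collection with no traffic arriving
                draining.Store(false)             // uncork
            }
        }()

        http.ListenAndServe(":8080", nil)
    }

Per instance the throughput cost is roughly (drain time + GC time) / GC interval, which a wide fleet absorbs without any one request seeing the pause.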



A technique along similar lines is to run your upfuckery implementation on multiple servers and have each of them publish its results. Then you take the first result to arrive and discard the rest. Since it's unlikely that every replica pauses at the same moment, this weeds out the majority of small pauses.
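For concreteness, a sketch of the take-the-first-answer pattern in Go, assuming the replicas are queried over HTTP (the URLs and timeout are made up):

    // Fan a request out to every replica and keep whichever response
    // arrives first; slower replicas (e.g. ones mid-GC) lose the race.
    package main

    import (
        "context"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func firstResult(ctx context.Context, replicas []string) (string, error) {
        ctx, cancel := context.WithCancel(ctx)
        defer cancel() // cancel the losing requests once we have a winner

        results := make(chan string, len(replicas)) // buffered so losers never block
        for _, url := range replicas {
            go func(url string) {
                req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
                if err != nil {
                    return
                }
                resp, err := http.DefaultClient.Do(req)
                if err != nil {
                    return // slow, paused, or dead replicas simply lose the race
                }
                defer resp.Body.Close()
                body, err := io.ReadAll(resp.Body)
                if err != nil {
                    return
                }
                results <- string(body)
            }(url)
        }

        select {
        case r := <-results:
            return r, nil
        case <-time.After(2 * time.Second):
            return "", fmt.Errorf("no replica answered in time")
        }
    }

    func main() {
        replicas := []string{"http://a:8080/work", "http://b:8080/work", "http://c:8080/work"}
        out, err := firstResult(context.Background(), replicas)
        fmt.Println(out, err)
    }

Note this only works cleanly if the replicas compute deterministic (or at least interchangeable) results, so that discarding all but the first answer is safe.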


That sounds like a possible solution, but a bit overcomplicated. Might as well stick with something like C++ rather than take on the additional complexity, in my opinion.


Sure. But you will still have uncertainty around memory allocation timing, even in C++. For example, tcmalloc or jemalloc may need to take a global lock to satisfy a heap allocation if the thread-local spans are exhausted.



