Hacker News

> It has a completely opposite pattern: the more you load it the faster it goes.

Not sure what you mean by that. The article is about the general theoretical properties of (unbounded) queues.

LMAX is a bounded queue, with the quirk that it will drop older messages in favour of newer ones; it assumes either that there is a side channel for recovery or that the consumer(s) can tolerate message drops.
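A minimal sketch of the drop-oldest policy described above (this is illustrative, not the actual LMAX API — the class and method names here are made up):

```java
import java.util.ArrayDeque;

// Hypothetical bounded buffer that evicts the oldest entry rather than
// blocking the producer when full. Synchronized for simplicity; a real
// implementation would use lock-free sequencing.
class DropOldestBuffer<T> {
    private final int capacity;
    private final ArrayDeque<T> deque = new ArrayDeque<>();

    DropOldestBuffer(int capacity) { this.capacity = capacity; }

    /** Adds item; returns the dropped (oldest) element, or null if none was evicted. */
    synchronized T offer(T item) {
        T dropped = (deque.size() == capacity) ? deque.pollFirst() : null;
        deque.addLast(item);
        return dropped;
    }

    /** Removes and returns the oldest element, or null if empty. */
    synchronized T poll() { return deque.pollFirst(); }
}
```

The producer never blocks: under overload the queue stays bounded and the consumer simply sees a gap, which is why a recovery side channel (or drop tolerance) is assumed.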

You're right, that was a rough take. My presentation of LMAX was simply meant to offer a different perspective on this space.

What I was trying to get at were the specific memory-access patterns, batching effects, and other nuances of the CPU's physical construction that modulate the actual performance of these structures. I think this quote better summarizes what I was trying to convey:

> When consumers are waiting on an advancing cursor sequence in the ring buffer an interesting opportunity arises that is not possible with queues. If the consumer finds the ring buffer cursor has advanced a number of steps since it last checked it can process up to that sequence without getting involved in the concurrency mechanisms. This results in the lagging consumer quickly regaining pace with the producers when the producers burst ahead thus balancing the system. This type of batching increases throughput while reducing and smoothing latency at the same time. Based on our observations, this effect results in a close to constant time for latency regardless of load, up until the memory sub-system is saturated, and then the profile is linear following Little’s Law. This is very different to the “J” curve effect on latency we have observed with queues as load increases.
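The batching effect in that quote can be sketched as follows. This is a toy single-producer/single-consumer ring, not the real Disruptor API (names like `RingSketch` and `drainBatch` are mine, and producer gating against wrapping is omitted): the point is that one read of the shared cursor covers every event published since the last check, so the lagging consumer pays the concurrency cost once per batch rather than once per item.

```java
import java.util.concurrent.atomic.AtomicLong;

// Toy single-producer/single-consumer ring illustrating batch consumption.
class RingSketch {
    static final int BUFFER_SIZE = 8;            // must be a power of two
    final long[] slots = new long[BUFFER_SIZE];
    final AtomicLong cursor = new AtomicLong(-1); // last published sequence
    long consumed = -1;                           // consumer-private sequence

    void publish(long value) {
        long seq = cursor.get() + 1;
        slots[(int) (seq & (BUFFER_SIZE - 1))] = value;
        cursor.set(seq);                          // make the slot visible
    }

    // One concurrent cursor read, then process every available event
    // without touching the concurrency mechanism again.
    long drainBatch() {
        long available = cursor.get();            // single synchronized read
        long sum = 0;
        while (consumed < available) {
            consumed++;
            sum += slots[(int) (consumed & (BUFFER_SIZE - 1))];
        }
        return sum;
    }
}
```

If the producer bursts ahead by N events, the consumer catches up in one `drainBatch` call, which is the "quickly regaining pace" behaviour the quote describes.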
