
I don’t get it. How is this advantageous, given that it’s limited to one machine? Why wouldn’t you just have one JVM running multiple threads? What is the point of having multiple JVM processes interacting through this ring? Can someone enlighten me?



A few potential reasons for this design coming to mind:

- Resource allocation; you might want to give a specific amount of memory, CPU, or network I/O to specific modules of a system, which is not really feasible within a single JVM (see the launcher sketch after this list)

- Resource isolation; e.g. a memory leak in one module of the system will affect just that specific JVM instance but not others (similar to why browsers run tabs in multiple processes);

- Upgrades; you can put a new version of one module of the system into place without impacting the others; while the JVM does support this via dynamic classloading (as used e.g. in OSGi or Layrry, https://github.com/moditect/layrry), this becomes complex quickly, can create classloader leaks, etc.

- Security; you might have (3rd-party) modules you want to keep isolated from the memory, data, config, etc. of other modules; in particular with the removal of the security manager, OS-enforced process isolation is the way to go
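
To make the resource-allocation point concrete, here is a minimal sketch of launching one module as its own JVM with its own heap cap and CPU count (the module jar and main class below are made-up names):

    import java.io.IOException;

    public class ModuleLauncher {
        public static void main(String[] args) throws IOException, InterruptedException {
            Process pricing = new ProcessBuilder(
                    "java",
                    "-Xmx256m",                    // this module gets its own, small heap
                    "-XX:ActiveProcessorCount=2",  // and its own view of available CPUs
                    "-cp", "pricing-module.jar",   // hypothetical module jar
                    "com.example.pricing.Main")    // hypothetical entry point
                    .inheritIO()
                    .start();

            // A crash, memory leak, or long GC pause in this process cannot
            // take the launching JVM down with it.
            int exit = pricing.waitFor();
            System.out.println("pricing module exited with " + exit);
        }
    }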


Also software design. You can split JVMs into those that have to follow strict constraints (e.g. no allocations) and those that follow more traditional Java patterns.


Yeah, I think the HFT guys use CPU pinning a lot: 1 process - 1 CPU, so you'd need multiple processes to take advantage of a multicore server.


Usually it is 1 thread - 1 CPU. There might be other reasons (address space separation has its own advantages - and disadvantages) to have distinct processes of course.


The JVM does garbage collection, which can stop all threads at safepoints while a collection occurs.

Those pauses can be enough to ruin your low-latency requirements in the high percentiles. A common strategy is to divide workloads between JVMs so that the latency-sensitive one meets the requirement.
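
As a starting point for seeing how much of that overhead a given JVM is paying, the standard management API exposes accumulated GC counts and times; a rough sketch (for per-pause percentiles you would look at GC logs instead):

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;

    public class GcStats {
        public static void main(String[] args) {
            // Allocate a bit so at least one collection is likely to have happened.
            for (int i = 0; i < 1_000_000; i++) {
                byte[] junk = new byte[1024];
            }
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                System.out.printf("%s: %d collections, %d ms total%n",
                        gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
            }
        }
    }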


CoralRing and CoralQueue (available on GitHub) are completely garbage-free. You can send billions of messages without ever creating garbage for the GC, so no GC overhead. This is paramount for real-time ultra-low-latency systems developed in Java. You can read more about it here => https://www.coralblocks.com/index.php/java-development-witho...
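
For those curious what typically sits underneath such a ring: the producer and consumer JVMs map the same file (e.g. under /dev/shm) and messages are written into pre-mapped, fixed-size slots, so nothing is allocated per message. A rough sketch of that technique follows; this is not the actual CoralRing API, and a real implementation also needs sequence publication and memory ordering, which is omitted here:

    import java.io.RandomAccessFile;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    public class SharedMemoryRingSketch {
        public static void main(String[] args) throws Exception {
            final int SLOT_SIZE = 64;
            final int SLOTS = 1024;
            try (RandomAccessFile file = new RandomAccessFile("/dev/shm/ring-demo", "rw");
                 FileChannel channel = file.getChannel()) {
                MappedByteBuffer ring =
                        channel.map(FileChannel.MapMode.READ_WRITE, 0, (long) SLOT_SIZE * SLOTS);
                long sequence = 42;                        // hypothetical message counter
                int slot = (int) (sequence % SLOTS);       // the ring wraps around
                ring.putLong(slot * SLOT_SIZE, sequence);  // write into a pre-mapped slot:
                                                           // no objects created per message
            }
        }
    }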


Interesting. But how do you ensure a worker that picks up a task does not pause on gc as well?


CoralRing does not produce garbage, but it cannot control what other parts of your application choose to do. It will hand your application a message without producing any garbage; if you then go ahead and produce garbage yourself, there is nothing CoralRing can do about that. Ultra-low-latency applications in Java are designed so that nothing in the critical path produces garbage.
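
To illustrate that discipline with a made-up handler (this is not CoralRing code), compare an allocating message handler with a garbage-free one that only reads primitives and reuses preallocated objects:

    import java.nio.ByteBuffer;

    public class PriceHandler {
        private final StringBuilder reusableText = new StringBuilder(64); // preallocated, reused
        private long lastPrice;

        // Allocates on every message: boxing and string concatenation create garbage.
        void onMessageAllocating(ByteBuffer msg) {
            Long price = msg.getLong(0);    // autoboxing -> a new Long each time
            String log = "price=" + price;  // a new String each time
        }

        // Garbage-free: only primitive reads and reused, preallocated objects.
        void onMessage(ByteBuffer msg) {
            lastPrice = msg.getLong(0);   // primitive read, no allocation
            reusableText.setLength(0);    // reuse the builder instead of creating strings
            for (int i = 8; i < 16; i++) {
                reusableText.append((char) msg.get(i));
            }
        }
    }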


Maybe they run a small heap with a zero-pause JVM like Zing, as pause-less GC generally has lower throughput than normal GC.


Java doesn't have real pause-less GC.


Well, "pause too short to matter" just doesn't have the same ring to it.


One millisecond is not a short pause.


Modern low-latency GCs have never reached 1ms pauses on any of the workloads I've put them through. Mind you, I don't GC terabytes of RAM, so who knows what happens there.


I can throw out some guesses: 1) apps deployed in separate Docker containers due to the organization's tech-team separation, 2) apps that require security/performance isolation among tenants, 3) an isolation layer around memory-leaky and bug-prone third-party library code.



