
The problem is that the async model is a form of cooperative multithreading, so if one computation runs for a long time without returning to the main event loop, it can increase the latency for responses to other events. E.g., if one HTTP request takes a long time to process, and many of the worker-pool OS threads are handling such a request, response time goes up for all the other requests. OS-level concurrency is preemptive (timesliced), so one busy thread doesn't block other requests, but of course with much higher overhead in other ways. Best practice is usually to keep event handlers on event-loop threads short and push heavy computations to other OS-level worker threads.
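A minimal sketch of that "keep handlers short, offload heavy work" pattern, written in Go since goroutines come up downthread. The handler only enqueues a job onto a small CPU-bound worker pool and waits for the result; cpuHeavy, the pool size, and the /compute route are made up for illustration:

    package main

    import (
        "fmt"
        "log"
        "net/http"
        "runtime"
    )

    // result of one CPU-bound job
    type heavyResult struct{ sum int }

    // jobs feeds a small pool of CPU-bound workers; each job carries a
    // reply channel so the handler can wait for its own result.
    var jobs = make(chan chan heavyResult, 128)

    // cpuHeavy is a stand-in for an expensive computation.
    func cpuHeavy() heavyResult {
        sum := 0
        for i := 0; i < 100_000_000; i++ {
            sum += i
        }
        return heavyResult{sum: sum}
    }

    func main() {
        // One worker per core: more would just contend for the CPU.
        for i := 0; i < runtime.NumCPU(); i++ {
            go func() {
                for reply := range jobs {
                    reply <- cpuHeavy()
                }
            }()
        }

        http.HandleFunc("/compute", func(w http.ResponseWriter, r *http.Request) {
            // The handler stays short: enqueue, wait, write the response.
            reply := make(chan heavyResult, 1)
            jobs <- reply
            res := <-reply
            fmt.Fprintf(w, "sum=%d\n", res.sum)
        })

        log.Fatal(http.ListenAndServe(":8080", nil))
    }

The bounded jobs channel also acts as backpressure: if the pool is saturated, new requests queue up instead of piling more CPU work onto the machine.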



Ah, that makes sense. I think another user pointed out that spawning n goroutines does not actually spawn n physical threads, but rather queues n tasks onto m threads in the pool, so if we exhaust the m threads, the remaining n-m tasks will be blocked.
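A quick sketch of that n-to-m multiplexing (the numbers here are made up; the exact thread count depends on GOMAXPROCS and what the runtime needs):

    package main

    import (
        "fmt"
        "runtime"
        "sync"
    )

    func main() {
        const n = 100_000 // far more goroutines than any machine has OS threads

        var wg sync.WaitGroup
        release := make(chan struct{})
        for i := 0; i < n; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                <-release // park here so all n goroutines exist at once
            }()
        }

        // All n goroutines exist, but the runtime multiplexes them onto a
        // small set of OS threads, roughly GOMAXPROCS of which run Go code
        // at any one time.
        fmt.Println("goroutines:", runtime.NumGoroutine())
        fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))

        close(release)
        wg.Wait()
    }

(One nuance: goroutines parked on channels or network I/O don't hold an OS thread at all; it's CPU-bound work and blocking syscalls that tie threads up.)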

Thanks for the explanation

I wonder at what point each trade-off makes sense (i.e., what counts as heavy computation vs. light computation; it's probably related to OS thread allocation time).



