
That isn't due to a missing timeout; it's due to not properly communicating aborted requests down the stack, which, admittedly, isn't always easy and which some clients/languages/etc. handle very badly. A hardcoded timeout, while a fine workaround in some applications, is not a good default and not the proper fix for that.
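
As a minimal sketch of what communicating the abort down the stack can look like, here's Python asyncio, where cancelling a task raises CancelledError at every pending await, so the abort travels down the stack on its own (`db_call` is just an illustrative stand-in):

    import asyncio

    async def db_call() -> None:
        # Stand-in for a slow query. If the caller is cancelled, this
        # await raises CancelledError here too, so the abort propagates
        # down the stack instead of the query running to completion.
        await asyncio.sleep(30)

    async def handler() -> None:
        try:
            await db_call()
        except asyncio.CancelledError:
            # Last chance to clean up (e.g. ask the server to kill the
            # query), then re-raise so cancellation keeps propagating.
            raise

    async def main() -> None:
        task = asyncio.create_task(handler())
        await asyncio.sleep(0.1)  # simulate the client aborting
        task.cancel()
        try:
            await task
        except asyncio.CancelledError:
            pass

    asyncio.run(main())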

Default timeouts in the database layers are hidden time bombs: they turn operations that legitimately take a bit longer than whatever value the library author picked (and that you didn't even know existed) into failures that get retried over and over, causing even more load than just doing the thing once. Don't get me wrong, there are lots of uses for strict timeouts and being able to set them is very important, but as a default, no thanks.
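
The opt-in version is cheap, for what it's worth. A sketch with PostgreSQL via psycopg2 (the DSN and query are made up), setting the timeout explicitly for a session whose latency budget is actually known:

    import psycopg2  # assumption: PostgreSQL accessed via psycopg2

    conn = psycopg2.connect("dbname=app")  # hypothetical DSN
    with conn.cursor() as cur:
        # An explicit, per-session timeout chosen for this workload,
        # rather than a hidden library default.
        cur.execute("SET statement_timeout = '5s'")
        cur.execute("SELECT * FROM heavy_report")  # hypothetical query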




You sometimes won't know a TCP connection has been closed unless you try to write to it (on Linux you can test for it with poll/epoll via POLLRDHUP), so if you are using blocking I/O, you won't know that the HTTP client went away long ago.
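
A minimal sketch of that check in Python (Linux-only, since POLLRDHUP is a Linux extension):

    import select
    import socket

    def peer_closed(sock: socket.socket) -> bool:
        # Non-blocking check: POLLRDHUP reports that the peer closed or
        # shut down its writing half, even before a write would fail.
        poller = select.poll()
        poller.register(sock,
                        select.POLLRDHUP | select.POLLHUP | select.POLLERR)
        return bool(poller.poll(0))  # 0 ms timeout: just peek, don't block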


I highly advise turning on TCP keepalive to detect dropped connections.
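
For example, in Python (the TCP_KEEP* option names are Linux-specific, and the numbers are illustrative, not recommendations):

    import socket

    def enable_keepalive(sock: socket.socket, idle: int = 60,
                         interval: int = 10, count: int = 3) -> None:
        # A dead peer is detected after roughly idle + interval * count
        # seconds of silence.
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, count)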


Sure. But the parent poster's point was that you still won't observe the error unless you interact with the socket again. If you have a blocking thread-per-request model and your thread is blocked on the database I/O, it won't look at the original request (and its source socket) during that time.

There is no great OS-level solution for handling this. You pretty much need to run async I/O at the lowest layer, and at least be able to receive the read-readiness and associated close/reset notification, which you can then forward to the application stack (maybe in the form of a `CancellationToken`).
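
A rough sketch of that shape in Python asyncio (`slow_db_query` is a made-up stand-in; task cancellation plays the role of the `CancellationToken`): keep a read pending on the client socket while the work runs, and treat EOF as the close notification to forward:

    import asyncio

    async def slow_db_query() -> bytes:
        await asyncio.sleep(30)  # hypothetical database call
        return b"result\n"

    async def handle_request(reader: asyncio.StreamReader,
                             writer: asyncio.StreamWriter) -> None:
        work = asyncio.create_task(slow_db_query())
        # A pending read on the client socket completes with b"" (EOF)
        # if the client goes away while we are still working.
        gone = asyncio.create_task(reader.read(1))
        done, _ = await asyncio.wait({work, gone},
                                     return_when=asyncio.FIRST_COMPLETED)
        if gone in done and gone.result() == b"":
            work.cancel()  # forward the close notification down the stack
        else:
            gone.cancel()
            writer.write(await work)
            await writer.drain()
        writer.close()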



