Tue Oct 15 07:06:06 EDT 2019
avoid using timeouts
This has always bugged me. It is not possible to properly pick
timeouts in all but the simplest cases. Either they are too short --
you forgot the one case that can legitimately take a while but is
perfectly ok -- or they are too long, so errors surface far too
slowly and mitigation is badly delayed.
Is there a way to compose the generation of timeouts such that the
numbers get filled in at the places that are reasonably knowledgeable,
and e.g. add up correctly through composition?
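One way to sketch this composition (my own illustration, not from any
particular library, though Go's context deadlines work similarly): pass
a single absolute deadline down the call chain instead of stacking
independent per-call timeouts, and convert it to a concrete timeout
only at the leaves. The `Deadline` and `call_with_deadline` names here
are hypothetical.

```python
import time

class Deadline:
    """An absolute point in time by which work must finish.

    Callers share one deadline down the call chain, so time budgets
    compose correctly instead of each layer guessing its own number.
    """
    def __init__(self, budget_seconds):
        self.expires_at = time.monotonic() + budget_seconds

    def remaining(self):
        # Seconds left in the budget, clamped at zero.
        return max(0.0, self.expires_at - time.monotonic())

    def expired(self):
        return self.remaining() == 0.0

def call_with_deadline(op, deadline):
    # Only at the lowest level, where network delay or CPU time gives
    # a clear upper bound, is the shared deadline turned into a
    # concrete timeout argument.
    if deadline.expired():
        raise TimeoutError("budget exhausted before the call started")
    return op(timeout=deadline.remaining())
```

A caller creates one `Deadline(total_budget)` at the top and threads
it through every layer; each leaf call gets whatever budget remains.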
Or better, replace every timeout with a better abstraction: error out
when the other party disappears. This can be done by adding keepalive
to a connection, or something like Erlang monitors.
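The keepalive idea can be sketched as a small liveness monitor: the
peer's traffic refreshes a timestamp, and a watcher declares the peer
dead only after several missed heartbeat intervals. This is a minimal
illustration of the concept, not a production implementation; the
`PeerMonitor` class and its parameters are assumptions of mine.

```python
import threading
import time

class PeerMonitor:
    """Error out when the other party disappears, instead of guessing
    a per-request timeout.

    interval: expected heartbeat period in seconds.
    missed_limit: how many consecutive silent intervals mean "dead".
    on_death: callback invoked once when the peer is declared gone.
    """
    def __init__(self, interval, missed_limit, on_death):
        self.interval = interval
        self.missed_limit = missed_limit
        self.on_death = on_death
        self._last_seen = time.monotonic()
        self._lock = threading.Lock()
        self._stop = threading.Event()

    def heartbeat(self):
        # Call this whenever any traffic arrives from the peer.
        with self._lock:
            self._last_seen = time.monotonic()

    def start(self):
        threading.Thread(target=self._watch, daemon=True).start()

    def stop(self):
        self._stop.set()

    def _watch(self):
        # Wake up once per interval and check how long the peer has
        # been silent; fire on_death after too many missed beats.
        while not self._stop.wait(self.interval):
            with self._lock:
                silence = time.monotonic() - self._last_seen
            if silence > self.interval * self.missed_limit:
                self.on_death()
                return
```

The only number left to pick is the heartbeat interval, which has a
clear local meaning, unlike an end-to-end request timeout.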
So don't be lazy: add monitoring and error feedback. Do not rely on
timeouts as the error mechanism unless there is a clear upper bound,
e.g. at the lowest levels where CPU time and network delay are not
abstracted away.