Many application web servers ship with a low default keep-alive (idle TCP connection) timeout of 5 seconds or less, for example [NodeJS](https://nodejs.org/api/http.html#http_server_keepalivetimeout) or [Uvicorn](https://www.uvicorn.org/settings/#timeouts)/FastAPI. That is a reasonable value when the server talks to clients directly and requests are handled quickly. However, if you run your web server behind a load balancer (AWS ELB/ALB, for example), it makes sense to [bump the web server's keep-alive timeout](https://docs.gunicorn.org/en/stable/settings.html#keepalive) significantly above the load balancer's timeout. Load balancers typically keep idle connections open much longer, so when the load balancer tries to reuse a connection at the same moment the server decides to drop it, [clients see intermittent 502 errors with no obvious cause on the server side](https://adamcrowder.net/posts/node-express-api-and-aws-alb-502/). For example, AWS load balancers use a [default idle timeout of 60 seconds](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/application-load-balancers.html#load-balancer-attributes) - well above the typical web server's default. [Google Cloud](https://cloud.google.com/load-balancing/docs/https#timeouts_and_retries) and [Azure](https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-tcp-idle-timeout?tabs=tcp-reset-idle-portal) load balancers behave similarly. Ideally, you would configure the server behind a load balancer to never time out connections at all and instead [rely on the load balancer to shut down the connection](https://docs.oracle.com/en-us/iaas/Content/Balance/Reference/connectionreuse.htm#KeepAliveSettings). But most application web servers don't seem to allow you to disable their TCP timeout "counter of doom."