(no title)
napkin | 1 year ago
Try changing Linux's default congestion control (net.ipv4.tcp_congestion_control) on your Jellyfin & reverse proxy servers to 'bbr'. I don't understand the details (there might be negative consequences [1], and there might be better congestion algos), but for me this completely solved the issue. Before, connections would stall out to under 10% of line rate, sometimes even 1%, even in quiet/optimal network conditions.
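For anyone wanting to try it, roughly what I mean (sysctl names are the standard Linux ones; the modprobe line is only needed on kernels that build tcp_bbr as a module, and the 'fq' qdisc line is the commonly recommended pairing, not something specific to my setup):

```shell
# See which congestion control algorithms the kernel currently offers
sysctl net.ipv4.tcp_available_congestion_control

# Many distros ship BBR as a module; load it if it isn't built in
modprobe tcp_bbr

# Switch the default for new connections (runtime only, as root)
sysctl -w net.ipv4.tcp_congestion_control=bbr

# Persist across reboots
cat <<'EOF' > /etc/sysctl.d/99-bbr.conf
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr
EOF
sysctl --system
```

Existing connections keep whatever algorithm they started with; only new connections pick up the change.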
Also, Caddy enables HTTP/3 by default. I force it to HTTP/2.
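If it helps anyone, I believe Caddy v2 (2.6 or later, if I remember right) lets you restrict the advertised protocols via the global `servers` option; listing only h1 and h2 keeps QUIC/HTTP-3 out. The hostname and upstream port here are just illustrative (8096 is Jellyfin's default HTTP port):

```
{
	servers {
		protocols h1 h2
	}
}

media.example.com {
	reverse_proxy localhost:8096
}
```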
I should probably investigate using later versions of bbr, though.
napkin | 1 year ago
I guess what started leading me down the right path was a more methodical approach to benchmarking the different legs of the route with iperf: client <-> reverse proxy, and reverse proxy <-> Jellyfin server. I tested those legs separately, with and without Wireguard, over both TCP and UDP. The results showed that the problem exhibited at the host level (nothing to do with Jellyfin or the reverse proxy), and only for high-latency TCP. The discrepancies between TCP and UDP were weird enough that I started researching Linux sysctl networking tunables.
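Roughly the test matrix I mean, assuming iperf3 on both ends (the hostname is a placeholder; run each pair once over the plain network and again over the Wireguard addresses, for each leg):

```shell
# On the far end of whichever leg you're testing:
iperf3 -s

# TCP throughput for that leg (client -> proxy, then proxy -> jellyfin)
iperf3 -c proxy.example.net -t 30

# Same leg in the reverse direction, without swapping server/client roles
iperf3 -c proxy.example.net -t 30 -R

# UDP at a fixed offered rate, to compare against the TCP numbers
iperf3 -c proxy.example.net -u -b 500M -t 30
```

If UDP comfortably fills the pipe while TCP on the same leg collapses, that points at congestion control / window behavior on the hosts rather than at raw link capacity.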
There might be something smart to say about the general challenges of achieving stable high throughput over high-latency TCP connections, but I don't have the knowledge to articulate it.
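I'm not an expert either, but the usual back-of-the-napkin math: TCP throughput is capped at window/RTT (the bandwidth-delay product), and for loss-based algorithms like Reno/CUBIC the Mathis approximation, (MSS/RTT) * (1.22/sqrt(p)), shows why even tiny loss rates crater throughput once RTT is high, which is the case BBR is designed to handle better. A quick sketch (the window size, RTT, and loss rate are made-up illustrative numbers, not measurements from my setup):

```python
import math

def window_limit_mbps(window_bytes: float, rtt_s: float) -> float:
    """Max TCP throughput when limited purely by window size: window / RTT."""
    return window_bytes * 8 / rtt_s / 1e6

def mathis_limit_mbps(mss_bytes: float, rtt_s: float, loss_rate: float) -> float:
    """Mathis et al. approximation for loss-based congestion control:
    throughput ~= (MSS / RTT) * (sqrt(3/2) / sqrt(p))."""
    return (mss_bytes / rtt_s) * (math.sqrt(1.5) / math.sqrt(loss_rate)) * 8 / 1e6

# A 208 KiB effective window at 100 ms RTT caps you around ~17 Mbit/s,
# no matter how fat the pipe is:
print(window_limit_mbps(208 * 1024, 0.100))

# 1460-byte segments, 100 ms RTT, 0.01% loss: loss-based TCP settles
# around ~14 Mbit/s even on a gigabit link:
print(mathis_limit_mbps(1460, 0.100, 1e-4))
```

The same loss rate at 5 ms RTT would allow roughly 20x the throughput, which is why these stalls tend to show up only on the high-latency legs.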