item 40264726

psYchotic | 1 year ago

I'm considering moving reverse proxying to Traefik for my self-hosted stuff. Unlike the article's author, I'm running containerized workloads with Docker Compose, and currently using Caddy with the excellent caddy-docker-proxy plugin. What that gets me, currently:

- Reverse proxying, with Docker labels for configuration. New workloads are picked up automatically (but I do need to attach workloads to Caddy's network bridge).

- TLS certificates

- Automatic DNS configuration (using yet another plugin, caddy-dynamicdns), so I don't have to worry too much about losing access to my stuff if my ISP decides to hand me a different IP address (which hasn't happened yet)
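For context, the label-driven config with caddy-docker-proxy looks roughly like this (a minimal sketch; the service name, domain, and network name are placeholders):

```yaml
# Hypothetical docker-compose.yml snippet for caddy-docker-proxy.
services:
  whoami:
    image: traefik/whoami
    networks:
      - caddy   # must be attached to Caddy's network bridge
    labels:
      caddy: whoami.example.com
      caddy.reverse_proxy: "{{upstreams 80}}"

networks:
  caddy:
    external: true
```

caddy-docker-proxy watches Docker for these labels and regenerates the Caddy config, so new workloads are picked up automatically.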

There are a few things I'm currently not entirely happy about in my setup:

- Any new/restarting workload makes Caddy restart entirely, resulting in temporary loss of access to my stuff. Caddy doesn't hand off existing connections to a new instance, unfortunately.

- Using wildcard certs isn't as simple as it could/should be. Since I don't want every workload advertised to the world through Certificate Transparency logs, I use wildcard certs, which means I currently can't use the simple Caddyfile syntax I otherwise would with a cert per hostname. I know this is being worked on in Caddy, but still.
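For reference, a wildcard setup in a Caddyfile currently looks something like this (a sketch; the domain and DNS provider are placeholders, and the `dns` directive requires a DNS provider plugin):

```
*.example.com {
	tls {
		dns cloudflare {env.CF_API_TOKEN}
	}

	@app host app.example.com
	handle @app {
		reverse_proxy app:8080
	}

	# Unknown subdomains get dropped instead of leaking anything
	handle {
		abort
	}
}
```

Compare that to the one-liner-per-hostname style you get when each site has its own cert: the matchers and `handle` blocks are the extra ceremony being complained about here.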

Anyway, I've used Traefik in k8s environments before, and it's been fairly pleasant, so I think I'll give it a go for my personal stuff too!

PS: Don't let this comment discourage you trying Caddy, it's actually really good!

eropple | 1 year ago

I use Caddy for single-purpose hosts and the like, but I'd 100% throw Traefik at the problems you're describing, and I do: it's my k8s cluster ingress, and it runs in my dev environments to enable using `localtest.me` with hostnames.

It's worth kicking the tires on. Both are great at different things.

sureglymop | 1 year ago

I use (rootless) Docker Compose + Traefik, precisely because wildcard certs were really painless. I run my own DNS server and use RFC 2136 dynamic DNS for the Let's Encrypt DNS challenge, so no plugins are needed. I have basically one Ansible playbook to set all this up on a VM, including templating out the Compose files, and another playbook that can remove everything from the server again (besides data/mounts). For backups I use restic with a custom script that can back up files, different DBs, etc. to multiple locations.
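A rough sketch of the Traefik static config for that setup (the resolver name, email, and storage path are placeholders; the RFC 2136 credentials come from environment variables):

```yaml
# traefik.yml (static config) -- hypothetical sketch
certificatesResolvers:
  letsencrypt:
    acme:
      email: admin@example.com
      storage: /letsencrypt/acme.json
      dnsChallenge:
        provider: rfc2136
        # Credentials are supplied via env vars, e.g.:
        #   RFC2136_NAMESERVER, RFC2136_TSIG_KEY,
        #   RFC2136_TSIG_SECRET, RFC2136_TSIG_ALGORITHM
```

With this in place, a router can request a wildcard cert for `*.example.com` and Traefik completes the DNS-01 challenge against your own nameserver.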

In the past I deployed k3s, but I realized that was too much and too complicated for my self-hosted stuff. I just want to deploy things quickly and not have to handle the certs myself.

mynegation | 1 year ago

I have not used Caddy. I use Traefik, and it picks up Docker labels for configuration and handles TLS certificates with automatic renewal. Not sure about dynamic DNS; I don't use it from Traefik. Adding and removing containers does not need a restart, AFAIR.
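The label-based equivalent on the Traefik side looks roughly like this (a sketch; the router name, domain, and port are placeholders — Traefik's Docker provider picks this up live, with no proxy restart):

```yaml
# Hypothetical docker-compose.yml snippet for Traefik's Docker provider.
services:
  whoami:
    image: traefik/whoami
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.whoami.rule=Host(`whoami.example.com`)"
      - "traefik.http.services.whoami.loadbalancer.server.port=80"
```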

Cyykratahk | 1 year ago

I've used caddy-docker-proxy in production and it doesn't cause Caddy to drop connections when loading a new config.

I just tested it locally to check and it works fine.

psYchotic | 1 year ago

Hmm, I'll have to take a better look at my setup then, because it's a daily occurrence for me. Either I'm "holding it wrong" (which is admittedly possible, perhaps even likely given the comments here), or I have a ticket to open soon-ish.

remram | 1 year ago

Those are giant limitations. This is the first I've heard of a reverse proxy that has to restart and drop connections to update its configuration; a graceful reload is usually the first, most fundamental part of any such server's design.

mholt | 1 year ago

That is absolutely not the case. Caddy config reloads are graceful and lightweight. I have no idea why this person is stopping their server instead of reloading the config.

IggleSniggle | 1 year ago

Caddy doesn't have to restart; I think it's related to the specifics of their setup. The simple/easy path that gets a lot of people into Caddy is: run caddy, job done. The next level is: give Caddy a simple configuration file and reload it with `caddy reload --config /etc/caddy/Caddyfile`. After that, you use the REST API to make changes to the server while it is running, which uses a JSON configuration definition instead of a Caddyfile, so it ends up being a jump for users.
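Concretely, the two zero-downtime tiers look something like this (the paths are examples; `:2019` is Caddy's default admin endpoint):

```
# Graceful reload of a Caddyfile -- no dropped connections
caddy reload --config /etc/caddy/Caddyfile

# Push a full JSON config to the admin API
curl -X POST http://localhost:2019/load \
  -H "Content-Type: application/json" \
  -d @config.json

# Inspect the currently running config
curl http://localhost:2019/config/
```

Both paths apply the new config to the running server, which is why a full restart (and the dropped connections described upthread) shouldn't be necessary.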