item 44554488


seriocomic | 7 months ago

Love this - I tried it. The problem, as I see it, is that these still require hosting: self-hosting a script that monitors internal/homelab things means the script itself also needs its own monitoring.

Short of paying for a service (which somewhat goes against the grain of hosting all your own stuff), the closest I can come up with is relying on a service outside your network that has access to it (via a tunnel/VPN).

Given that a lot of my own networking set-up (DNS/domains/tunnels, etc.) is already managed via Cloudflare, I'm thinking of using some compute at that layer to provide a monitoring service. Probably something to throw next at my new LLM developer...
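A scheduled Worker is probably enough for a first pass. A minimal sketch of the idea - the target URLs are hypothetical, and it assumes the internal services are already reachable from Cloudflare's edge through a Cloudflare Tunnel:

```javascript
// Hypothetical probe targets, exposed via a Cloudflare Tunnel.
const TARGETS = [
  "https://homelab.example.com/healthz",
  "https://nas.example.com/healthz",
];

// Pure check logic, kept separate from the Worker runtime so it can
// be exercised with any fetch-compatible function.
async function checkTargets(fetchFn, targets) {
  const results = {};
  for (const url of targets) {
    try {
      const res = await fetchFn(url, { method: "HEAD" });
      results[url] = res.ok ? "up" : "down";
    } catch (_err) {
      results[url] = "down"; // network error, timeout, DNS failure, etc.
    }
  }
  return results;
}

// Worker entry point: run the checks on a Cron Trigger and report
// failures however you like (webhook, email, KV, etc.).
// In a real Worker this object would be the `export default`.
const worker = {
  async scheduled(_event, _env, _ctx) {
    const results = await checkTargets(fetch, TARGETS);
    console.log(JSON.stringify(results));
  },
};
```

The point of splitting out `checkTargets` is that the alerting/reporting half can change freely while the probe logic stays testable outside Cloudflare's runtime.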


hammyhavoc | 7 months ago

UptimeFlare looks promising—runs in a Cloudflare Worker: https://github.com/lyc8503/UptimeFlare

If anybody wants to be a clever clogs, combining this with Uptime Kuma would be genius. What I want is redundancy: if a target can't be reached from one monitor, check it from the other; likewise, if one monitoring service goes down, keep monitoring via the other and sync up the histories once both are back online.

This "local or cloud" false dichotomy makes no sense to me—a hybrid approach would be brilliant.
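The sync-up step could look something like this - assuming each monitor keeps its history as an array of `{ ts, status }` samples, which is a made-up shape for illustration, not Uptime Kuma's or UptimeFlare's actual schema:

```javascript
// Merge two monitors' uptime histories after a partition, assuming
// each history is an array of { ts, status } samples (hypothetical
// shape, not either tool's real data model).
function mergeHistories(a, b) {
  const byTs = new Map();
  for (const sample of [...a, ...b]) {
    const existing = byTs.get(sample.ts);
    // If both monitors sampled the same instant and disagree, prefer
    // "down": a failure seen by either monitor is exactly the signal
    // the redundancy exists to preserve.
    if (!existing || sample.status === "down") {
      byTs.set(sample.ts, sample);
    }
  }
  return [...byTs.values()].sort((x, y) => x.ts - y.ts);
}
```

Dedup-by-timestamp keeps the merge idempotent, so either side can run it after every reconnect without double-counting samples.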

If anyone manages this, email me: me@hammyhavoc.com. I would love to hear about it.