ElectricSpoon | 2 years ago
And I would guess sysadmins also don't like their logging facilities filling the disk just because a service is stuck in a start loop. There are many reasons to expect that a service which has failed to start several times in a row will keep failing; misconfiguration is probably the most frequent one.
twic | 2 years ago
I'm sure there are exceptions to this. For those, set Restart=always. But it's an absolutely terrible default.
growse | 2 years ago
Starting up, noticing that the environment doesn't have what you need yet and dying quickly appears to be The Kubernetes Way. A scheduler will eventually restart you and you'll have another go. Repeat until everything is up.
The kubelet operates the same way, AFAIR: on a node that hasn't joined a cluster yet, it sits in a fail/restart loop until it's provisioned.
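The fail-fast pattern described above can be sketched as a startup check (a minimal sketch; `DATABASE_URL` and the daemon itself are hypothetical, not from any real manifest):

```python
import os
import sys

def dependency_ready(name, env=None):
    """Return True if the required setting is present and non-empty."""
    env = os.environ if env is None else env
    return bool(env.get(name))

if __name__ == "__main__":
    # DATABASE_URL is a hypothetical dependency: if it isn't set yet,
    # exit non-zero and let the supervisor (kubelet, systemd) retry later.
    if not dependency_ready("DATABASE_URL"):
        sys.stderr.write("DATABASE_URL not set yet; exiting so we get retried\n")
        sys.exit(1)
    print("dependencies present, starting for real")
```

The point is that the process does no waiting or polling of its own; the scheduler's restart loop is the retry mechanism.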
deathanatos | 2 years ago
(You can guess how we noticed the problem…)
Also logrotate. (And bounded on size.)
melolife | 2 years ago
Before=systemd-user-sessions.service
This means that as long as systemd is trying to (re)start the service, nobody can log in. Which is a problem with infinite restarts.
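One way to bound such a loop is to cap the restart rate in the unit itself (a sketch; assumes systemd new enough to have the `StartLimit*` names in `[Unit]`, and the service name is hypothetical):

```ini
[Unit]
Before=systemd-user-sessions.service
# Give up after 5 failed starts within 30 s instead of looping forever,
# so logins aren't blocked indefinitely by a broken service.
StartLimitIntervalSec=30
StartLimitBurst=5

[Service]
ExecStart=/usr/local/bin/my-daemon
Restart=on-failure
RestartSec=2
```

Once the limit trips, the unit enters the failed state and stops retrying until `systemctl reset-failed` (or a restart of the unit) clears it.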
It's still pretty easy to accidentally set up an infinite restart loop with the default settings if your service takes more than ~2 s to crash: the rate limit only trips when StartLimitBurst (default 5) starts happen within StartLimitIntervalSec (default 10 s), so a slow-crashing service never accumulates enough starts in the window.
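That arithmetic can be checked with a small model of the rate limiter (a sketch, assuming the defaults StartLimitBurst=5, StartLimitIntervalSec=10 s, RestartSec=100 ms; it approximates systemd's check with a sliding window rather than reproducing it exactly):

```python
def hits_start_limit(crash_seconds, burst=5, interval=10.0, restart_sec=0.1):
    """Return True if the unit would eventually be rate-limited
    (restarts stop), False if it restarts indefinitely."""
    starts = []
    t = 0.0
    for _ in range(1000):  # simulate up to 1000 restart attempts
        starts.append(t)
        # count start attempts inside the sliding interval window
        recent = [s for s in starts if t - s <= interval]
        if len(recent) >= burst:
            return True
        t += crash_seconds + restart_sec  # run, crash, wait, restart

    return False

print(hits_start_limit(1.0))  # fast crashes trip the limit -> True
print(hits_start_limit(2.5))  # slow crashes never do -> False
```

With a 2.5 s crash each cycle takes 2.6 s, so at most four starts ever fit in the 10 s window and the burst of five is never reached.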