top | item 27649342

Avoiding Complexity with Systemd

291 points | irl_ | 4 years ago | mgdm.net

347 comments

WesolyKubeczek|4 years ago

For me, systemd is the best thing since sliced bread.

As a programmer, I now don't need to care about dropping privileges, managing logging, daemonization (the moment I need to do the double-fork dance again, chairs will be flying, I swear), dropping into a chroot, or making half-arsed "is it up and running yet?" guesses from a convoluted mess of shell code that looks like a bunch of hair stuck down a drain for a month.

I just write an easy-to-debug program which I can launch from command line and see it run, and when I'm satisfied, a systemd unit from a cookie-cutter template is going to make it run. Service dependencies are now a breeze, too.

If I need to limit resources, I can just declare the limits in the unit. If I want a custom networking/mount namespace, it's taken care of.
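A minimal sketch of such a cookie-cutter unit, showing the declarative limits and sandboxing described above (the service name and paths are hypothetical):

```ini
# /etc/systemd/system/myapp.service
[Unit]
Description=My application
After=network.target

[Service]
ExecStart=/usr/local/bin/myapp
Restart=on-failure
# privilege dropping and sandboxing, no code needed in the app itself
DynamicUser=yes
ProtectSystem=strict
PrivateTmp=yes
# declarative resource limits (cgroups under the hood)
MemoryMax=512M
CPUQuota=50%
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable --now myapp.service`; no init script, no PID file, no double fork.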

kevincox|4 years ago

I agree. It puts the system administrator in control of a lot of these things. Sometimes that can be annoying (the developer knows which system calls they need) but often it is a big benefit. I think socket-passing especially is a huge win: it shifts a great deal of complexity out of each application, and the administrator can configure the sockets however they want without each application needing to support every feature independently. Furthermore, it removes one of the most common reasons applications need to be started as root.

I wrote about this previously here: https://kevincox.ca/2021/04/15/my-ideal-service/#socket-pass...
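As a sketch of the socket-passing setup described above (unit names hypothetical): the administrator declares the listening socket in a .socket unit, and systemd hands the already-bound fd to the service, so the service never needs root to bind a low port:

```ini
# myapp.socket -- owned and configured by the administrator
[Socket]
ListenStream=443

[Install]
WantedBy=sockets.target

# myapp.service -- matched to the .socket unit by name;
# receives the bound, listening socket as fd 3 at startup
[Service]
ExecStart=/usr/local/bin/myapp
User=myapp
```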

bcrl|4 years ago

It's great... until something unexpected happens. Like your NIC doesn't have link on the ethernet cable and systemd waits for minutes without allowing you to abort waiting on the network because other units depend on it.

Or how about adding new buggy DNS code that doesn't work in common scenarios? Oh, sorry, here's another CVE because we didn't create enough test cases for the corner cases that are actually important.

Or oops, "nobody uses ntsysv", right? Or "We don't need to implement chkconfig even though we broke it".

systemd is a monolithic beast that is absorbing everything else in the system without considering that some of its decisions should be able to be disabled, and a lot of the design decisions are half baked. I don't believe in the philosophy of design its author has embraced. Progress is good, but please, stop breaking shit that has worked for decades. Anyone can write new code that partially implements a feature, but it takes real effort to responsibly migrate users from tools that worked to your new shiny half assed kitchen sink.

zh3|4 years ago

Wait until it automagically fails.

It may be great for less-skilled people, but for anyone running anything where it's too critical to outsource support it's then necessary to have a systemd expert inhouse (and such a person has proven extremely hard to find).

spinax|4 years ago

As a systems guy with a focus more on ops, I agree. It's not all roses - journald/journalctl and binary logging can go die in a pit of fire, for example - however, setting LimitNOFILE= in a unit is just really, really nice (as are CPU limits and all sorts of other cgroup/namespace needs). But let me just mention again that journald/journalctl can go die in a pit of fire - if it wasn't for everyone adding rsyslog to create regular text files we would be in a world of hurt. In return, though, we get almost complete, painless cgroup-level handling right in the unit file with a simple key=value structure (so really, you don't have to know anything at all about cgroups or namespaces to be very effective). Most options have a doc for them - it's usually easier for me to find an obscure systemd setting with a nice blurb about what it does (in non-programmer speak) than it is to dig up an obscure sysctl setting, for example.

I can/could do without timesyncd and resolved (it's easy - just use chrony, e.g.) but I like udevd now being part of systemd. It would be nice to not write /etc/udev/rules.d/ and instead have a foo.udev unit type; perhaps that is in our future (we do have .device units, but it's not the same - yet). In this ballpark I think it's more on each distro picking and choosing - Ubuntu, for example, drank the kool-aid much deeper than RHEL; RHEL uses chrony out of the box, not timesyncd. However udevd and logind seem to be common across all distros now, and as another user commented, the KillUserProcesses=yes setting in logind is just horrible to have as a default. The whole "homed" thing makes me sad that it's even being coded, I hope nobody adopts that (I dislike it for the same reason I dislike automount); someone out there wants it though.

Then there's the ability to dynamically edit a unit (systemctl edit) and to dynamically alter the running service constraints (systemctl set-property), and all PID-file needs are handled in /run (getting rid of the nasty SysV stale-PID-after-unexpected-crash problem which many scripts failed to handle properly). Users having the ability to use their own private init items (systemctl --user) is great - timers, socket activation, custom login units, all very well extended down into the user's control to leverage. I'm sort of 50/50 on cron vs. timers, that's more of a case-by-case decision (example: tossing a https://healthchecks.io "&& curl ..." is just a lot quicker and easier in cron, but running a dyndns script on my laptop with a timer is nicer).
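The user-level timer mentioned above might look like this (unit names and script path are hypothetical):

```ini
# ~/.config/systemd/user/dyndns.timer
[Unit]
Description=Refresh dynamic DNS

[Timer]
OnCalendar=*:0/15
Persistent=true

[Install]
WantedBy=timers.target

# ~/.config/systemd/user/dyndns.service
[Service]
Type=oneshot
ExecStart=%h/bin/update-dyndns.sh
```

Activated with `systemctl --user enable --now dyndns.timer`, no root and no crontab entry required.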

Touching on systemctl edit, it's really easy now to show folks (think a DBA team who only has fundamental ops skills) how to quickly chain their After= and Before= needs for start/stop of their (whatever) without having to go down a rabbit hole - it's simple to use, and the words and design are accessible and familiar, though the method by which it works is a little obtuse (it's rooted in understanding the "dot-d" sub-include design pattern). On RHEL at least it uses nano as the default editor, annoying to me but good for casual non-vim users and easy enough to override using $EDITOR.
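For illustration, `systemctl edit myapp.service` (service and dependency names hypothetical) saves whatever you type as a drop-in that overlays the packaged unit via the "dot-d" pattern:

```ini
# written to /etc/systemd/system/myapp.service.d/override.conf
[Unit]
# start only after the database is up, and stop before it goes down
After=postgresql.service
Wants=postgresql.service
```

The packaged unit file is never touched, so upgrades don't clobber the local ordering rules.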

I used SysVinit for all the same years as everyone else (Solaris to Debian to Red Hat, ops touches it all) and wrote my fair share of complex init scripts to start DB2, Oracle, java appservers (anyone remember ATG Dynamo?); systemd handles natively what 75% of that work was/is (managing PID files, watching/restarting failures, implementing namespaces/cgroups, handling dependency chains, etc.); for those complex scenarios (looking at you, Tomcat) you can still just have a unit launch a very complex shellscript "like in the old days". I haven't looked in a while, but last I knew, in RHEL7 Red Hat did exactly that with Tomcat - just had the systemd unit launch a script.

It is, however, a real bear to debug sometimes - it's far easier to "bash -x /etc/init.d/..." and figure out what in the world is going wrong than it is to debug systemd unit failures. But the same holds true for trying to debug DBus (if you've never tried / had to, it's not fun at all without deep dbus knowledge). I would like to see the future add more ops-oriented debugging methodology - if you've ever used "pcs" (the command-line tooling for Pacemaker offered by RHEL), we could really use a "systemctl debug-start" type of command-line interface offering the same experience as the "bash -x" days of old. There are debug settings, they're just not ergonomically dialed in for the ops user, IMHO - a systemctl debug-start would save people a lot of headaches.

strictfp|4 years ago

That's true, but you can get all that without taking over the entire system. Upstart was a more lightweight contender to systemd which would give you all of that but none of the "Enterprise Linux Userspace Daemon" crap.

einpoklum|4 years ago

1. This can, and should, all be done without systemd, and not with idiosyncratic shell scripts and guesses.

2. Some of the systemd logging is binary, so good luck with that if there's a problem.

3. Have you tried non-systemd init systems other than sysvinit?

4. Yes, it is convenient when everything below your development is centralized by a single entity. It can easily provide a consistently useful underpinning. But there's a price - overly strong coupling of the init system, the kernel and part of the user-space; centralized control of, well, almost all of how things work on the system; and stagnation of the ecosystem due to there being only one game in town.

tsujp|4 years ago

The choice isn't systemd or roll-your-own-init-system. There are alternatives like runit, openrc etc.

You can do the same painless setup (arguably even easier) with runit as the base template requirement is literally just

    #!/bin/sh
    exec the_executable
Granted, logs in runit are optional but there are problems with default logging too, e.g. Docker will keep filling logs until the disk is full unless you explicitly tell it not to in either its configuration or your own custom log rotation rules. Neither of which are default.
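The Docker log-rotation settings the comment alludes to live in the daemon configuration; a sketch (the sizes are arbitrary examples, adjust to taste):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```

Placed in /etc/docker/daemon.json, this caps each container at three 10 MB log files instead of filling the disk.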

Dinux|4 years ago

I'm glad to see more people come around to systemd. Systemd is 10 years in the making and it was met with skepticism from the very first day. Some of that is now slowly changing, with systemd being accepted in more and more distributions. Service and runlevel management wasn't any better in the sysv era, nor were any of the multitude of custom start and boot scripts.

I remember when it took multiple days of testing the configuration on different distributions, editions and versions just to get a single daemon process to start without failure. Then do the whole thing over again because the Debian-based distros did not use the RedHat startup tools, and had different (or no) runlevel configurations, different service binders, a different NTP daemon, a different terminal daemon, etc. And of course the French customers wanted the service to work with Mandriva, and the German customers wanted SUSE support with extended security options like dropping permissions after boot.

Just like the article mentions you can define a portable service model with security and failure handling built in. There wasn't even anything that came close back in the day. Systemd may not have been built with the Unix philosophy in mind, but at some point that becomes a secondary concern.

Systemd unifies all system resources in units which work anywhere; it's expandable and extendable, user-friendly, and allows for remote monitoring, etc.

midasuni|4 years ago

For people who had well-working, low-maintenance environments, systemd came in and changed everything - breaking things, requiring changes to get things working again.

It's not just breaking init.d scripts, it's ntp, dns, syslog. Tasks throughout the OS that used to be short commands backed by muscle memory became ridiculous convoluted commands, like systemd-resolve --status instead of 30 years of typing cat /etc/resolv.conf

Even when you remember and type that in, you don’t get a simple list of nameserver and host, you get 100 lines of text you have to spend effort parsing to work out what’s going on.

When it’s less mental effort to run tcpdump port 53 to see where your DNS is going, there’s a problem.

For decades it was /etc/init.d/myservice restart

Now is it systemctl restart myservice or systemctl myservice restart? I have no idea as I’m not at a computer.

Or if the restart fails it doesn't tell you why; it gives you two locations to look for log files about why it might have broken. Init.d scripts didn't do that. Even if there was something really wrong that the log files don't reveal, running init.d with bash -x allowed easy debugging

Systemd came in and changed working processes and systems and, from an operator's point of view, gave very little benefit to people with working processes and systems.

GekkePrutser|4 years ago

I think it's also because those people that didn't want systemd have just moved on. I moved my servers to alpine and my desktop to FreeBSD. It's just not a thing in my thoughts anymore. I wouldn't write about it. So it seems the Linux community is more aligned now.

However alpine is working on a similar thing based on s6, but with modularity and light weight as design goals. This sounds great to me. I'm not against the idea of a service manager, but I think systemd is overreaching.

zozbot234|4 years ago

There's a lot of poorly-understood incidental complexity in the systemd codebase, and this can bite users even when doing basic service and runlevel management. The systemd approach is to try and make it 100% declarative based on simple .ini files, but the semantics of this seemingly "declarative" configuration was never properly specified. Even many systemd fans seem to be quite aware of this, and there seems to be a common understanding that some ground-up reimplementation of these ideas based on a clearer underlying "philosophy" will be needed at some point. Systemd has been a successful experiment in many ways, but relying on throwaway experimental code for one's basic production needs is not a good idea.

knorker|4 years ago

I don't think it's that people are warming to systemd. It's more that there are two kinds of people now:

1. People too young to remember stable software.

2. People who have given up, and just accepted that Linux too "just needs a reboot every now and then to kinda fix whatever got broken".

systemd has normalized the instability of shitty system software. And just like how you don't see front page news every day about 1.3M traffic deaths per year because it's not news, you don't see people up in arms about shitty Linux system software.

It's normal now. It didn't use to be.

Yes, ALSA is better than OSS, and then PulseAudio and now pipewire. It can do more. But when did it become acceptable to get shit, just because the shit could do more things?

Pipewire is not bug free (I have a bug that's preventing me from an important use case), but it's sure more reliable than PulseAudio, while still being more capable.

So maybe Pipewire is showing a trend towards coders actually giving a shit?

egberts1|4 years ago

I’ve since moved away from systemd for all my Linux boxes, work and home.

We still cannot block systemd from making a network socket connection, so the security model is shot right there by virtue of systemd running as a root process.

In the old days of systemd, no network sockets were made.

Systemd has become a veritable octopus.

Now, I use openrc and am evaluating S6.

dagw|4 years ago

> Service and runlevel management wasn't any better in the sysv era, nor were any of the multitude of custom start and boot scripts.

They might not have been better or more robust, but they were easier to understand and reason about. You could explain the entire thing to the most junior of sysadmins in a few minutes, tell them to read the boot scripts, and they would basically understand how everything worked.

throw0101a|4 years ago

> Service and runlevel management wasn't any better in the sysv era, nor were any of the multitude of custom start and boot scripts.

Things would have been fine for a lot of people if they had stopped at replacing SysV scripts and general startup.

At this point, with all the additional functionality continuously being added, I'm waiting for systemd to fulfil Zawinski's Law:

> Every program attempts to expand until it can read mail. Those programs which cannot so expand are replaced by ones which can.

* http://www.catb.org/jargon/html/Z/Zawinskis-Law.html

* https://en.wikipedia.org/wiki/Jamie_Zawinski

baybal2|4 years ago

> Systemd is 10 years in the making

Systemd is 10 years in the making, and still manages to brick production servers.

The problem is not with SystemD or its coding as such, but the ideology it came with, and bad developers who push it.

The last attempts to make it saner basically reverted it back to sysvinit. So, not much difference now.

tyingq|4 years ago

My frustration with systemd is that it creeps out over time into other functionality, and that it's difficult to find the right documentation.

Like when logind was changed to kill background processes when you log out, by making KillUserProcesses=yes the default. Some Linux distros left that as is, others overrode it in /etc/systemd/logind.conf. So, figuring out what was happening, and how to fix it, was confusing. I had no idea it would have been logind doing that.
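For reference, the override mentioned above is a two-line change; the sketch below uses the actual logind directive:

```ini
# /etc/systemd/logind.conf -- restore the old "leave my processes alone" behaviour
[Login]
KillUserProcesses=no
```

Alternatively one can keep the secure default and exempt a single user with `loginctl enable-linger <user>`, which lets that user's services and sessions survive logout.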

Similar for changes introduced with systemd-resolved.

josephcsible|4 years ago

In particular, "background processes" includes screen/tmux sessions that you started. This change completely broke the largest reason that people use screen or tmux.

iso1210|4 years ago

Ouch, what on earth does "log out" even mean? What a terrible way to run any computer other than maybe a single user laptop.

yawaramin|4 years ago

Wow, are people seriously still fussing about this? Systemd made the call to use a more secure default, they should be applauded for it. People who want the insecure way to be the default should take it up with their distro.

knorker|4 years ago

My problem with systemd is that it's just so poorly written.

The ideas are not inherently bad. But they're not thought through, and the implementation is pure garbage.

Like taking the most stable software in the world[1], and going "nah, I'll just replace it with proof-of-concept code, leaving a TODO for error handling. It'll be fine.".

And then the "awesomeness" of placing configuration files wherever the fuck you want. Like /lib. Yes, "lib" sounds like "this is where you place settings files in Unix". At least there's no other central place to put settings.

[1] Yes, slight hyperbole. But I've not had Linux init crash since the mid-90s, pre-systemd

stefanha|4 years ago

My understanding is that default unit files provided by the systemd packages are in /usr/lib where they can be read-only, whereas users can add/override them by dropping their own unit files into /etc (which is more likely to be read-write).

This provides a clean separation between the default configuration and the user configuration.

Can you explain why this is a bad thing?

A counter-example that comes to mind is when a package upgrade requires manual intervention due to file conflicts in /etc. That's what happens when the packager's default configuration interferes with the user's custom configuration.

throw-8462682|4 years ago

Care to elaborate? Off the bat, your comment comes across as a cynical rant due to its heavy use of strong words (garbage, fuck) and lack of examples. And even if you have anecdotes, to be convincing it would have to compare something like bug density against that of the software projects systemd collectively replaces. As written, your statement is unlikely to convince anyone who isn't already convinced.

stefanha|4 years ago

Recently I have been wondering if systemd solves problems that are becoming less and less relevant to developers. New services are often deployed as containers.

While systemd has a bunch of container-related functionality, it does not integrate well into the Kubernetes or even Docker workflow. It's used very little in those environments.

If you are building CoreOS or NixOS system images, or traditional Linux system services, then systemd matters. But I think way more services are being built for the container world where these problems are solved differently.

For example, the TLS configuration can be handled with common container patterns. And if the author's startup had used containers from the start, instead of first writing the service for systemd, the example would translate more easily to a full-blown Kubernetes environment once the VC funding hits their bank account.

It's a shame because systemd is very powerful and I've enjoyed using it.

danny_sf45|4 years ago

As a developer I prefer using systemd instead of containers to deploy Golang applications.

Without (Docker) containers it is:

- build Go binary and install it in production server

- write and enable the systemd unit file

With (Docker) containers it is:

- write Dockerfile

- install Docker in production server

- build Docker image and deploy container in production server

I get the appeal of containers when one production server is used for multiple applications (e.g., you have a Golang app and a redis cache), but in the example above I think containers are a bit of an overkill.

pjmlp|4 years ago

For those of us using Java, such problems were already kind of irrelevant in 2005.

Where you deploy your EAR/WAR file doesn't matter, so the application container can be running on Windows, any UNIX flavour or even bare metal, what matters is there is a JVM available in some way.

Also, in the big boys' UNIX club (AIX, HP-UX, Solaris, ...), systemd-like alternatives were adopted before there was such an outcry in the GNU/Linux world.

On cloud platforms if you are using a managed language, this now goes beyond what Java allowed.

You couple your application with a bunch of deployment configuration scripts, and it is done, regardless of how it gets executed in the end.

The cloud is my OS.

Nextgrid|4 years ago

Containers might be popular in startups' "pay five figures a month to $CLOUD_PROVIDER" scene when VCs rain infinite free money, but there are still plenty of occurrences where you have to deal with old-school physical machines where it's often easier to just run the software on the bare-metal rather than using Docker and yet another layer of abstraction.

zxzax|4 years ago

You can just use podman to run Docker containers. That workflow is honestly what I wanted years ago when I first used docker, where containerization is put in the core system, and you can progressively add containerization to your core services while also running a full container on top of the same runtime.

einpoklum|4 years ago

> New services are often deployed as containers.

That's another problem to be solved.

candiddevmike|4 years ago

Only complaint I have about systemd is their docs aren't versioned, so it's difficult to see if the functionality you're looking for is available in the version you're running. Most of the time I have to Ctrl F through the NEWS doc in their repo.

turminal|4 years ago

The title is an oxymoron.

Putting a huge complex piece of software between yourself and "complexity" doesn't make the system less complex.

smitty1e|4 years ago

Systemd seems to do a good job of moving the complexity of managing the privileged/unprivileged divide into a standardized service.

I sympathize with the "transition sucks" sentiments elsewhere on this post. Having a bunch of working scripts turned into instant technical debt cannot be pleasant.

But, as with python3, systemd seems to be the way things are headed.

GuidoW|4 years ago

Agreed.

One point is that processes other than root cannot start services on ports < 1024. That was a sensible precaution when computers were big and multiuser, like in a university setting.

However, with single-serving services (e.g. in vm/container/vps/cloud), there is no need for it.

BSD lets you configure it with a sysctl option. But Linux defends that option like it is still 1990.

On NixOS, I patch it like this:

   boot.kernelPatches = [ { name = "no-reserved-ports";  patch = path/to/no-reserved-ports.patch; } ];
The patch itself is just as small:

  --- a/include/net/sock.h
  +++ b/include/net/sock.h
  @@ -1331,7 +1331,7 @@
  #define SOCK_DESTROY_TIME (10*HZ)

  /* Sockets 0-1023 can't be bound to unless you are superuser */
  -#define PROT_SOCK      1024
  +#define PROT_SOCK      24

  #define SHUTDOWN_MASK  3
  #define RCV_SHUTDOWN   1
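For comparison, a systemd unit can grant just the low-port bind capability without a kernel patch (service name hypothetical); newer kernels (4.11+) also expose the threshold as the sysctl net.ipv4.ip_unprivileged_port_start:

```ini
[Service]
ExecStart=/usr/local/bin/myapp
User=myapp
# allow binding ports < 1024 without granting any other root privilege
AmbientCapabilities=CAP_NET_BIND_SERVICE
```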

whateveracct|4 years ago

Use NixOS and you'll love systemd.

You'll be defining your own systemd units with ease.

systemd to you will be journalctl and systemctl. So pretty good.

atoav|4 years ago

I use mostly Ubuntu and Debian and defining systemd units just means you have to spit the right text into a .service file placed at the right spot.

How does NixOS make that easier?
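For context, a sketch of what NixOS adds (service name hypothetical): instead of hand-writing a .service file and placing it yourself, you declare the unit in configuration.nix and NixOS generates, installs and wires it up for you:

```nix
# configuration.nix
systemd.services.myapp = {
  description = "My application";
  wantedBy = [ "multi-user.target" ];
  after = [ "network.target" ];
  serviceConfig = {
    ExecStart = "${pkgs.myapp}/bin/myapp";
    DynamicUser = true;
    Restart = "on-failure";
  };
};
```

The unit text, its location, and enabling it all come out of one declaration, and `nixos-rebuild` rolls it back atomically if it's wrong.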

iso1210|4 years ago

If systemd was just a replacement for starting scripts that would be one thing.

when you have to run "systemctl disable systemd-timesyncd systemd-resolved systemd-networkd" to start to get back to sanity, that's not init

folex|4 years ago

I just learned about S6 recently. It simplified daemon supervision for me a lot.

cpach|4 years ago

s6 is very cool. Just out of curiosity, have you found any specific use-cases for it yet?

tbrock|4 years ago

This should be part of the manual. I've tried multiple times to understand systemd deeper than a service restart here and looking at logs for a unit there, to no avail.

proactivesvcs|4 years ago

I've found systemd to be quite well documented. Here are a few of the resources I frequent:

Systemd docs: https://www.freedesktop.org/software/systemd/man/index.html

List of directives: https://www.freedesktop.org/software/systemd/man/systemd.dir...

Unit-specific configuration: https://www.freedesktop.org/software/systemd/man/systemd.uni...

Service-specific configuration: https://www.freedesktop.org/software/systemd/man/systemd.ser...

Timer-specific configuration: https://www.freedesktop.org/software/systemd/man/systemd.tim...

kkirsche|4 years ago

As someone who wants to learn systemd, even just to understand what I'm stuck with and why it was made, this is the best introduction I've found. Does anyone have other resources to help beginners learn how to use systemd efficiently, and its relationship to Docker?

djhworld|4 years ago

Really enjoyed reading this article.

The LoadCredential thing reminds me of configmaps in K8s. Is there a more general thing in systemd, e.g. LoadConfig?

mhitza|4 years ago

Disclaimer, no idea what LoadConfig does.

A more generic approach than LoadCredential, I think, is the EnvironmentFile= directive, if you want to pass along multiple env variables to your process without individual Environment= directives.
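A sketch of that approach (the path and variable names are made up for illustration):

```ini
# /etc/myapp/env -- plain KEY=value lines, e.g.:
#   DB_HOST=db.internal
#   DB_PORT=5432

# in the service unit:
[Service]
EnvironmentFile=/etc/myapp/env
ExecStart=/usr/local/bin/myapp
```

Every variable in the file lands in the service's environment, so the config can change without editing the unit.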

CGamesPlay|4 years ago

In the TLS example, how does systemd handle certificate rotation? Are the certificate files symlinks, hard links, or copies? If hard links or copies, presumably my service will need to be signaled or restarted to get the new certificate; does systemd do this?

kemotep|4 years ago

Whenever these discussions of systemd come up, I am reminded of this talk[0].

It will be interesting to see if one day a replacement for systemd comes along and people who once championed systemd will begin to use the arguments the people who do not prefer systemd use to defend their choices for not wanting to use the next init system manager.

[0]: https://youtu.be/o_AIw9bGogo

einpoklum|4 years ago

> see if one day a replacement for systemd comes along

Part of the critique of systemd is the basic architectural choice of having this monolithic layer between regular user apps and the kernel. So, in a sense, the idea is _not_ to replace systemd with a better-written systemd, but to do things differently.

sgt|4 years ago

If you are looking to run your containers in a very lightweight and easily understood way, you can use systemd for this and use tips from this article. You will have to do the orchestration yourself, so I think it's more suitable for very simple deployments with small teams and/or part-time projects.

leotaku|4 years ago

I've used systemd containers to run badly behaved GUI apps, e.g. Steam and MS Teams on my personal computers. Surprisingly good experience, the systemd manual pages are extremely comprehensive and, in my opinion, much easier to understand than people make them out to be.

PaulHoule|4 years ago

I don’t see it as a systemd-ism but that the UNIX way is maturing.

Passing an arbitrary fd or socket from one process to another solves many problems and we are in the habit of doing it now.

Dah00n|4 years ago

I read that as meaning "avoid systemd's complexity" as opposed to "avoid complexity by doing so and so using systemd". Oh well.

robertlagrant|4 years ago

TL;DR: good lord it's hard to avoid complexity when you're not using Docker to run your app.

teknopaul|4 years ago

Trying to avoid complexity by being dependent on something horribly complex is not going to work.

ketzu|4 years ago

> Trying to avoid complexity by being dependent on something horribly complex is not going to work.

It is probably unavoidable, looking at how complex modern compilers, processors and kernels are. They sure do make a lot of things simpler, though.

bitwize|4 years ago

You incur complexity with systemd. You avoid complexity with runit.

diegocg|4 years ago

runit isn't comparable with systemd. It isn't even trying to be a powerful init system.

You will incur lots of complexity trying to deal with init systems that aren't much better than traditional init.

cpach|4 years ago

IMO runit is abandonware at this point. No release since 2014.

Have you looked at s6? It’s a compelling alternative.

baybal2|4 years ago

I want to mention connman.

Does everything you expect of a single programlet to manage your NTP, resolv.conf, DNS caching, mDNS, network devices, etc.

Importantly, it weighs only 1/100 of SystemD

SahAssar|4 years ago

Seems like that is mostly for the network-related stuff and not services, mounts, isolation/namespacing/containers, init, logging and so on. Aren't you comparing a basket of apples to an apple-slice?

drran|4 years ago

How to kick out SystemD first, to try connman?

einpoklum|4 years ago

The title is quite triggering. As I see it, systemd itself is very complex. So, one might say "Guaranteeing complexity with systemd", though possibly being able to ignore the complexity, usually.

imiric|4 years ago

Apologies, I was also triggered by the title. :)

> systemd provides ways to restrict the parts of the filesystem the service can see.

So like chroot and namespaces? Why do I have to depend on systemd when these are native features provided by Linux?

So systemd provides a friendlier abstraction of these concepts. Great, but so do Docker and Podman and many other tools that can actually be installed without taking over the rest of the system.

Having your application actually use systemd libraries further increases this dependency and makes it usable on only a subset of Linux machines. That would be fine for some controlled production deployment, but it is awful for usability and adoption.

viraptor|4 years ago

> So like chroot and namespaces? Why do I have to depend on systemd when these are native features provided by Linux?

Not like namespaces - using namespaces. And for the same reason we use other high-level abstractions and high-level languages rather than handcrafted assembly. You don't have to depend on it either - you can still use chroot instead if you want, but it's more work that way.

> Great, but so do Docker and Podman and many other tools that can actually be installed without taking over the rest of the system.

Docker installs a service which takes over lifecycle management, restarts, and traffic proxying for apps. It injects and manages multiple firewall chains. It pretty much takes over network management. And it's still stuck on the old cgroups format, so it forces that on your system. It really doesn't win this comparison.

> Having your application actually use systemd libraries

You don't need them. Everything from the post is defined in simple environment variables. For example socket activation is maybe 3 extra lines when done from scratch.
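Those few lines might look like this in Python (a sketch; the function name and fallback port are made up). The protocol is just two environment variables plus inherited file descriptors starting at 3:

```python
import os
import socket

SD_LISTEN_FDS_START = 3  # systemd passes inherited sockets starting at fd 3

def get_listener(fallback_port: int = 8080) -> socket.socket:
    """Adopt the systemd-activated socket if present, else bind our own."""
    listen_pid = os.environ.get("LISTEN_PID")
    listen_fds = int(os.environ.get("LISTEN_FDS", "0"))
    if listen_pid == str(os.getpid()) and listen_fds >= 1:
        # socket-activated: the fd is already bound and listening
        return socket.socket(fileno=SD_LISTEN_FDS_START)
    # started by hand (e.g. during development): fall back to binding ourselves
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("127.0.0.1", fallback_port))
    s.listen()
    return s
```

Run outside systemd the fallback branch kicks in, so the same binary is debuggable from the command line, exactly as the grandparent comment describes.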

TruthWillHurt|4 years ago

I'd suggest you steer away from using systemd and a server to launch your startup.

While this is a good writeup, and you end up with a service, you still need to manage a machine with all risks involved - server reboots, updates, networking etc.

AWS Fargate, or the new App Runner will manage a container almost hassle-free