top | item 39985249

My deployment platform is a shell script

130 points| j3s | 1 year ago |j3s.sh | reply

138 comments

[+] anonzzzies|1 year ago|reply
I use similar things for bigger (multi-server) deploys too. It's light and it just works and works for decades without changes/updates. People say it's brittle; I have a proof of n>0 that this is not the case compared to many other solutions, this post making that point too. Sh/bash/perl(8) have been around forever, they don't break after update etc.

I sadly don't recommend it for my day job, simply because of liability. When something messes up with ansible, terraform, docker, cloudformation etc, no-one gets any blame because 'complex systems', 'it happens' etc etc; with a simple script going wrong, they would hang me high even though it probably saves a crap load in maintenance, compute etc over the 10s of years. Same reason we use clusters and IaC while of course nothing we do needs it; if an aws cluster goes down, no-one but aws gets blamed, while if the $2 postgres vps@cheapafhosting (with a higher uptime than that aws cluster, by the way; human error downed it a few times, briefly, but still) is down even for a ping, everyone is upset and pointing fingers.

[+] throwaway458864|1 year ago|reply
Shell scripts are a more evolved form of programming and nobody can change my mind on that. They require less work, they're easier to write, they're flexible, compatible, composable, portable, small, interpreted, and simple. You can do more with a few characters and do complex things without the complexity of types, data structures, locks, scoping, etc. You don't write complex programs in it, but you use complex programs with it, in ways that would be overcomplicated, buggy and time-consuming in a traditional language.

That said, it's a tool. Like any tool, it depends how you use it. People who aren't trained on the tool, or don't read the instruction manual, might get injured. I'd like to see a version of it that is safer and retains its utility without getting more complicated, but it would end up less useful in many cases. Maybe that's fine; maybe it needs to be split into multiple tools.

[+] alanbernstein|1 year ago|reply
I agree with most of this, my biggest issue is how hard it is for me to recall any moderately complex shell syntax (or the slightly different Makefile syntax). LLMs largely solve that for me.
[+] twic|1 year ago|reply
I have written a lot of bash. When you know what you're doing it's very productive. But it still feels like walking a tightrope, where some corner case in quoting, interpolation, comparison, etc will one day rm -rf / you.

What i really want is "python, but with really easy running of subcommands". Imagine extending python with a $ operator (prefix, applied to iterables) so that

  files_iter = $('ls')
would run ls and put an iterator over its lines of output in that variable, throwing an exception if ls exits with an error status (i realise there is a rabbithole of subtleties here - getting those right would be part of this). Or

  contains_pattern = $?('grep', '-q', pattern, file)
to get just the exit status as a boolean. I think i'd drop bash in a heartbeat.
[+] chasil|1 year ago|reply
The use of ls in this way is not good form:

  cd /root
  for project in $(ls go-cicd);
I think a better expression would be:

  for project in ./*
  do [ -d "$project" ] || continue
     ...
[+] chasil|1 year ago|reply
Some other nit-picks about this script.

- Everything here is done as root. For the day that you want to build with lesser privilege, do this:

  BLDUSER=~root
- That will be difficult for you, because you are moving the projects to /usr/local/bin; for the day that you stop running as root, make a subdirectory "/usr/local/bin/$BLDUSER" (owned by the namesake account) and move the projects there instead.

- Very minor nitpick, use <<- and prefix tabs on the here document to make it slightly easier to read.

- Slight improvement, so this can print more than one argument:

  println() { y=
    for x
    do printf %s%s "$y" "$x"
       y=' '
    done >> /root/gocicd.log
    echo >> /root/gocicd.log # for the newline
  }
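A quick usage sketch of that rewrite. The log path is pulled into a variable here purely for illustration; the original hard-codes /root/gocicd.log:

```shell
# Same println as above, with the log path in a variable so it can run
# outside /root (an illustration-only change).
LOG=${LOG:-/tmp/gocicd.log}

println() { y=
  for x
  do printf '%s%s' "$y" "$x"
     y=' '
  done >> "$LOG"
  echo >> "$LOG"   # for the newline
}

println deploy started on "$(hostname)"
```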
[+] pevey|1 year ago|reply
You can also use webhooks to deploy with each GitHub push. The advantage over GitHub actions is you don’t have to store any secrets on GitHub or with integrators like Vercel. Just send a payload to your own endpoint each time a commit is made, and that can trigger your shell script to rebuild and deploy. Using symbolic links helps make it more robust to errors. Trigger a pull of the repo, and build. Only if the build is successful, move the symbolic link of your production app to the new build. This also allows keeping some history of builds in case you ever need to troubleshoot.
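A minimal sketch of that symlink cutover; the directory layout (a releases/ dir plus a current link) and the build command are assumptions, not details from the comment:

```shell
# deploy_swap: build into a fresh release dir; only if the build succeeds,
# atomically repoint "current" at it. Layout and build command are assumed.
deploy_swap() {
  app_dir=$1
  build_cmd=$2
  release="$app_dir/releases/$(date +%s%N)"   # timestamped build dir
  mkdir -p "$release"
  # A failed build leaves the old "current" link (and the running app) alone.
  ( cd "$release" && eval "$build_cmd" ) || return 1
  ln -sfn "$release" "$app_dir/current"       # -n: replace the link, not its target
}
```

The webhook handler would call something like `deploy_swap /srv/myapp 'git -C /srv/myapp/repo pull && go build -o app /srv/myapp/repo'` (hypothetical paths), then restart or signal the service; old release directories stick around as the build history the comment mentions.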
[+] Ingon|1 year ago|reply
I also started with a simple shell script. Upload sources, build on the target system (golang) and restart the systemd service(es).

Then, I needed to make another machine like this, so enter Ansible. This worked well for a long time, and I was relatively content with it. Along the way, I learned about nix (and enough of it) to adopt a simple flake to pull in my tools (like golang, ansible, terraform). For a long time, I used it like this (e.g. still ansible, but I started building locally).

Finally, I learned enough nix to adopt NixOS. Now, I've converted my project to a nix package and a NixOS module, which allows me to totally describe the state of the machine I want. With this, remote builds and colmena (mostly for pushing secrets), I deploy a complete system, including my own software.

[+] 1vuio0pswjnm7|1 year ago|reply
"i like things that work for years with as little interaction from me as possible."

Shell scripts written in NetBSD sh/Debian ash will work for as long as I live.

[+] z_zetetic_z|1 year ago|reply
Or, you could use NixOS and just declare your systems in some text files, git commit; git push.

Your build script becomes:

   while true; do
     git pull
     nixos-rebuild switch
     sleep x
   done
That's it. You can even do it remotely and push the new desired state to remote machines (and still build on the target machine, no cross compile required).

I've completely removed Ansible as a result: no more python version mismatches, no more hunting endless task yaml syntax, no more "my god ansible is slow" deployments.

[+] chasil|1 year ago|reply
Instead of saying:

  while true
You can instead say:

  while :
There is actually a /bin/true, and running it could involve the fork of a new process for each iteration of the loop (most modern shells also build true in). The colon is a POSIX special builtin, so the form that I have shown you is guaranteed not to fork.
[+] snippy|1 year ago|reply
Sounds interesting. Let's say the software is a web backend. Can you deploy it like this with zero downtime? So that the new version starts, new traffic goes to it, and the old version handles its active requests to completion and then shuts off.
[+] Macha|1 year ago|reply
My current deployment method for most of my personal hosts is:

    nixos-rebuild switch --target-host x.example.com 
(I still have a few Arch hosts using Ansible, but will migrate them in future)
[+] bravetraveler|1 year ago|reply
Not to deride this (too much), but the 'robustness' of deployments with shell scripts is tempting bait. Things are until they aren't, 'nobody rides for free' - decide what you're willing to pay.

Example: this interprets the output of 'ls'. Reliability is dependent on good quoting/never introducing a project with spaces

Ansible is a nice middle ground, personally. I write the state that differs, use a library of scripting.

[+] rkta|1 year ago|reply
Parsing ls is an anti-pattern, but the author says it has worked for years - we all make mistakes, and as long as it works you don't notice.

And it's an easy fix:

    - for project in $(ls go-cicd); do
    + cd go-cicd || exit 1; for project in *; do
[+] cess11|1 year ago|reply
Putting

  SAVEIFS=$IFS
  IFS=$(echo -en "\n\b")
or something similar at the top of the script might not come across as comparable to adopting Ansible to some people.
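For anyone wondering what that buys you, here is a self-contained illustration of the word-splitting difference (the helper function is made up for the demo):

```shell
# count_words relies on the shell word-splitting the unquoted $1 by IFS.
count_words() { set -- $1; echo $#; }

unset IFS                           # default splitting: space, tab, newline
with_default=$(count_words "my project")   # the space splits it -> 2

IFS=$(printf '\nx'); IFS=${IFS%x}   # newline-only IFS, as in the parent comment
with_newline=$(count_words "my project")   # no split -> 1
```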
[+] Gys|1 year ago|reply
I assume this script runs on the server. I was building Go projects on the server as well, on a vps where I have several things running. At some point I noticed that larger builds severely affected the other websites. So now I build locally and push the binary to git. To avoid bloating the project repo with big binary blobs, I use a special deploy repo.
[+] prmoustache|1 year ago|reply
I think the most important part is the last lines:

"consider keeping your little things little.

it worked for little old me."

The rest are details and every one of us would implement the details in a different way.

For example, a similar script could be made portable to non-Go projects by looking for a simple build-deploy.sh script that takes care of each project's deployment mode/instructions.
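That per-project convention could look something like this (the build-deploy.sh name and directory layout are the hypothetical ones from the paragraph above):

```shell
# deploy_all: walk a projects dir and hand control to each project's own
# build-deploy.sh, whatever language the project is written in.
deploy_all() {
  for project in "$1"/*/; do
    [ -x "${project}build-deploy.sh" ] || continue    # no script, no deploy
    ( cd "$project" && ./build-deploy.sh ) ||
      echo "deploy failed: $project" >&2              # keep going past failures
  done
}
```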

[+] pfitzsimmons|1 year ago|reply
As a pythonista, I am a huge fan of the plumbum library as a replacement for bash. It makes it very straightforward to run a sequence of *nix commands, but you get all the simplicity and power of the python language in terms of loops and functions and so forth. These days, I do all my server management and deployment scripts with python/plumbum.

And while simple is great, the feature missing from OP's script that I consider necessary is spinning up the new instance in parallel, verifying it is running correctly, and then switching nginx or the load balancer to point to the new server. You are less prone to break production and you get zero-downtime deploys.
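A shell sketch of that flow (plumbum reads much the same); the port, the /healthz endpoint, the ./app binary, and the nginx include path are all invented for illustration:

```shell
# wait_healthy: retry a health-check command up to $2 times, one second apart.
wait_healthy() {
  n=$2
  while [ "$n" -gt 0 ]; do
    sh -c "$1" >/dev/null 2>&1 && return 0
    n=$((n - 1))
    sleep 1
  done
  return 1
}

# deploy_blue_green: start the new instance, verify it, then flip the proxy.
# ./app, /healthz, and the nginx include path are assumptions.
deploy_blue_green() {
  new_port=$1
  ./app --port "$new_port" & new_pid=$!
  wait_healthy "curl -fsS http://127.0.0.1:$new_port/healthz" 30 ||
    { kill "$new_pid"; return 1; }                  # never switch to a sick build
  echo "server 127.0.0.1:$new_port;" > /etc/nginx/conf.d/app_upstream.conf
  nginx -s reload                                   # old instance drains and exits
}
```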

[+] cl3misch|1 year ago|reply
How do you deal with plumbum not being a builtin module? Do you install it system-wide? This currently holds me back from using sh (the Python lib) for maintaining my servers, especially if I need it with root.
[+] codegeek|1 year ago|reply
Love stuff like this. For my personal blog, I have a simple Makefile that builds the Go Binary, generates a static HTML output and then deploys it to a DigitalOcean VPS using ssh, reloads Caddy and Supervisor and boom.
[+] gregsadetsky|1 year ago|reply
There's a lot of good in that script - it's just that it doesn't seem to cover functionalities that I'm used to after years of deploying side and "real" (business, etc.) projects to Heroku and Render.

How do you manage domain names, who deals with the ssl certificates, how do you set environment variables (i.e. "secrets"), how can you run postgres, how do you run remote commands (e.g. dbmigrate.py), etc.?

A friend and I have been working for a few months on a project to simplify this - we're not the first to do an open source IaC, but we're scratching our own itch on a lot of features that we've been missing. It's basically "deploy with git push to your own VPS and manage everything with a CLI".

I'd love to ask - what do people feel is mostly lacking from OP's script? Which features seem like the most important when deploying/managing a remote server? How do you choose if you're going to use Ansible or K8S or a script, or a full blown IaC i.e. Heroku? Is it price/ownership (i.e. having full control over the machine)/ease of use/speed of deployment/something else? Thanks!

[+] gigatexal|1 year ago|reply
Off topic: I love the writing style of the author and this blog. Gonna follow it.
[+] kragen|1 year ago|reply
here's the deployment script i use most often

http://canonical.org/~kragen/sw/dev3.git/hooks/post-update

    #!/bin/sh
    set -e

    echo -n 'updating... '
    git update-server-info
    echo 'done. going to dev3'
    cd /home/kragen/public_html/sw/dev3
    echo -n 'pulling... '
    env -u GIT_DIR git pull
    echo -n 'updating... '
    env -u GIT_DIR git update-server-info
    echo 'done.'
dev3.git is the origin for dev3, so the `git pull` in there pulls from the bare repo that just got pushed to

it doesn't have the 60-second lag and it doesn't load the server all the time. it also doesn't run `go build` or restart a server with openrc, but those would be easy things to add if i wanted them

[+] morkalork|1 year ago|reply
I still get a chuckle remembering a co-worker who called this the "pull and pray" method.
[+] mnahkies|1 year ago|reply
I have a very similar system[1] for my personal projects, only I use GitHub actions to push a docker image to ECR and a commit to a config repo bumping the tag. I then have a cronjob to pull the config repo and reconcile using docker compose.

I wouldn't use it for serious stuff, but it's been working great for my random personal projects (biggest gap is if something crashes it'll stay crashed until manual intervention currently)

- [1] https://github.com/mnahkies/shoe-string-server/pull/2

[+] tacone|1 year ago|reply
I use this deploy script for my hobby project: https://gist.github.com/tacone/230d5c305a9c5eff7f58ea2744f20...

It will connect over ssh, pull the code, build the containers and restart them (scripts/live is just a wrapper around docker-compose).

If the build fails, the services will keep running.

The only problem I have is that hitting CTRL+C in the very moment the containers are being restarted will leave me with the services down.
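One way to shrink that CTRL-C window is to mask SIGINT around the critical section; a sketch, where the two echos stand in for the real docker-compose down/up calls:

```shell
# restart_services: ignore ^C while the services are between "down" and "up",
# so an interrupt can't leave them stopped. Stand-in commands only.
restart_services() {
  trap '' INT                    # ignore SIGINT for the critical section
  echo "stopping containers"     # stand-in for: scripts/live down
  echo "starting containers"     # stand-in for: scripts/live up -d
  trap - INT                     # restore default ^C handling
}
```

Another option is to run the restart under `setsid` so the terminal's SIGINT never reaches it at all.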

[+] zoidb|1 year ago|reply
Not exactly the same configuration as the op, but if you are developing software using Go, the combination of Caddy, a single go binary, and systemd or some other supervisor is extremely flexible and i think is the way for running multiple services on a single VM.

A shell script that deploys a couple config files and off you go. Use different accounts for each service for isolation and put all of your static files in your binary using embed.FS. No need for fancy configuration management or K8s.
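For reference, the systemd side of that setup can be as small as this unit file (service name, account, and binary path are invented for illustration):

```ini
[Unit]
Description=myservice (a single static Go binary)
After=network.target

[Service]
; a separate account per service, as the comment suggests
User=myservice
ExecStart=/usr/local/bin/myservice
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Drop it in /etc/systemd/system/, `systemctl enable --now` it once, and after that the deploy script only has to copy the new binary and `systemctl restart` the unit.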

[+] anonyfox|1 year ago|reply
I have a similar deploy.sh script for my go projects, with a slight twist:

I compile my Go projects to a binary on a github action, scp it to the server, ssh into it and restart - all done in my deploy.sh, and the GHA itself only installs Go and deps (it's cached) and then calls that deploy.sh script, which sits right in the repo itself.

Super happy with it. Speaking as a previous DevOps guy that got sick of AWS complexities.