top | item 46615411

1a527dd5 | 1 month ago

1. Don't use bash, use a scripting language that is more CI friendly. I strongly prefer pwsh.

2. Don't have logic in your workflows. Workflows should be dumb and simple (KISS) and they should call your scripts.

3. Having standalone scripts will allow you to develop/modify and test locally without having to get caught in a loop of hell.

4. Design your entire CI pipeline for easier debugging: put that print statement in, echo out the version of whatever. You don't need it _now_, but your future self will thank you when you do need it.

5. Consider using third-party runners that have better debugging capabilities.

Storment33|1 month ago

I would disagree with 1. If you need anything more than shell, that starts to become a smell to me. The build/testing process etc. should be simple enough not to need anything more.

embedding-shape|1 month ago

That's literally point #2, but I had the same reaction as you when I first read point #1 :)

dijit|1 month ago

I mean, at some point you are bash calling some other language anyway.

I'm a huge fan of "train as you fight", whatever build tools you have locally should be what's used in CI.

If your CI can do things that you can't do locally: that is a problem.

zelphirkalt|1 month ago

I don't agree with (1), but agree with (2). I recommend just putting a Makefile in the repo and have that have CI targets, which you can then easily call from CI via a simple `make ci-test` or similar. And don't make the Makefiles overcomplicated.

Of course, if you use something else as a task runner, that works as well.
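A sketch of what that might look like; the target names and the scripts they delegate to are hypothetical:

```make
# Hypothetical CI targets; the workflow just runs `make ci-test` etc.
.PHONY: ci-build ci-test

ci-build:
	./scripts/build.sh

ci-test:
	./scripts/test.sh
```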

Wilder7977|1 month ago

For certain things, makefiles are great options. For others though they are a nightmare. From a security perspective, especially if you are trying to reach SLSA level 2+, you want all the build execution to be isolated and executed in a trusted, attestable and disposable environment, following predefined steps. Having makefiles (or scripts) with logical steps within them, makes it much, much harder to have properly attested outputs.

Using makefiles mixes execution contexts between the CI pipeline and the code within the repository (which ends up containing the logic for the build), instead of using - centrally stored - external workflows that contain all the business logic for the build steps (e.g., compiler options, docker build steps etc.).

For example, how can you attest in the CI that your code is tested if the workflow only contains "make test"? You need to double check at runtime what the makefile did, but the makefile might have been modified by that time, so you need to build a chain of trust etc. Instead, in a standardized workflow, you just need to establish the ground truth (e.g., tools are installed and are at this path), and the execution cannot be modified by in-repo resources.

reactordev|1 month ago

Makefile or scripts/do_thing either way this is correct. CI workflows should only do 1 thing each step. That one thing should be a command. What that command does is up to you in the Makefile or scripts. This keeps workflows/actions readable and mostly reusable.

pydry|1 month ago

>I don't agree with (1)

Neither do most people, probably, but it's kinda neat how their suggested fix for GitHub Actions' vendor lock-in ploy is to swap bash for a language invented by that very same vendor.

kstrauser|1 month ago

I was once hired to manage a build farm. All of the build jobs were huge pipelines of Jenkins plugins that did various things in various orders. It was a freaking nightmare. Never again. Since then, every CI setup I’ve touched is a wrapper around “make build” or similar, with all the smarts living in Git next to the code it was building. I’ll die on this hill.

TeeMassive|1 month ago

Pathological organizations for some reason all prefer to never version the CI in the same repository it is testing.

jayd16|1 month ago

#2 is not a slam dunk because the CI system loses insight into your build process if you just use one big script.

Does anyone have a way to mark script sections as separate build steps with defined artifacts? Would be nice to just have scripts with something like:

    BeginStep("Step Name") 
    ... 
    EndStep("Step Name", artifacts)

They could no-op on local runs but be reflected in GitHub/GitLab as separate steps/stages and allow resumes, retries and such. As it stands, there's no way to really have CI/CD run the exact same scripts locally and get all the insights and functionality.

I haven't seen anything like that but it would be nice to know.
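A partial version of this does exist as log grouping: GitHub Actions recognizes `::group::`/`::endgroup::` commands in script output, which fold the lines between them into a collapsible section in the log (GitLab CI has a similar collapsible-section syntax). It's log folding only, with no per-step artifacts or retries. A sketch that emits the markers on CI and no-ops locally:

```shell
#!/usr/bin/env bash
# begin_step/end_step: fold a script section in the GitHub Actions log,
# no-op when run locally. Log folding only -- no artifacts or retries.
set -euo pipefail

begin_step() {
  [ -n "${GITHUB_ACTIONS:-}" ] && echo "::group::$1" || true
}
end_step() {
  [ -n "${GITHUB_ACTIONS:-}" ] && echo "::endgroup::" || true
}

begin_step "Build"
echo "compiling..."   # placeholder for the real build
end_step
```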

arwhatever|1 month ago

Do you (or does anyone) see possible value in a CI tool that just launches your script directly?

It seems like if you

> 2. Don't have logic in your workflows. Workflows should be dumb and simple (KISS) and they should call your scripts.

then you’re basically working against or despite the CI tool, and at that point maybe someone should build a better or more suitable CI tool.

zelphirkalt|1 month ago

Can we have a CI tool that simply takes a Makefile as input? Perhaps it takes all targets that start with "ci" or something.

never_inline|1 month ago

Build a CLI in Python or whatever that does the same things as your CI; every CI stage should just call its subcommands.

Storment33|1 month ago

Just use a task runner (Make, Just, Taskfile); this is what they were designed for.

ufo|1 month ago

How do you handle persistent state in your actions?

For my actions, the part that takes the longest to run is installing all the dependencies from scratch. I'd like to speed that up but I could never figure it out. All the options I could find for caching deps sounded so complicated.

embedding-shape|1 month ago

> How do you handle persistent state in your actions?

You shouldn't. Besides caching that is.

> All the options I could find for caching deps sounded so complicated.

In reality, it's fairly simple, as long as you leverage content-hashing. First, take your lock file, compute the sha256sum. Then check if the cache has an artifact with that hash as the ID. If it's found, download and extract, those are your dependencies. If not, you run the installation of the dependencies, then archive the results, with the ID set to the hash.

There really isn't more to it. I'm sure there are helpers/sub-actions/whatever Microsoft calls them, for doing all of this in 1-3 lines or something.

philipp-gayret|1 month ago

Depends on the build toolchain but usually you'd hash the dependency file and that hash is your cache key for a folder in which you keep your dependencies. You can also make a Docker image containing all your dependencies but usually downloading and spinning that up will take as long as installing the dependencies.

For caching you use GitHub's own cache action.
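With `actions/cache`, that hash-keyed lookup is roughly the following; the path and lock file assume an npm project purely for illustration:

```yaml
- uses: actions/cache@v4
  with:
    path: ~/.npm
    key: ${{ runner.os }}-npm-${{ hashFiles('**/package-lock.json') }}
    restore-keys: |
      ${{ runner.os }}-npm-
```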

1a527dd5|1 month ago

You don't.

For things like installing deps, you can use GitHub Actions' own caching, or one of several third-party runners whose caching capabilities are more mature than what GHA offers.

latentsea|1 month ago

We use Nuke for this purpose and I really like it. We're a .NET shop, and Nuke is C#, so works pretty well. Almost all our Nuke targets are shared between local and CI except some docker stuff that needs caching in CI.

tracker1|1 month ago

Minor variance on #1: I've come to use Deno TypeScript scripts for anything more complex than what can easily be done in bash or PowerShell. While I recognize that pwsh can do a LOT in the box, I absolutely hate the ergonomics, and a lot of the interactions are awkward for people not used to it, while IMO more developers will be closely aligned with TypeScript/JavaScript.

Not to mention, Deno can run TS directly and can reference repository/HTTP modules directly without a separate install step, which is useful for shell scripting beyond what pwsh can do, e.g. pulling a DBMS client and interacting with it directly for testing, setup or configuration.

For the above reasons, I'll also use Deno for e2e testing over other languages that may be used for the actual project/library/app.

newsoftheday|1 month ago

> Don't use bash

What? Bash is the best scripting language available for CI flows.

linuxftw|1 month ago

1. Just no. Unless you are some sort of Windows shop.

jayd16|1 month ago

Pwsh scripts are portable across Mac, Linux and Windows with arguably less headache than bash. It's actually really nice. You should try it.

If you don't like it, you can get bash to work on windows anyway.

rerdavies|1 month ago

If you're building for Windows, then bash is "just no", so it's either cmd/.bat or pwsh/.ps1. <shrugs>

latentsea|1 month ago

I've never been a fan of PowerShell. I work across Windows and Linux on .NET and Nuke has become my go to tool for this.

NSPG911|1 month ago

you should try it, powershell isn't just 'type insanely long phrases', there are aliases for it