mickeyp | 23 days ago

The winning strategy for all CI environments is a build system that works the same on your machine, your CI's machine, and your test/UAT/production environments, with as few changes between them as your project's requirements demand.

I start with a Makefile. The Makefile drives everything. Docker (compose), CI build steps, linting, and more. Sometimes a project outgrows it; other times it does not.

But it starts with one unitary tool for triggering work.
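A minimal sketch of what "the Makefile drives everything" can look like (target names, service names, and commands here are illustrative, not from the original comment):

```make
# Illustrative driver Makefile: one entry point for dev and CI alike.
.PHONY: build lint test up down ci

build:          ## Build the application image
	docker compose build

lint:           ## Run linters inside the container
	docker compose run --rm app ruff check .

test:           ## Run the test suite inside the container
	docker compose run --rm app pytest

up:             ## Start the local stack
	docker compose up -d

down:           ## Tear everything down
	docker compose down -v

ci: lint test   ## What the CI pipeline calls -- same targets, same commands
```

The CI configuration then shrinks to little more than `make ci`, and a developer can run the exact same thing locally.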


carlsmedstad | 23 days ago

This line of thinking inspired me to write mkincl [0] which makes Makefiles composable and reusable across projects. We're a couple of years into adoption at work and it's proven to be both intuitive and flexible.

[0]: https://github.com/mkincl/mkincl

zahlman | 23 days ago

I think the README would be better with a clearer, up-front explanation of what this adds on top of using `make` directly.

chedabob | 23 days ago

Yes, kick off into some higher-level language instead of being at the mercy of your CI provider's plugins.

I use Fastlane extensively on mobile, as it reduces boilerplate and gives enough structure that the inherent risk of depending on a third party is worth it. If all else fails, it's just Ruby, so you can break out of it.

krautsauer | 23 days ago

Make is incredibly cursed. My favorite example is its built-in rules (oversimplified: extra Makefile code that behaves as if it were included in every Makefile), which include a rule that will extract files from a version control system. https://www.gnu.org/software/make/manual/html_node/Catalogue...
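For anyone bitten by this: GNU make can be told to drop its built-in rules and variables entirely, which is a common first line in hand-written Makefiles:

```make
# Opt out of GNU make's built-in rules and variables,
# including the RCS/SCCS checkout rules mentioned above.
MAKEFLAGS += --no-builtin-rules --no-builtin-variables

# Clear the suffix list so old-style suffix rules never fire.
.SUFFIXES:
```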

What you're saying is essentially "Just Write Bash Scripts", but with an extra layer of insanity on top. I hate it when I encounter a project like this.

mickeyp | 23 days ago

No, I'm saying use Makefiles, which work just fine. Mark your targets with .PHONY and move on.
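For context, `.PHONY` matters because make otherwise treats target names as files: if a file or directory with the same name exists, the target looks up to date and nothing runs. A small illustration:

```make
# Without the .PHONY line, creating a file called "test" in the repo
# root would make `make test` say "'test' is up to date" and run nothing.
.PHONY: test clean

test:
	pytest

clean:
	rm -rf build/
```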

wtcactus | 23 days ago

I agree, but this is kind of an unachievable dream in medium to big projects.

I had this fight for some years at my current job, and I kept nagging early on about the path we were heading down by not letting developers run the full pipeline (or most of it) on their local machines… the project decided otherwise, and now we spend a lot of time and resources on a behemoth of a CI infrastructure, because each MR takes about 10 trial-and-error builds in the pipeline to be properly tested.

mickeyp | 23 days ago

It's not an unachievable dream. It's a trade-off made by people who may or may not have made the right call. Some things just don't run on a local machine: fair. But a lot of things do, even very large things. Things can be scaled down; the same harnesses can be used for your development environment, your CI environment, and your prod environment. You don't need a full prod db, you need a facsimile mirroring the real thing at 1/50th the size.
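As a sketch of that scaled-down facsimile idea, the same Makefile target can point both dev and CI at a small, prod-shaped database (the image tag, seed file, and credentials here are hypothetical):

```make
.PHONY: db db-seed

db:            ## A small, prod-shaped Postgres for dev and CI alike
	docker run -d --name dev-db -p 5432:5432 \
	  -e POSTGRES_PASSWORD=dev postgres:16

db-seed: db    ## Load a small, anonymized sample of prod-shaped data
	psql postgresql://postgres:dev@localhost:5432/postgres -f seed/sample.sql
```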

Yes, there will always be special exceptions: they suck, and we suffer as developers because we cannot replicate a prod-like environment in our local dev environment.

But I laugh when I join teams and they say that "our CI servers" can run it but our shitty laptops cannot, and I wonder why they can't just... spend more money on dev machines? Or perhaps spend some engineering effort so they work on both?

zahlman | 23 days ago

Sometimes the problem is that the project is bigger than it needs to be.

nottorp | 23 days ago

Funnily enough, the LLMs are allowed to run builds on your local machine. The humans, not anymore.