sourishkrout|1 year ago
I believe https://dagger.io checks all these manifesto boxes and more. At least that’s where I’m focusing my attention.
GrumpyCat42|1 year ago
My company is currently adopting this and I don't see the appeal yet - likely from a lack of knowing much about it.
I added it to a side project just to get familiar, and it added quite a few SDK files and folders to my project, plus lots of decorators. It also required Docker and yadda yadda yadda.
I just could not justify using it compared to just running a regular TypeScript file with Bun (or, in a different project, `go run cmd/ci/main.go`).
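For reference, the Go version I'm talking about is nothing fancier than a sketch like this (the step commands are placeholders for whatever the project actually needs):

```go
// cmd/ci/main.go - the whole "pipeline", run locally or in CI with
// `go run cmd/ci/main.go`. The step commands below are placeholders.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run shells out to one step, streams its output, and fails fast.
func run(name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintf(os.Stderr, "step %s failed: %v\n", name, err)
		os.Exit(1)
	}
}

func main() {
	run("go", "vet", "./...")
	run("go", "test", "./...")
	run("go", "build", "-o", "bin/app", "./cmd/app")
}
```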
jas39|1 year ago
Not sure, but my guess is that they aren't a good fit for many languages. If you need a task runner, languages often have a built-in option, or there are better alternatives than Make. If you need a build system, Make isn't a good fit for a lot of modern languages.
ericyd|1 year ago
My struggle with Make and bash is that they're not very expressive - maybe that's something we want in our CIs, but I've always preferred writing an actual program in that program's native language for CI/CD, even if it has to shell out to some commands every now and again.
GrumpyCat42|1 year ago
I touched Make once, in 1999, in school. The syntax was arcane, even by 1999 standards.
maccard|1 year ago
All the tools do their own dependency tracking already (unfortunately).
gwbas1c|1 year ago
> Why have they gone out of style?
Because no modern toolchain uses make. Its syntax is so arcane that it's been replaced with various tools that are designed for the specific stack. Otherwise, more generic build systems use modern languages / markup.
paweladamczuk|1 year ago
I would gladly hear this argument expanded. It's really not obvious to me that that's the case.
wesselbindt|1 year ago
Suppose I give you two functions f and g. Can you run f(g()) without breaking things? The honest answer is that you don't know until you read the functions, which is a slow and difficult thing to do.
Suppose I give you functions f and g of respective types int -> str and Nothing -> str. Can you compose them? No, and you see this immediately from the types. Types make reasoning about composability a lot easier.
Of course, it's not a panacea, and it's less helpful the more side effects a function has. Can we compose pure int->int functions? Of course! Can we compose two of them where the second expects some image to exist in some docker registry? You'll need to read the first to be able to tell.
Given the highly side effectful nature of pipelines, I'd think the applicability of types would be limited. But maybe that's just a lack of imagination on my part.
Certainly, information like "this pipeline expects these variables" and "this pipeline sets these variables" is amenable to a typed approach, and that would make things easier. By how much, I don't know.
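If it helps, here is the same toy example spelled out in Go (f, g, and h are made up, nothing domain-specific); the bad composition is rejected at compile time, before anything runs:

```go
package main

import (
	"fmt"
	"strconv"
)

// f has type int -> string.
func f(n int) string { return strconv.Itoa(n) }

// g has type () -> string (the "Nothing -> str" above).
func g() string { return "not a number" }

// h has type int -> int; pure int -> int functions compose freely.
func h(n int) int { return n + 1 }

func main() {
	fmt.Println(f(h(41))) // composes fine, prints "42"

	// f(g()) is rejected before the program ever runs:
	//   cannot use g() (value of type string) as int value in argument to f
	// _ = f(g())
}
```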
mdaniel|1 year ago
And, tellingly, it seems they still haven't provided a "why not ${other tool}" anywhere that I can readily spot
esafak|1 year ago
clvx|1 year ago
I have a hot take on this. I don't care how you build and deploy as long as it's reproducible and the whole process can be tracked in its metadata. I'd rather have a process validating CI/CD stage and artifact metadata in a central DB than unify pipelines that won't get standardized due to communication complexity. This way I can have a conversation about visibility rather than code edge cases.
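A minimal sketch of what I mean, with a made-up StageRecord schema (none of these field names come from any real tool):

```go
package main

import (
	"fmt"
	"time"
)

// StageRecord is one hypothetical row in a central metadata DB: enough to
// trace an artifact back through every CI/CD stage that produced it.
type StageRecord struct {
	Pipeline  string // e.g. "payments-service"
	Stage     string // e.g. "build", "test", "deploy-prod"
	Commit    string // VCS revision the stage ran against
	Artifact  string // digest of the artifact produced or consumed
	StartedAt time.Time
	Succeeded bool
}

// Validate is the central check: it doesn't care how the stage ran,
// only that the record is complete enough to reproduce and audit it.
func (r StageRecord) Validate() error {
	switch {
	case r.Pipeline == "" || r.Stage == "":
		return fmt.Errorf("record missing pipeline/stage identity")
	case r.Commit == "":
		return fmt.Errorf("stage %q has no commit; not reproducible", r.Stage)
	case r.Artifact == "":
		return fmt.Errorf("stage %q recorded no artifact digest", r.Stage)
	}
	return nil
}

func main() {
	rec := StageRecord{Pipeline: "payments-service", Stage: "build",
		Commit: "abc123", Artifact: "sha256:deadbeef",
		StartedAt: time.Now(), Succeeded: true}
	if err := rec.Validate(); err != nil {
		fmt.Println("reject:", err)
		return
	}
	fmt.Println("metadata ok; the tool that ran the stage is irrelevant")
}
```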
ttyprintk|1 year ago
jiggawatts|1 year ago
I had a look at the example glu deployment pipeline and I’m decidedly unimpressed.
Admittedly most of my criticism is related to the choice of Go as an implementation language: more than 80% of the code volume is error handling boilerplate!
Before the lovers of Go start making the usual arguments consider that in a high-level pipeline script every step is expected to fail in novel and interesting ways! This isn’t “normal code” where fallible external I/O interactions are few and far between, so error handling overhead is amortised over many lines of logic! Instead the code becomes all error handling with logic… in there… somewhere. Good luck even spotting it.
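To make that concrete - this is not actual glu code, just a contrived sketch of the shape, with hypothetical steps standing in for the real I/O:

```go
package main

import (
	"context"
	"fmt"
)

// Hypothetical pipeline steps - stand-ins for the fallible I/O that every
// deployment step really is (registry pushes, manifest commits, ...).
func buildImage(ctx context.Context) (string, error)        { return "app:v1", nil }
func pushImage(ctx context.Context, img string) error       { return nil }
func updateManifest(ctx context.Context, img string) error  { return nil }

// promote is three lines of actual logic, each wrapped in its own
// error-handling ceremony. Multiply by every step in a real pipeline.
func promote(ctx context.Context) error {
	img, err := buildImage(ctx)
	if err != nil {
		return fmt.Errorf("build image: %w", err)
	}
	if err := pushImage(ctx, img); err != nil {
		return fmt.Errorf("push image: %w", err)
	}
	if err := updateManifest(ctx, img); err != nil {
		return fmt.Errorf("update manifest: %w", err)
	}
	return nil
}

func main() {
	if err := promote(context.Background()); err != nil {
		fmt.Println("pipeline failed:", err)
		return
	}
	fmt.Println("promoted")
}
```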
Second, I don’t see the benefit of glu (specifically) over established IaC systems such as Pulumi — which is polyglot and allows the use of languages that aren’t mostly repetitive error handling ceremony.
This seems like an internally developed tool that suits the purposes of a single org “thrown over the fence” in the hope that the open source community will contribute to their private tool.
cdaringe|1 year ago
ocaml-ci is a canned recipe offered by the OCaml Platform, a la glu, and they really cut that boilerplate down. I almost never use it, but I had the same vibes as you did, and it made me think of an implementation glu may have something to learn from.
https://github.com/ocurrent/ocaml-ci?tab=readme-ov-file
rat87|1 year ago
> Pipeline definitions are scattered across multiple tools—GitHub Actions, Jenkins, ArgoCD, Kubernetes—and environments. This fragmentation leads to confusion, configuration drift, and duplicated effort.
So are they talking about some sort of meta language compiling into multiple yaml configs for the different environments or a single separate CI tool that has plugins and integrates with GitHub/gitlab/etc?
I do agree with them about the need for a real programming language. I hate the YAML in GitLab's config; it's very hard to predict how it will be interpreted. Things were much easier when I was scripting Jenkins, even though I didn't know or like Groovy at the time, than they are with GitLab.
cdaringe|1 year ago
Said kindly: no, they're not. They're just stating values here, imho, not impl details.
a1o|1 year ago
I don't think this is reasonable if you have a cross-platform (web, Android, iOS, macOS, Windows, Linux, FreeBSD, ...) app. Things won't be that clean; that only works if whatever you do is very simple. Otherwise there will be some patchwork to build and test across all platforms - there's just no way to run all of them locally on your single-platform computer, whatever that is. Honestly, a lot of what is there is not that useful. I don't need types in the pipeline when scripting Python for build integration logic; there's nothing types bring that is a must in that case.
unknown|1 year ago
[deleted]
vergessenmir|1 year ago
I've worked in this space for a long time and can't make head or tail of what glu is.
A motivating example would help, which I might have missed?
cdaringe|1 year ago
I look forward to seeing some matrix eval of impl strategies against these values
aarmenaa|1 year ago
> The Fix: Use a full modern programming language, with its existing testing frameworks and tooling.
I was reading the article and thinking to myself, "a lot of this is fixed if the pipeline is just a Python script." And really, if I were to start building a new CI/CD tool today, the "user facing" portion would be a Python library that contains helper functions for interfacing with the larger CI/CD system. Not because I like Python (I'd rather Ruby) but because it is ubiquitous and completely sufficient for describing a CI/CD pipeline.
I'm firmly of the opinion that once we start implementing "the power of real code: loops, conditionals, runtime logic, standard libraries, and more" in YAML then YAML was the wrong choice. I absolutely despise Ansible for the same reason and wish I could still write Chef cookbooks.
mqus|1 year ago
I don't think I agree.
I've now seen the "language" approach in Jenkins and the static YAML file approach in GitLab and Drone.
A lot of value is to be gained if the whole script can be analysed statically, before execution. E.g. UI elements can be there and the whole pipeline is visible before even starting it.
It also serves as a natural sandbox for the "setup" part, so we always know the script is interpreted in a finite (and short) time and no weird stuff can ever happen.
Of course, there are ways to combine the two (e.g. GitLab can generate and then trigger downstream pipelines from within the running CI), but the default is the static script. It also has the side effect that pipeline setup can't ever do stuff that cannot be debugged (because it runs _before_ the pipeline). But I concede that this is not that clear-cut. Both have advantages.
moltar|1 year ago
unknown|1 year ago
[deleted]
unknown|1 year ago
[deleted]
azeirah|1 year ago