marcosnils | 2 years ago
The purpose of Dagger isn't to replace your entire CI (Gitlab in your case). As you can see on our website (https://dagger.io/engine), it works with and integrates into all the current CI providers. Where Dagger really shines is in helping you and your teams move all the artisanal scripts encoded in YAML into actual code, run in containers through a fluent SDK in your language of choice. This unlocks a lot of benefits, which are detailed in our docs (https://docs.dagger.io/).
> Dagger has one very big downside IMO: It does not have native integration with Gitlab, so you end up having to use Docker-in-Docker and just running dagger as a job in your pipeline.
Dagger doesn't depend on Docker. We just conveniently use Docker (and other container runtimes) to bootstrap the Dagger Engine, since it's generally available pretty much everywhere by default. You can read more about the Dagger architecture here: https://github.com/dagger/dagger/blob/main/core/docs/d7yxc-o...
As you can see from our docs (https://docs.dagger.io/759201/gitlab-google-cloud/#step-5-cr...), we're leveraging the *default* Gitlab CI `docker` service to bootstrap the engine. There's no `docker-in-docker` happening there.
> It clumps all your previously separated steps into a single step in the Gitlab pipeline.
That's generally not how we recommend starting, and we should definitely improve our docs to reflect that. You can organize your Dagger pipelines into multiple functions and call them from separate Gitlab jobs, just as you're doing now. For example, you can do the following:
```.gitlab-ci.yml
build:
  script:
    # no funky untestable shell scripts here, but calls to *real* code
    - dagger run go run ci/main.go build

test:
  script:
    # no funky untestable shell scripts here, but calls to *real* code
    - dagger run go run ci/main.go test
```
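The `ci/main.go` those jobs invoke could be organized along these lines. This is a minimal sketch of the subcommand dispatch only: the pipeline bodies are stubs, and the Dagger SDK call shown in the comment is illustrative, not a complete working pipeline.

```go
// ci/main.go — sketch of one entrypoint serving multiple CI jobs.
// Each Gitlab job runs `dagger run go run ci/main.go <task>`.
package main

import (
	"fmt"
	"os"
)

// build would connect to the Dagger Engine and run the build in a container,
// e.g. (illustrative): client.Container().From("golang:1.22").WithExec(...)
func build() error {
	fmt.Println("running build pipeline")
	return nil
}

// test would run the test suite the same way, in its own container.
func test() error {
	fmt.Println("running test pipeline")
	return nil
}

func main() {
	tasks := map[string]func() error{
		"build": build,
		"test":  test,
	}
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: go run ci/main.go <build|test>")
		os.Exit(1)
	}
	task, ok := tasks[os.Args[1]]
	if !ok {
		fmt.Fprintf(os.Stderr, "unknown task %q\n", os.Args[1])
		os.Exit(1)
	}
	if err := task(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```

Because each task is a plain Go function, it can be unit-tested and run locally with the exact same code path the CI uses.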
This way, if your pipeline currently has a `build` and a `test` job, you keep the same structure.

> but is very annoying if you use Gitlab CI's built in parsing of junit/coverage/... files, since you now have extra layers of context to dig through when tests fail etc
You can still keep using those, too. The only thing you need to be aware of is exporting the required test/coverage output files from your Dagger pipelines so Gitlab can pick them up and do what it needs.
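Concretely, that could look like the following (a sketch: the `report.xml` filename is illustrative and assumes your Dagger pipeline exports the junit report to the job's working directory):

```.gitlab-ci.yml
test:
  script:
    # the Dagger pipeline is expected to write ./report.xml on the host
    - dagger run go run ci/main.go test
  artifacts:
    when: always
    reports:
      junit: report.xml
```

Gitlab then parses the junit report exactly as it would from any other job, so test summaries and merge-request annotations keep working.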
> but I've just written quick-and-dirty scripts to do that every time I've needed it.
This is what we're trying to improve. Those quick-and-dirty scripts generally start out very simple, but they become brittle and very difficult for other engineers to test. Yes, you could use Docker or any container-like tool for portability, but you'll probably end up writing more scripts to glue it all together.
Quoting one of our founders:
"Our mission is to help your teams keep the CI configuration as light and "dumb" as possible, by moving as much logic as possible into portable scripts. This minimizes "push and pray", where any pipeline change requires committing, pushing, and waiting for the proprietary CI black box to give you a green or red light. Ideally those scripts use containers for maximum reproducibility. Our goal at Dagger is to help democratize this way of creating pipelines, and making it a standard, so that an actual software ecosystem can appear, where devops engineers can actually reuse each other's code, the same way application developers can."