Dagger was something I looked into two or so years ago, before they got consumed by the LLM and AI agent hype. While the promise of being able to run the exact same CI workflows locally seemed excellent, it seemed that there was basically no way to be a Dagger user without buying into their Dagger Cloud product.
I ended up opting for CUE and GitHub Actions, and I'm glad I did as it made everything much, much simpler.
Can you explain/link to why you can't really use this without their cloud product? I'm not seeing anything at a glance, and this looks useful for a project of mine, but I don't want to be trapped by limitations that I only find out about after putting in weeks of work.
What I don't get is why someone would code in the terrible GitHub Actions DSL, which only runs on GitHub Actions and nowhere else, when there are so many other options that run perfectly fine if you just invoke them from GitHub Actions.
As someone who has used Dagger a lot (a former Daggernaut/ambassador who dropped off after the LLM features were announced and who was changing jobs at the time; I implemented it at a previous company across 95% of services and built the Rust SDK), the approach was and is amazing for building complex build chains.
It fills the gap where a Dockerfile is not enough and CI workflows are too difficult to debug or reason about.
I do have some current problems with it though:
1. I don't care at all about the LLM agent workflows. I get that it's possible, but the people who chose Dagger for what it was are not the same audience that runs agents like that. I can't choose Dagger currently, because I don't know if they align with my interests as an engineer solving a specific problem for where I work (delivering software, not running agents).
2. I advocated for modules before they were a thing, but I never implemented them. They are too much magic; I want to write code, not a DSL that looks like code. Dagger is already special in that regard, and modules take it a step too far. You can't find it in their docs anymore, but a Dagger workflow can be written with just a .go, .py or .rs file. Simply take in Dagger as a dependency and build your workflow (a rough sketch of what that looks like follows this list).
3. Too complex to operate. Dagger doesn't have runners currently, and it is difficult to run a production setup for CI yourself without running it in the actions themselves, which can be disastrous for build times: Dagger often leads you into using quite a few images, so having a cache is a must.
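To illustrate point 2: a minimal one-file workflow with the Go SDK, written from memory, so treat the exact method names as assumptions and check them against the SDK version you're on.

```go
// Sketch only: a single .go file that pulls in the Dagger SDK as a normal
// dependency and runs the test suite in a pinned toolchain image.
package main

import (
	"context"
	"fmt"
	"os"

	"dagger.io/dagger"
)

func main() {
	ctx := context.Background()

	// Connect to the local Dagger engine and stream progress to stderr.
	client, err := dagger.Connect(ctx, dagger.WithLogOutput(os.Stderr))
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Mount the repository and run the tests inside a container.
	out, err := client.Container().
		From("golang:1.22").
		WithDirectory("/src", client.Host().Directory(".")).
		WithWorkdir("/src").
		WithExec([]string{"go", "test", "./..."}).
		Stdout(ctx)
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```

No module scaffolding, no codegen; just `go run ./ci` locally or from whatever CI runner you already have.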
Dagger needs to choose and execute; not having runners, even when we were willing to throw money at them, was a mistake IMO. Love the tool, the team, the vision, but it is too distracted, magical and impatient to pick up at the moment.
Hi Kasper, good to see you here! Thank you for the detailed feedback.
1. Yes we got over-excited with the agent runtime use case. We stand by the LLM implementation because we never compromised on the integrity of Dagger's modular design. But our marketing and product priorities were all over the place. We're going to refocus on the original use case: helping you ship software, and more particularly building & testing it.
2. Modules have warts but they are essential. We will continue to improve them, and remain committed to them. Without this feature, you have to write a complete standalone program every time you want to build or test your software. It's too much overhead.
3. Yes you are right. We really thought we could co-exist with CI runners, and get good performance without reinventing the wheel. But for various reasons that turned out to not be the case. So we're going to ship vertically integrated runners, with excellent scalability and performance. DM me if you want early access :)
TLDR: yes we needed to choose and execute. We have, and we are.
Thank you again for the feedback.
I thought Dagger had/has a lot of potential to be "AWS-CDK for CI pipelines".
I.e. declaratively set up a web of CI/deployment tasks, based on Docker, with a code-first DSL, instead of the morass of copy-pasted (and yes, orb-based) CircleCI YAML files we have strewn about our internal repos.
But their DSL for defining your pipelines is ... golang? Like who would pick golang as "a friendly language for setting up configs".
The underlying tech is technically language-agnostic, just as aws-cdk's is (you can share cdk constructs across TypeScript/Python), but it's rooted in golang as the originating/first-class language, so imo will never hit aws-cdk levels of ergonomics.
That technical nit aside, I love the idea; I ran a few examples of it a year or so ago and was really impressed with the speed; I just couldn't wrap my head around "how can I make this look like cdk".
I was interested in the beginning for CI/CD, but then they tried to take a kind of "AI-oriented" view in order to ride the AI wave, and the value prop of their tool was completely muddied up...
Hi, I'm the founder of Dagger. I can't speak to the negativity, but if you're looking for a way to make your CI more portable, I recommend joining our Discord and asking our community directly about the pros and cons of using Dagger. Even if you don't end up using it, there are a lot of people there who are passionate about CI and can recommend other alternatives, in a more constructive and pragmatic way than you are getting here.
I used the old CUE-based version when it came out, and was really excited about it. I liked it, and enjoyed working with CUE, but the API was clunky and incomplete.
Then they completely abandoned not just the CUE frontend, but CUE altogether (while strenuously denying that they were doing so) for a GraphQL-based rewrite that focused on letting people use popular general-purpose languages to construct their workflows. The initial rollout of this was not feature complete and only supported imperative languages (Python and TypeScript, IIRC), which I didn't like.
Instead of porting everything over to all their new interfaces, I hopped off the train and rewrote all of our portable pipeline scripts in Nix, via Devenv. At the time, I'd never used Devenv before, but getting that work done still took maybe a tenth of the time or less. More than anything else, this was due to not having to fuck around with the additional overhead Docker entails (fussing with mount points, passing files from one stage to another, rebuilding images, setting up VMs... all of it). I got the reproducibility without the extra bullshit, and got to work with interfaces that have proven much more stable.
I still think there's a place for something like Dagger, focused just on CI, perhaps even still using Docker as a deployment/distribution strategy. But I no longer trust Dagger to execute on that. I think a proper external DSL (probably special-purpose but still Turing-complete, e.g., Nickel) is the right fit for this domain, and that it should support multiple means of achieving repeatability rather than just Docker (e.g., Nix on bare metal and Podman, to start). An option to work on bare metal via reproducible environment management tools like Nix, Guix, or Spack is a valuable alternative to burdensome approaches like containers.
I haven't looked at Dagger in several months, but the other big piece that is missing for portable CI workflows is a library that abstracts over popular CI platforms so you can easily configure pull/merge request pipelines without worrying about the implementation details like what environment variables each platform exposes to indicate source and target branch.
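Something like this hypothetical sketch would cover most of it; the environment variable names are the ones the platforms actually document, but the package, types, and function are made up for illustration:

```go
// Hypothetical ci package: normalize "what PR/MR am I building?" across platforms.
package ci

import "os"

// MergeRequest holds the source and target branches of the current pull/merge
// request, regardless of which CI platform the pipeline is running on.
type MergeRequest struct {
	SourceBranch string
	TargetBranch string
}

// Detect reads the platform-specific environment variables and returns the
// branches, plus false if we're not in a recognized PR/MR pipeline.
func Detect() (MergeRequest, bool) {
	switch {
	case os.Getenv("GITHUB_ACTIONS") == "true":
		return MergeRequest{
			SourceBranch: os.Getenv("GITHUB_HEAD_REF"),
			TargetBranch: os.Getenv("GITHUB_BASE_REF"),
		}, true
	case os.Getenv("GITLAB_CI") == "true":
		return MergeRequest{
			SourceBranch: os.Getenv("CI_MERGE_REQUEST_SOURCE_BRANCH_NAME"),
			TargetBranch: os.Getenv("CI_MERGE_REQUEST_TARGET_BRANCH_NAME"),
		}, true
	}
	return MergeRequest{}, false
}
```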
Idk anything about all the AI horseshit; I was off the Dagger bandwagon before they took that turn. I don't know if it's serious or a nominal play to court investors. But that kind of pivot is another reason not to build core infra on top of the work of startups imo. If the product is 70% of what you want, you have no way of knowing whether filling that 30% gap is something the maintainers will suddenly pivot away from, even if their current direction looks aligned with yours.
I'd recommend considering tools in this space only if (a) they're already close to 100% of what you need and (b) they're open-source. Maybe you can relax (a) if it's really easy to extend the codebase (I find this to be true for Devenv's Nix modules, for example.)
I loved the original promise of Dagger and it’s still 90% great.
But one flaw (IMO) is that it can't export artifacts and import them into other steps without breaking the cache.
E.g. if you provide the monorepo as input, and then in some step narrow your build to one specific dir, the cache still gets invalidated even when files change outside of that dir.
Which makes it extremely verbose, and a maintenance nightmare, to maintain multiple narrow inputs and keep all those paths up to date.
You can filter input directories before they are loaded, to avoid this. There's no reason you shouldn't be able to get precise cache invalidation in a large monorepo. If you want, DM me on the Dagger Discord and I'll help you out!
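Roughly like this, as a sketch with the Go SDK, assuming the usual `dagger.io/dagger` import and Connect boilerplate; the option names are from memory, so double-check HostDirectoryOpts against the SDK version you're on, and the paths are just examples:

```go
// filteredSource narrows what gets loaded from the host so that changes
// outside the listed paths don't invalidate the cache for this step.
func filteredSource(client *dagger.Client) *dagger.Directory {
	return client.Host().Directory(".", dagger.HostDirectoryOpts{
		Include: []string{"services/api/**", "libs/**", "go.mod", "go.sum"},
		Exclude: []string{"**/node_modules/**", "**/.git/**"},
	})
}
```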
I've tried it, but there's too much "sorry, not in the open-source edition, please buy the enterprise edition" stuff all around, which makes it quite unusable.
Works fine for us for gluing together a bunch of CI steps that would've been a pile of bash otherwise. Works well with depot.dev caches. We don't use it for anything AI either.
I use it to do builds in our monorepo. We got on board before the LLM trash features. The base design is OK, but there are things I'd do differently today if I'd known the build stuff would fade into the background during the LLM push.
I might be getting old, but the videos are too fast for me to understand. Why can't they just put up the full command text and its output instead of a video?
This looks interesting but I’m trying to understand it in more layman’s terms. Is it more about providing abstractions for LLMs to work within to do things?
I've never tried it. My first impression based purely on reading the homepage is it adds complexity to something I can already do with a Dockerfile and bash. What can it do that I can't already do more simply?
It does a pretty good job of caching and that does help speed up builds. I also run all of my end to end tests from it because I can coordinate secrets and clusters of containers through it.
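For a sense of what that looks like, a rough sketch with the Go SDK (from memory, so treat the exact method names as assumptions): a throwaway Postgres bound as a service next to the test container, plus an API key passed in as a secret.

```go
// Sketch: end-to-end tests with a service container and a secret, all
// coordinated through the Dagger engine rather than CI-specific YAML.
package main

import (
	"context"
	"fmt"
	"os"

	"dagger.io/dagger"
)

func main() {
	ctx := context.Background()
	client, err := dagger.Connect(ctx, dagger.WithLogOutput(os.Stderr))
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Secret sourced from the host environment; never baked into a layer.
	apiKey := client.SetSecret("API_KEY", os.Getenv("API_KEY"))

	// Throwaway database the tests reach as "db:5432".
	db := client.Container().
		From("postgres:16").
		WithEnvVariable("POSTGRES_PASSWORD", "test").
		WithExposedPort(5432).
		AsService()

	out, err := client.Container().
		From("golang:1.22").
		WithDirectory("/src", client.Host().Directory(".")).
		WithWorkdir("/src").
		WithServiceBinding("db", db).
		WithSecretVariable("API_KEY", apiKey).
		WithExec([]string{"go", "test", "-tags=e2e", "./..."}).
		Stdout(ctx)
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```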
If I understand correctly, this is essentially a more composable way to write Dockerfiles? That alone is a very welcome improvement. They would do themselves a big favor if they were clearer about that in their marketing, instead of beating around the bush with all kinds of other terminology and claims of redefining foundations.
If I already have a Dockerfile that doesn’t need composition, how does this help me vs being a small cosmetic improvement over ”docker build” command line?
If you think Dagger is a CI/CD enabler, it is, but it's more than that: module and function orchestration is the much more basic first principle it embraces. The team has iterated from the CI/CD narrative to something much more powerful.
A lot of the comments here feel like they're disappointed that this is a "Docker with unnecessary LLM crap thrown in" when I think what they're really going for is more "LLM workflows with a higher degree of observability and sanity".
I think a more interesting point of comparison is the Claude Code GitHub Action, Copilot code reviews, etc.
I think it started as some kind of CI/CD tool, then they jumped on the AI hype and started using it to make it easy to run agents in containers... perhaps to do automated actions on CI/CD pipelines that use agents (e.g. trying to solve some minor bugs automatically when you push to a branch, etc.).
Although I'm not sure how much value that adds? It's not so hard to just create a container and launch an agent in it.
The whole interesting thing was using actual programming languages for Docker builds, which I think was what they initially tried to do, but now it's a bit incomprehensible... I guess conceptually Dagger relates to Dockerfile a bit like Pulumi relates to Terraform?
What else could be used to abstract your CI/CD away from the launcher (Jenkins, Argo Workflows, GitHub Actions, etc.)?
Then... it wasn't. The more I read, the less I ever want to see this again. The LLM train has got to end at some point.
/s I've never heard of Dagger the DI framework but I have heard of this Dagger. Names will overlap sometimes and it's not a big deal.
They don't seem to have jumped on the AI hype (yet?)...
https://www.windmill.dev/
Without the LLM bits, this is basically like Bazel or buck2, right?
But the marketing heavily focuses on LLM stuff, to the point of confusing everyone.