
Show HN: Digger – Open Source Terraform automation and collaboration tool

91 points | ujnproduct | 2 years ago | github.com

23 comments


thinkmassive|2 years ago

One of the major security issues with running terraform in your CI/CD pipeline is that it usually needs admin permissions to your entire cloud environment. To avoid this you need the pipeline to pass parameters to an internal process that actually applies the changes.

Digger makes it sound like it might address this:

> Digger runs terraform natively in your CI. This is: Secure, because cloud access secrets aren't shared with a third-party

From the Github+AWS demo:

> 4. Add environment variables into your Github Action Secrets (cloud keys are a requirement since digger needs to connect to your account for coordinating locks) AWS_ACCESS_KEY_ID & AWS_SECRET_ACCESS_KEY

It sure looks like AWS admin credentials are shared with Github, and also available to anything else in the diggerhq/digger action.
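A common mitigation (not something the demo above describes) is GitHub's OIDC federation, which lets the workflow exchange a short-lived identity token for temporary AWS credentials instead of storing static keys in Action secrets. A minimal sketch, assuming a pre-created IAM role that trusts GitHub's OIDC provider (the role ARN and region here are placeholders):

```yaml
# Sketch: assume a short-lived AWS role via OIDC instead of storing
# long-lived AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY secrets.
name: terraform-plan
on: pull_request

permissions:
  id-token: write   # required to request the OIDC token
  contents: read

jobs:
  plan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/terraform-ci  # hypothetical role
          aws-region: us-east-1
      - run: terraform init && terraform plan
```

The role's IAM policy then bounds what any job (or compromised Action) can do, and the credentials expire on their own.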

sausagefeet|2 years ago

> It sure looks like AWS admin credentials are shared with Github, and also available to anything else in the diggerhq/digger action

I am a co-founder of Terrateam[0], which is a Terraform CI/CD tool as well. At the end of the day, you need to execute something to perform these operations, and having that component open source is important for auditing purposes. For Terrateam, we lean heavily into GitHub Actions, so GitHub is at least managing any secrets and runs. One challenge is that users can pin the Action we publish to a specific version; we update it regularly, and communicating to customers that they should update their pin is a challenge.

[0] https://terrateam.io

oneplane|2 years ago

The only IAM-safe way is to run context-aware Terraform plans so the environments can never CRUD out of scope. For example, an application-centric approach might use an ABAC constraint and temporary credentials (perhaps via OIDC, though most OIDC integrations lack local privilege separation; instance roles are far more secure), making sure events are bound to the environment they are allowed to execute in.

This does require something that should essentially be embedded in your environment or account vending machine, otherwise it becomes very cumbersome to maintain.
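The ABAC constraint described above can be sketched as an IAM policy condition that only allows actions on resources whose tag matches the caller's principal tag. This is a minimal sketch assuming a hypothetical `env` tag on both the principal and the resources (note that not every AWS service honors `aws:ResourceTag`):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowOnlyOwnEnvironment",
      "Effect": "Allow",
      "Action": "ec2:*",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:ResourceTag/env": "${aws:PrincipalTag/env}"
        }
      }
    }
  ]
}
```

With temporary credentials tagged per environment (e.g. via session tags), a plan for `staging` can't touch `prod` resources even if the pipeline itself is compromised.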

evantbyrne|2 years ago

Any CD is going to require some kind of authentication key. To minimize the surface area of a potential leak, create a user in AWS for the tool, only grant it access to the resources needed, and then create a key for that user to place in your CI. You should also enable audit trails in your AWS account so you can monitor for unusual activity.
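As a sketch of that scoped user, an IAM policy limited to just a Terraform state bucket and a lock table might look like the following (the bucket name, table name, and account ID are placeholders; a real policy would also need permissions for whatever resources Terraform manages):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "StateBucketOnly",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::my-terraform-state",
        "arn:aws:s3:::my-terraform-state/*"
      ]
    },
    {
      "Sid": "LockTableOnly",
      "Effect": "Allow",
      "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:DeleteItem"],
      "Resource": "arn:aws:dynamodb:*:123456789012:table/terraform-locks"
    }
  ]
}
```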

oneplane|2 years ago

I'm surprised nobody has mentioned Atlantis yet. Running bare terraform in CI is a bad idea (to the extent that running an 'expect' script for an interactive tool is a bad idea), and when you consider the impact it can have (both on resources and on escalation) it should be out-of-band anyway.

izalutski|2 years ago

Atlantis was a great tool back in the day and still works well in most scenarios. The main issue with it is that it also takes on running the jobs (as in, the Terraform binary runs on the same VM Atlantis runs on), which makes it similar to Jenkins and other first-generation CI systems.

Companies that use Atlantis at scale (e.g. Lyft) felt the need to fork it and move to a scalable compute backend instead, e.g. Temporal. At which point you've basically got a DIY in-house CI.

Our view is that it's best to keep these matters separate. The CI part (compute, jobs, logs, etc.) is a solved problem. What's unsolved for Terraform is the state-aware logic of when and how to run those jobs. It's all about the orchestrator, really.

lantry|2 years ago

I was initially very interested in Digger, because I need something like Atlantis, but the thought of a web-accessible server with owner-level access to my project seemed scary. Having everything in CI/CD seemed like a great solution. However, when I read the Digger docs, I discovered that it too has a publicly accessible server, which gets auto-deployed when you first run Digger.

1. I don't like the idea of the tool creating resources I didn't explicitly tell it to create

2. I don't like the idea of a public endpoint for someone to pwn and get owner-level access to all my stuff.

It would be nice if the docs explained what the serverless backend thing does (besides the vague comment about handling webhooks), and it would be nice if there was an option that didn't require the public backend, even if it means slightly degraded functionality. (GitHub Actions can be triggered by PR opened, PR updated, comment created, comment edited, merge to main, and many other things. Seems to me like that should be enough?)

https://docs.digger.dev/readme/how-it-works
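The event triggers listed above map onto GitHub Actions workflow syntax roughly like this (a sketch of the standard events, not Digger's actual configuration):

```yaml
# Sketch: the triggers mentioned above, as GitHub Actions events
on:
  pull_request:
    types: [opened, synchronize]   # PR opened / PR updated
  issue_comment:
    types: [created, edited]       # comment created / comment edited
  push:
    branches: [main]               # merge to main
```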

izalutski|2 years ago

All valid points! Thank you!

We were initially completely backend-less; but then it increasingly became apparent that a central orchestrator is unavoidable.

Rationale here: https://diggerdev.notion.site/Why-digger-introduces-an-orche...

In hindsight, it makes sense that literally every single other tool in the space has a central backend that orchestrates jobs. There's a good reason for that.

To address security / access concerns, you can self-host the orchestrator, use OIDC, or both.

ushakov|2 years ago

I misread it as Dagger, the CI/CD tool (https://dagger.io)

izalutski|2 years ago

Yeah naming is fun

The most fun thing is, Digger + Dagger could be a great combo! We haven't explored it properly yet, but in theory it shouldn't be any different from adding another CI provider; we already support GitHub Actions, GitLab CI, and Azure DevOps.

_0xdd|2 years ago

I saw Digger and got excited for a second...

https://en.wikipedia.org/wiki/Digger_(video_game)

iKlsR|2 years ago

I don't know how, but in the seconds it took me to read the title and click the link, my brain went down this crazy hole of a 3D tool that could take some point cloud data or image scans and allow you to "dig" through virtual earth and shape the land or something... oh boy.

seedie|2 years ago

One of the main reasons for us to use a terraform collaboration tool is to easily manage state files.

Would be awesome if they find a way to integrate state management.

izalutski|2 years ago

Thanks!! Great point; for now we're relying on S3 + DynamoDB, which many people prefer anyway; but state management is on the roadmap, we'll get to it soon

Tracking here: https://github.com/diggerhq/digger/issues/206
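For reference, the S3 + DynamoDB setup mentioned above is the standard Terraform remote backend configuration; a minimal sketch with placeholder bucket and table names:

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"   # placeholder state bucket
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"      # placeholder lock table
    encrypt        = true
  }
}
```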

And btw contributions very welcome, we're a small team so every bit helps, even if it's just filing or labeling an issue