top | item 46844483

muvlon | 28 days ago

If you're interacting with stateful systems (which you usually are with this kind of command), --dry-run can still have a race condition.

The tool tells you what it would do in the current situation, you take a look and confirm that that's alright. Then you run it again without --dry-run, in a potentially different situation.

That's why I prefer Terraform's approach of having a "plan" mode. It doesn't just tell you what it would do but does so in the form of a plan it can later execute programmatically. Then, if any of the assumptions made during planning have changed, it can abort and roll back.

As a nice bonus, this pattern gives a good answer to the problem of having "if dry_run:" sprinkled everywhere: You have to separate the planning and execution in code anyway, so you can make the "just apply immediately" mode simply execute(plan()).
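A minimal Python sketch of that separation (the names and the remove-files example are illustrative, not from any real tool):

    import glob
    import os

    def plan(patterns):
        """Planning phase: compute the concrete actions, touch nothing."""
        return [("remove", path) for pattern in patterns for path in glob.glob(pattern)]

    def execute(actions):
        """Execution phase: carry out a previously computed plan."""
        for op, path in actions:
            if op == "remove":
                os.remove(path)

    actions = plan(["/tmp/scratch/*.log"])
    # Dry run is just "plan and show"; apply-immediately is execute(plan(...)).
    for op, path in actions:
        print(f"would {op} {path}")

Because the plan is plain data, showing it, diffing it, or executing it are all operations on the same value, with no `if dry_run:` branches inside the action logic.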

jasode|28 days ago

>That's why I prefer Terraform's approach of having a "plan" mode. It doesn't just tell you what it would do but does so in the form of a plan it can later execute programmatically. Then, if any of the assumptions made during planning have changed, it can abort and roll back.

Not to take anything away from your comment, but just to add a related story... the previous big AWS outage involved an unforeseen race condition between their DNS Planner and DNS Enactor:

>[...] Right before this event started, one DNS Enactor experienced unusually high delays needing to retry its update on several of the DNS endpoints. As it was slowly working through the endpoints, several other things were also happening. First, the DNS Planner continued to run and produced many newer generations of plans. Second, one of the other DNS Enactors then began applying one of the newer plans and rapidly progressed through all of the endpoints. The timing of these events triggered the latent race condition. When the second Enactor (applying the newest plan) completed its endpoint updates, it then invoked the plan clean-up process, which identifies plans that are significantly older than the one it just applied and deletes them. At the same time that this clean-up process was invoked, the first Enactor (which had been unusually delayed) applied its much older plan to the regional DDB endpoint, overwriting the newer plan. The check that was made at the start of the plan application process, which ensures that the plan is newer than the previously applied plan, was stale by this time due to the unusually high delays in Enactor processing. [...]

previous HN thread: https://news.ycombinator.com/item?id=45677139

IanCal|28 days ago

Overkill I’m sure for many things, but I’m curious whether there’s a TLA+-style solution for this sort of thing. It feels like there could be, although it depends on how well modelled things are (I’m also aware this is a 30-second thought and lots of better-qualified people work on this full time).

nlehuen|28 days ago

And just like that, you find yourself implementing a compiler (specs to plan) and a virtual machine (plan to actions)!

lelanthran|28 days ago

> And just like that, you find yourself implementing a compiler (specs to plan) and a virtual machine (plan to actions)!

Not just any compiler, but a non-typesafe, ad-hoc, informally specified grammar with a bunch of unspecified or under-specified behaviour.

Not sure if we can call this a win :-)

jrockway|28 days ago

This is why I think fields like devops benefit from a traditional computer science education. Once you see the pattern, whatever project you were assigned looks like something you've done before. And your users will appreciate the care and attention.

bbkane|28 days ago

I think you're already doing that? The only thing that's added is serializing the plan to a file and then deserializing it to make the changes.

schindlabua|28 days ago

I was thinking that he's describing an initial algebra for a functor (≈ the AST) and an F-algebra for evaluation. But I guess those are just different words for the same things.
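Concretely, that framing is "same plan tree, two folds over it". A hedged Python sketch (the `Remove`/`Seq` shapes are invented for illustration):

    from dataclasses import dataclass

    @dataclass
    class Remove:
        path: str

    @dataclass
    class Seq:
        steps: list

    def fold(plan, on_remove, on_seq):
        """Structural recursion over the plan: the two callbacks are the 'algebra'."""
        if isinstance(plan, Remove):
            return on_remove(plan.path)
        return on_seq([fold(s, on_remove, on_seq) for s in plan.steps])

    # "Dry-run algebra": render the plan as text instead of executing it.
    def describe(plan):
        return fold(plan, lambda p: f"would remove {p}", "\n".join)

    plan = Seq([Remove("/tmp/a.txt"), Remove("/tmp/b.txt")])
    print(describe(plan))

An "execution algebra" would be a second call to `fold` with `os.remove` in place of the string formatter, leaving the plan structure untouched.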

Jolter|28 days ago

I like that idea! For an application like Terraform, Ansible or the like, it seems ideal.

For something like in the article, I’m pretty sure a plan mode is overkill though.

Plan mode must involve a domain-specific language or data structure of some sort, which the execution mode interprets and executes. I’m sure it would add a lot of complexity to a reporting tool where data is only collected once per day.

muvlon|28 days ago

No need to overthink it. In any semi-modern language you can (de)serialize anything to and from JSON, so it's really not that hard. The only thing you need to do is have a representation for the plan in your program. Which I will argue is probably the least error-prone way to implement --dry-run anyway (as opposed to sprinkling branches everywhere).
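For instance, if the plan is plain data, the whole mechanism is a couple of lines (a sketch; the exact shape of the plan is up to you):

    import json

    # A plan as plain data: a list of steps.
    plan = [{"op": "remove", "path": "/tmp/old/a.txt"},
            {"op": "remove", "path": "/tmp/old/b.txt"}]

    # Dry run: serialize the plan for the user (or a later apply step) to inspect.
    serialized = json.dumps(plan, indent=2)

    # Apply: deserialize and walk the exact steps that were reviewed.
    for step in json.loads(serialized):
        print(f"would {step['op']} {step['path']}")

The round trip through JSON is what guarantees that what you reviewed is what gets executed.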

d7w|28 days ago

It's not strictly related to the original theme, but I want to mention this.

Ansible's implementation is okay, but not perfect (and this is difficult to implement properly). For things like file changes it works, but if a task installs a package and a later task relies on it, the --check run will fail. So I find myself adding conditions like "is this a --check run?"

Ansible is treated as an idempotent tool, which it's not. If I delete a package from the list, it will pollute the system until I write a set of "tear-down" tasks.

Probably, Nix is a better alternative.

GeneralMaximus|28 days ago

Yes! I'm currently working on a script that modifies a bunch of sensitive files, and this is the approach I'm taking to make sure I don't accidentally lose any important data.

I've split the process into three parts:

1. Walk the filesystem, capture the current state of the files, and write out a plan to disk.

2. Make sure the state of the files from step 1 has not changed, then execute the plan. Capture the new state of the files. Additionally, log all operations to disk in a journal.

3. Validate that no data was lost or unexpectedly changed using the captured file state from steps 1 and 2. Manually look at the operations log (or dump it into an LLM) to make sure nothing looks off.

These three steps can be three separate scripts, or three flags to the same script.
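A rough Python sketch of the state-capture part of those steps (using content hashes; the details here are my assumption, not the commenter's actual script):

    import hashlib

    def capture_state(paths):
        """Steps 1 and 2: record a content hash for each file."""
        state = {}
        for p in paths:
            with open(p, "rb") as f:
                state[p] = hashlib.sha256(f.read()).hexdigest()
        return state

    def assert_unchanged(before, paths):
        """Step 2 guard: refuse to execute if anything moved since planning."""
        if capture_state(paths) != before:
            raise RuntimeError("files changed since the plan was written; aborting")

Step 1 would persist `capture_state(paths)` to disk alongside the plan; step 2 calls `assert_unchanged` before executing; step 3 diffs the before/after states against the plan's expected changes.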

richstokes|28 days ago

I think it's configurable, but my experience with terraform is that by default, `terraform apply` refreshes state, which seems tantamount to running a new plan. That is, it's not simply executing what's in the plan; it's effectively running a fresh plan and using that. The plan is more like a preview.

GauntletWizard|28 days ago

That is the default, but the correct (and poorly documented and supported) way to use terraform is to save the plan and re-use it when you apply. See the -out parameter to terraform plan, and then never apply again without it.
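That workflow uses the standard terraform CLI flags and looks roughly like this:

    # Write the plan to a file instead of just printing it.
    terraform plan -out=tfplan

    # Review it, then apply exactly that saved plan. If the real state has
    # drifted since planning, terraform refuses to apply the stale plan.
    terraform apply tfplan

Passing the saved plan file to `apply` is what turns the plan from a preview into a commitment.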

thundergolfer|28 days ago

Totally agree, and this is covered in an (identically named?) Google Research blog [1].

Just last week I was writing a demo-focused Python file called `safetykit.py`, which has its first demo as this:

    def praise_dryrun(dryrun: bool = True) -> None:
        ...
The snippet I have that demonstrates the plan-then-execute pattern is this:

    import glob
    import os

    def gather(paths):
        # Planning phase: resolve the glob patterns, touch nothing.
        files = []
        for pattern in paths:
            files.extend(glob.glob(pattern))
        return files

    def execute(files):
        # Execution phase: act on the gathered list.
        for f in files:
            os.remove(f)

    files = gather([os.path.join(tmp_dir, "*.txt")])
    if dryrun:
        print(f"Would remove: {files}")
    else:
        execute(files)
I introduced dry-run at my company and I've been happy to see it spread throughout the codebase, because it's a coding practice that more than pays for itself.

[1] https://www.gresearch.com/news/in-praise-of-dry-run/

wrxd|28 days ago

G-Research is a trading firm, not Google research

HackerThemAll|28 days ago

> That's why I prefer Terraform's approach of having a "plan" mode. It doesn't just tell you what it would do but does so in the form of a plan it can later execute programmatically. Then, if any of the assumptions made during planning have changed, it can abort and roll back.

And how do you imagine doing that for the "rm" command?

zbentley|27 days ago

In my case, I’d use a ZFS snapshot. Many equivalent tools exist on different OSes and filesystems as well.

scott_w|28 days ago

I had a similar (but not as good) thought, which was to separate the action from the planning in code, then inject the action system. So --dry-run would pass a ConsoleOutput() action interface, but without it you'd pass a LiveExecutor() (I'm sure there's a better name).

Assuming our system is complex enough, I guess it sits between `if dry_run` and `execute(plan())` in complexity.
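A minimal Python sketch of that injection idea (`ConsoleOutput` and `LiveExecutor` are the commenter's hypothetical names, not a real API):

    import os

    class ConsoleOutput:
        """Dry-run action system: just describes each operation."""
        def remove(self, path):
            print(f"Would remove: {path}")

    class LiveExecutor:
        """Real action system: actually performs each operation."""
        def remove(self, path):
            os.remove(path)

    def cleanup(files, actions):
        # The caller injects either ConsoleOutput() or LiveExecutor(),
        # so this logic never branches on a dry_run flag.
        for f in files:
            actions.remove(f)

    cleanup(["a.txt", "b.txt"], ConsoleOutput())

The dry-run decision is made exactly once, at the call site, rather than being sprinkled through the business logic.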