top | item 46586893

dividedbyzero | 1 month ago

They don't even really do that IME. If I ask Claude or ChatGPT to generate terraform for non-trivial but by no means obscure or highly unusual setups, they almost invariably hallucinate part of the answer, even when a documented solution exists that isn't even that difficult. Maybe vibe coding JavaScript is that much better, or I'm just hopeless at prompting, but I feel a few dozen lines of fairly straightforward terraform config shouldn't require elaborate prompt setups; at that point I can save some brain cycles by just writing it myself.


JohnMakin | 1 month ago

For better or for worse, I have spent a large amount of time in terraform since 0.13, and I can confidently say LLMs are very, very bad at it. My favorite is when one invents internal functions (that look suspiciously like Python) which do not exist; even when corrected, it will still keep going back to them. A year or two ago there were bad problems with hallucinated resource field names, but I haven't seen that as much these days.
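A sketch of that failure mode (the hallucinated line is invented for illustration): HCL has no method-call syntax on values, only prefix-style built-in functions, so Python-flavored calls like the commented-out line below simply don't parse.

```hcl
variable "names" {
  type    = list(string)
  default = ["app", "prod"]
}

# The kind of Python-flavored call an LLM tends to invent:
#   name = var.names.join("-")   # invalid; HCL has no method-call syntax

locals {
  # What Terraform actually provides: join(separator, list)
  name = join("-", var.names)
}
```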

It is, however, pretty good at refactoring given a set of constraints and an existing code base. It is decent at spitting out boilerplate for well-known resources (such as AWS), but then again, those boilerplate examples are mostly coming straight from the documentation. The nice thing about refactoring with LLMs in terraform is that even if you vibe it, the refactor is trivially verifiable, because the plan should show no changes, or exactly the changes you expect.
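A minimal sketch of that verification loop, assuming a simple resource rename (the `ami` and resource names are placeholders). Terraform 1.1+ lets you record the rename in a `moved` block so state follows the new address instead of being destroyed and recreated:

```hcl
# Before the refactor this resource was addressed as aws_instance.web.
resource "aws_instance" "app" {
  ami           = "ami-12345678"   # placeholder value for illustration
  instance_type = "t3.micro"
}

# Record the rename so the refactor is a pure state move, not a replace.
moved {
  from = aws_instance.web
  to   = aws_instance.app
}
```

Running `terraform plan -detailed-exitcode` then makes the check mechanical: exit code 0 means no changes (the refactor was clean), 2 means there are pending changes to inspect.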