abrichr | 5 months ago
I’ve found LLMs just as useful for the "thankless" layers (e.g. tests, docs, deployment).
The real failure mode is letting AI flood the repo with half-baked abstractions without a playbook. It's helpful to have the model review the existing code and plan out the approach before writing any new code.
The leverage may be in using LLMs more systematically across the lifecycle, including the grunt work the author says remains human-only.
kevin42 | 5 months ago
It's also great for things that aren't creative, like 'implement a unit test framework using google test and cmake, but don't actually write the tests yet'. That type of thing saves me hours and hours. It's something I rarely do, so I can't just start editing my cmake and test files from memory; I'd be looking up documentation and writing a lot of boilerplate that's necessary but time-consuming.
With LLMs, I usually get what I want quickly. If it's not what I want, a bit of time reviewing what it did and where it went wrong usually tells me what I need to give it a better prompt.
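For context, a minimal scaffold of the kind that prompt asks for might look like this (project name, version pin, and file paths are placeholders; the FetchContent pattern follows the GoogleTest quickstart):

```cmake
cmake_minimum_required(VERSION 3.14)
project(my_project)

# Pull in GoogleTest at configure time. The version tag here is
# just an example pin, not a recommendation.
include(FetchContent)
FetchContent_Declare(
  googletest
  URL https://github.com/google/googletest/archive/refs/tags/v1.14.0.zip
)
# On Windows: keep GoogleTest from overriding the parent
# project's compiler/linker settings (from the GoogleTest docs).
set(gtest_force_shared_crt ON CACHE BOOL "" FORCE)
FetchContent_MakeAvailable(googletest)

enable_testing()

# An almost-empty test target -- the "don't actually write the
# tests yet" part. tests/main_test.cpp can start as just
# #include <gtest/gtest.h>; GTest::gtest_main supplies main().
add_executable(unit_tests tests/main_test.cpp)
target_link_libraries(unit_tests GTest::gtest_main)

include(GoogleTest)
gtest_discover_tests(unit_tests)
```

After that, `cmake -S . -B build && cmake --build build && ctest --test-dir build` should configure, build, and run the (empty) suite, and new tests are just more `TEST(...)` blocks in the tests/ directory.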