top | item 46655730

cuu508 | 1 month ago

IMO it's completely the other way around.

Shell scripts can be audited. The average user may not do it due to laziness and/or ignorance, but it is perfectly doable.

On the other hand, how do you make sure your LLM, a non-deterministic black box, will not misinterpret the instructions in some freak accident?

nobodywillobsrv | 1 month ago

How about both worlds?

Instead of asking the agent to execute it for you, you ask the agent to write an install.sh based on the install.md?

Then you get both worlds: you can audit whatever you want before deciding whether to run it.
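That generate-then-audit workflow might look something like this (a minimal sketch; the script contents and file names are illustrative stand-ins, since the real install.sh would come from the agent reading install.md):

```shell
#!/bin/sh
# Stand-in for the agent's output: in practice the agent would
# generate install.sh from the repo's install.md.
cat > install.sh <<'EOF'
#!/bin/sh
echo "installing example-tool"
mkdir -p "$HOME/.local/bin"
EOF

# Audit step: read the script before anything executes.
cat install.sh    # or: less install.sh

# Only run it once you are satisfied with what it does.
sh install.sh
```

The key point is the pause between generation and execution: the non-deterministic step produces an artifact you can inspect, and only the inspected artifact ever runs.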

chme | 1 month ago

So... what you are saying is that we don't need 'install.md', because a developer can just use an LLM to generate an 'install.sh', validate it, and put it into the repo?

Good idea. That seems sensible.

Bonus: the LLM is only used once, not every time anyone wants to install the software. With some risk of having to regenerate if the output is nonsensical.

franga2000 | 1 month ago

And since LLM tokens are expensive and generation is slow, how about we cache that generated code on the server side, so people can just download the pre-generated install.sh? And since not everyone can be bothered to audit LLM code, the publisher can audit and correct it before publishing, so we're effectively caching and deduplicating the auditing work too.

catlifeonmars | 1 month ago

This is much better. Plus you get reproducibility and can leverage the AI for more repeat performances without expending more tokens.

vrighter | 1 month ago

Then how about you cut out the LLM middleman and just audit the bash scripts already provided?