(no title)
kahnclusions | 3 months ago
It’s for a similar reason that they can never be trusted to handle user input.
They are probabilistic generators and have no real delineation between system instructions and user input.
It’s like I wrote a JavaScript function where I concatenated the function parameters together with the function body, passed it to eval() and said YOLO.
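Roughly, the analogy in code (toy example, made-up function, purely illustrative):

    // The "system prompt" and the untrusted parameter get concatenated
    // into one string and eval()'d, so the caller can smuggle in code.
    function greet(name) {
      const program = `
        const trusted = "Hello, ";         // trusted instructions
        console.log(trusted + "${name}");  // untrusted input, spliced straight in
      `;
      eval(program); // nothing separates instructions from data here
    }

    greet('world');                                      // Hello, world
    greet('x"); console.log("injected code ran"); //');  // injected statement runs too

Prompt injection is the same failure mode, just with natural language instead of quotes: the model only ever sees one big string.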
viraptor | 3 months ago
Sandboxing. The LLM shouldn't be able to run actions that affect anything outside of your project, and ideally the results shouldn't auto-commit outside of that directory. Then you can YOLO as much as you want.
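A minimal sketch of the path-check part of that idea (made-up paths and function name; on its own this is not a real sandbox, which still wants OS-level isolation like a container):

    const path = require("path");

    const PROJECT_ROOT = path.resolve("/home/me/my-project"); // hypothetical project dir

    // Refuse any tool call whose target resolves outside the project directory.
    function resolveInsideProject(target) {
      const resolved = path.resolve(PROJECT_ROOT, target);
      if (resolved !== PROJECT_ROOT && !resolved.startsWith(PROJECT_ROOT + path.sep)) {
        throw new Error("Blocked: " + target + " escapes the sandbox");
      }
      return resolved;
    }

    resolveInsideProject("src/index.js");      // ok
    resolveInsideProject("../../etc/passwd");  // throws

A real setup would also run the agent's shell commands inside a container, but the point is the same: the blast radius ends at the project directory.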
smaudet | 3 months ago
I.e. this is just not safe, period.
"I stuck it outside the sandbox because it told me how, and it murdered my dog!"
Seems like a somewhat inevitable result of trying to misapply this particular control to it...
dfedbeef | 3 months ago