
Fosowl | 10 months ago

No, it can't, because we check the bash the AI tries to execute against a list of patterns for dangerous commands. Also, all commands are executed within a folder specified in the configuration file, so you can choose which files it has access to. However, we currently have no containerization, meaning that code execution, unlike bash, could be harmful. I'm thinking about improving safety by running all code/commands inside Docker and then having some kind of file transfer, upon user validation, once a task is done.
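A naive version of that kind of pattern check (purely illustrative, not the project's actual code; the pattern list here is hypothetical) might look like:

```python
import re

# Hypothetical blacklist of dangerous command patterns (illustrative only).
DANGEROUS_PATTERNS = [
    r"\brm\s+-rf\b",     # recursive force delete
    r"\bmkfs\b",         # reformatting a filesystem
    r"\bdd\s+if=",       # raw disk writes
    r">\s*/dev/sd",      # redirecting output onto a block device
]

def is_dangerous(command: str) -> bool:
    """Return True if the command matches any blacklisted pattern."""
    return any(re.search(p, command) for p in DANGEROUS_PATTERNS)

print(is_dangerous("rm -rf /"))  # True: matches the blacklist
print(is_dangerous("ls -la"))    # False: benign command passes
```

As the rest of the thread points out, string-level matching like this is easy to evade, since the shell can construct the same command in many forms the regex never sees.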


rank0 | 10 months ago

I guarantee you these controls are breakable the way you describe them.

That's okay though! I realize this is a prototype/hobbyist solution which is unlikely to be attacked by a skilled adversary. Love the project!

If later on you want this to become safe for sensitive workloads you need to be way less confident. Just my 2¢.

Fosowl | 10 months ago

I know. It's for local use and isn't hosted anywhere, so the only adversary is yourself :)

hansmayer | 10 months ago

What if the agent were to create an alias to 'rm -rf' on my machine? I guess that would not have been blocked by your blacklist, right?

Fosowl | 10 months ago

Well, it can't use a text editor, so it would have to use echo 'rm -rf' with a shell redirection to a file, which would be detected.