ogrisel | 3 months ago
We need local sandboxing for FS and network access (e.g. via Linux namespaces/`cgroups`, or similar mechanisms on non-Linux OSes) to run these kinds of tools more safely.
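On Linux, a minimal sketch of this with bubblewrap might look like the following (assumptions: `bwrap` is installed, `some-agent-cli` is a placeholder for whatever tool you're running, and the exact bind mounts depend on what it actually needs):

```sh
# Expose read-only system dirs plus a writable bind of the current
# project; everything else on the host stays invisible.
# --unshare-all also unshares the network namespace, so the tool
# gets no network access at all.
bwrap \
  --ro-bind /usr /usr \
  --symlink usr/bin /bin \
  --symlink usr/lib /lib \
  --symlink usr/lib64 /lib64 \
  --proc /proc \
  --dev /dev \
  --tmpfs /tmp \
  --bind "$PWD" /work \
  --chdir /work \
  --unshare-all \
  --die-with-parent \
  some-agent-cli
```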
cube2222|3 months ago
In practice I just use a Docker container when I want to run Claude with `--dangerously-skip-permissions` [0].
[0]: https://code.claude.com/docs/en/sandboxing
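A minimal sketch of that setup (the image tag, mount paths, and npm package invocation are assumptions for illustration, not from the comment):

```sh
# Mount only the current project; the rest of the host filesystem is
# invisible to the agent. Network stays enabled so the CLI can reach
# the model API, which means exfiltration of the mounted directory is
# still possible; this limits blast radius, it doesn't remove risk.
docker run --rm -it \
  -v "$PWD":/work -w /work \
  -e ANTHROPIC_API_KEY \
  --cap-drop ALL \
  node:22 \
  npx @anthropic-ai/claude-code --dangerously-skip-permissions
```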
jpc0|3 months ago
It's perfectly within the capabilities of the car to do so.
The burden of proof is much lower, though, since the worst that can happen is that you lose some money or, in this case, hard drive contents.
For the car, the seller would be investigated because there was a possible threat to life; for an AI, it's buyer beware.
nkrisc|3 months ago
Google (and others) are (in my opinion) flirting with false advertising in how they market the capabilities of these "AIs" to mainstream audiences.
At the same time, the user is responsible for their device and what code and programs they choose to run on it, and any outcomes as a result of their actions are their responsibility.
Hopefully the user has learned that you can't trust everything a big corporation tells you about its products.
Zigurd|3 months ago
LLM makers that make this kind of thing possible share the blame. It wouldn't take much manual functional testing to find this bug. And it is a bug: it's unsafe for users. But it's unsafe in a way that doesn't call for a law, just like `rm -rf *` didn't need a law.
pas|3 months ago
sure, it would be amazing if everyone had to do a 100-hour course on how LLMs work before interacting with one
chickensong|3 months ago