item 46932077

kylegalbraith | 22 days ago

What’s the security situation around OpenClaw today? It was just a week or two ago that there was a ton of concern around its security given how much access you give it.

mcintyre1994 | 22 days ago

I don’t think there’s any solution to what SimonW calls the lethal trifecta with it, so I’d say securing it is still basically impossible.

I saw on The Verge that they partnered with the company that repeatedly disclosed security vulnerabilities, to try to make skills more secure, which is interesting: https://openclaw.ai/blog/virustotal-partnership

I’m guessing most of that malware was really obvious and people just weren’t looking, so the partnership has probably found a lot. But I also suspect it’s essentially impossible to reliably find malware in LLM skills by using an LLM.

veganmosfet | 22 days ago

Regarding prompt injection: it's possible to reduce the risk dramatically by:

1. Using opus4.6 or gpt5.2 (frontier models, better safety). These models are paranoid.
2. Restricting downstream tool usage and permissions for each agentic use case (programmatically, not as LLM instructions).
3. Avoiding untrusted content in the "user" or "system" channels - only use "tool". Adding tags like "Warning: Untrusted content" can help a bit, but remember command injection techniques ;-)
4. Hardening the system according to state-of-the-art security practice.
5. Testing with a red-teaming mindset.
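Points 2 and 3 can be sketched in a few lines. This is a minimal illustration, not an OpenClaw API: `ALLOWED_TOOLS`, `dispatch_tool_call`, `run_tool`, and `wrap_untrusted` are all hypothetical names. The key idea is that the allowlist check lives in ordinary code, so a prompt-injected model cannot talk its way past it, and untrusted content only ever enters the conversation through the "tool" channel with an explicit marker.

```python
# Hypothetical sketch: per-use-case tool allowlists enforced in code,
# not via LLM instructions, plus channel separation for untrusted text.

# Each agentic use case gets only the tools it actually needs.
ALLOWED_TOOLS = {
    "summarize_inbox": {"read_email"},               # read-only use case
    "triage_tickets": {"read_ticket", "add_label"},  # limited write access
}

def run_tool(tool: str, args: dict) -> str:
    """Stand-in executor so the sketch runs; a real system would
    dispatch to actual tool implementations here."""
    return f"ran {tool} with {args}"

def dispatch_tool_call(use_case: str, tool: str, args: dict) -> str:
    """Refuse any tool the current use case was not granted.
    The model never sees or controls this check."""
    allowed = ALLOWED_TOOLS.get(use_case, set())
    if tool not in allowed:
        raise PermissionError(f"{tool!r} not permitted for {use_case!r}")
    return run_tool(tool, args)

def wrap_untrusted(content: str) -> dict:
    """Untrusted content goes into the 'tool' channel, tagged as
    untrusted -- never into 'user' or 'system' messages."""
    return {
        "role": "tool",
        "content": f"<untrusted>\n{content}\n</untrusted>",
    }
```

So even if a fetched web page tells the model to call `send_email`, the dispatcher raises `PermissionError` for a use case that was never granted that tool; the tagging in `wrap_untrusted` only helps the model's judgment, while the allowlist is the hard boundary.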

madeofpalk | 22 days ago

Honestly, 'malware' is just the beginning. Combining prompt injection with access to sensitive systems and write access to 'the internet' is the part that scares me about this.

I never want to be one wayward email away from an AI tool dumping my company's entire Slack history into a public GitHub issue.

ricardobayes | 22 days ago

It can only reasonably be described as a "shitshow".

bowsamic | 22 days ago

Many companies have totally banned it. For example, at Qt it is banned on all company devices and networks.

kolja005 | 22 days ago

My company has the GitHub page for it blocked. They block lots of AI-related things, but that's the only one I've seen where they straight up blocked viewing the source code for it at work.