This just seems like the logical consequence of the chosen system to be honest. "Skills" as a concept are much too broad and much too free-form to have any chance of being secure. Security has also been obviously secondary in the OpenClaw saga so far, with users just giving it full permissions to their entire machine and hoping for the best. Hopefully some of this will rekindle ideas that are decades old at this point (you know, considering security and having permission levels and so forth), but I honestly have my doubts.
vlovich123|24 days ago
Sandboxing and permissions may help some, but when you have self-modifying code that the user is trying to get to impersonate them, it's a challenge existing mechanisms have never faced. Additionally, users don't even know the consequences of an action. Hell, even curated and non-curated app stores have security and malware difficulties. Pretending it's a solved problem with existing solutions doesn't help us move forward.
nemomarx|24 days ago
That seems bad, but if you're also having your bot read unsanitized stuff like emails or websites, I think there's a much larger problem with the security model.
jihadjihad|24 days ago
s/OpenClaw/LLM/g
clarity_hacker|24 days ago
[deleted]