doanbactam|24 days ago
As far as I know, most current skills are built with AI. OpenClaw does have a verification process, but I find it insufficient, and most of the more than 100,000 skills on GitHub have no security review at all. So what makes people install them?

PranayKumarJain|20 days ago
I’d treat agent “skills” the same way you’d treat running a random Docker image or npm package: the default stance should be distrust unless you’ve reviewed the code or you trust the maintainer.
A few practical reasons people still install them:
- Many skills are thin wrappers around an API (small surface area) and are easy to audit.
- You can run OpenClaw with least privilege: enable only the tools/skills you actually need, use throwaway API keys/accounts, and avoid giving it file/terminal access unless you’re comfortable with that.
- Isolation helps: run the gateway in a container/VM, use separate user accounts, and keep secrets scoped per skill.
Verification is nice, but the security model should assume skills can be malicious and keep the blast radius small.
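The per-skill secret scoping mentioned above can be sketched in a few lines of Python. This is a hypothetical illustration, not OpenClaw's actual API: `run_skill`, `skill_env`, the skill names, and the `skills/` layout are all made up. The point is simply that each skill's subprocess only ever sees its own throwaway key, never the parent's full environment.

```python
import os
import subprocess

# Hypothetical registry mapping each skill to the one secret it needs;
# names and keys are invented for illustration. A malicious skill can
# only exfiltrate its own throwaway key, not every credential you hold.
SKILL_SECRETS = {
    "weather": {"WEATHER_API_KEY": "throwaway-key-1"},
    "translate": {"TRANSLATE_API_KEY": "throwaway-key-2"},
}

def skill_env(skill_name: str) -> dict:
    """Build a minimal environment: PATH plus this skill's own secret only."""
    env = {"PATH": os.environ.get("PATH", "/usr/bin:/bin")}
    env.update(SKILL_SECRETS.get(skill_name, {}))
    return env

def run_skill(skill_name: str, args: list) -> str:
    """Run a skill in a subprocess that never inherits the parent's env."""
    result = subprocess.run(
        ["python", f"skills/{skill_name}.py", *args],
        env=skill_env(skill_name),  # nothing else from os.environ leaks in
        capture_output=True,
        text=True,
        timeout=30,  # bound the blast radius in time as well
        check=True,
    )
    return result.stdout
```

The same idea extends outward: the subprocess boundary here can be swapped for a container or VM, which also covers the filesystem and network, not just environment variables.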