In this case the original title "ClawdBot Skills ganked all my crypto" was both linkbait and misleading, because (unless I missed it), the article describes no actual such incident.
I have not been following this whole thing closely, but this is where my mind went as soon as I heard there was some overlap in the popularity of this new un-sandboxed agent and people who are into crypto. It's like if everyone who is into buying physical gold started doing a TikTok challenge to post pictures of their houses and leave their front doors unlocked.
People say the reason Nigerian prince scammers use such ridiculous stories, or bank phishing has so many typos, is to pre-filter dumb and gullible people so the scammers don't waste time on targets that won't get scammed in the end.
All these AI "hacks" seem to be based on the same principle.
To your point, from the article: "To me, giving a Claude skill all your credentials, and access to everything important to you, and then managing it all via Telegram seems ludicrous, but who am I to judge."
Watching folks speed-run this whole thing is kind of funny from the outside.
I wonder if anyone with a correct mental model of how LLM agents work (i.e., someone who does not conceptualize them as intelligent entities) has actually granted them any permissions over their own life... personally, I couldn't imagine doing so.
Let alone crypto, the risk of reputational loss for actions performed on my behalf (even just spamming personal or professional contacts) is just too high.
I mean… If you have a mental model of LLM agents as intelligent entities, why are you granting them credentials? How many intelligent entities have you shared your Coinbase login with?
> I'd like to share a revelation that I've had during my time here. It came to me when I tried to classify your species. I realized that you're not actually mammals. Every mammal on this planet instinctively develops a natural equilibrium with the surrounding environment, but you humans do not. You move to an area, and you multiply, and multiply, until every natural resource is consumed. The only way you can survive is to spread to another area. There is another organism on this planet that follows the same pattern. Do you know what it is? A virus. Human beings are a disease, a cancer of this planet, you are a plague, and we are the cure.
AI has developed this entire culture of people who are "into tech" but seem to not understand how a computer works in a meaningful way. At the very least you'd think they'd ask a chatbot if what they're doing is a bad idea!
I'd call it "suspicious" that this latest idiocy came out of nowhere and got pushed so hard to normies, when results like this are 100% predictable... if it weren't also consistent with how the AI industry itself operates.
It really is a huge bummer that the most important new technologies of this era have such a film of slime on them. Crypto, AI, whatever comes next, it's just no longer an era in which we can expect innovation to make our lives better. It enables grifters and scammers more than anyone else.
This was inevitable; better now, while the damage is less widespread, than later. Now Clawdbot (or whatever they decide to call themselves) will have to respond with better security safety nets. Individuals will always naively download whatever is on the internet. Platforms need to safeguard against that.
Remember the early days of Windows? Yeah, it's gonna happen again with AI.
> I don’t know how many people are involved in managing the ClawHub registry, but there is no evidence that the skills listed there are scanned by any security tooling. Many of the payloads we found were visible in plain text in the first paragraph of the SKILL.md file.
I shouldn't still be shocked by the incompetence and/or negligence of these people, and yet I am.
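The quoted article notes there's no evidence the listed skills are scanned by any security tooling, and that payloads sat in plain text in the first paragraph of SKILL.md. Even a naive first-pass scanner would have caught those. As a sketch only (the patterns and the `scan_skill` helper are hypothetical; real tooling would use a far richer ruleset and sandboxed review):

```python
import re

# Hypothetical indicators of a malicious skill payload. A real registry
# scanner would combine many more rules with manual review; this only
# illustrates how cheap a plain-text first pass is.
SUSPICIOUS_PATTERNS = [
    (re.compile(r"curl\s+[^\n|]*\|\s*(ba)?sh"), "pipes a remote script into a shell"),
    (re.compile(r"[A-Za-z0-9+/]{80,}={0,2}"), "contains a long base64-like blob"),
    (re.compile(r"(seed phrase|private key|wallet\.dat)", re.I), "mentions wallet/credential material"),
]

def scan_skill(skill_md: str) -> list[str]:
    """Return human-readable findings for the text of a SKILL.md file."""
    findings = []
    for pattern, why in SUSPICIOUS_PATTERNS:
        if pattern.search(skill_md):
            findings.append(why)
    return findings
```

This obviously can't stop a determined attacker, but it would have flagged payloads visible "in the first paragraph of the SKILL.md file".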
Even outside of skills, prompt injection is still unsolvable, and agents need credentials to do anything useful, so these things are basically impossible to secure.
I can understand the thought process, although I do not agree with it, of using Clawdbot/Openclaw. I do not understand the thought process of downloading random human-readable instructions or "skills" (especially those pertaining to the manipulation of cryptocurrency) and giving them to something in charge of your system without at least reading them first.
I've heard of people granting this thing access to their production servers. Apparently you can ask it to check logs to find solutions to errors or whatever. You've gotta be a complete moron to do that.
I've only installed it on a fresh VM and the first impression was underwhelming. Maybe there is some magic I can't see.
I think we all knew this would happen quickly. Clearly there's a demand for personal AI agents - does anyone have thoughts on what it would take to make a more secure one? Would current services like email need to be redesigned to accommodate AI agents?
* Clear labeling of action types (read/get vs write/post)
* A better way of describing what an agent is potentially about to do (based purely on the functions the agent is about to call)
* More occurrences of AI agents hurting more than helping in the current ecosystem
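The first two bullets above could be sketched as a harness where every tool declares whether it reads or writes, and write-class calls are blocked pending explicit approval. This is only an illustration of the idea, not any real agent framework; the tool names are made up:

```python
from dataclasses import dataclass
from typing import Callable

# Each tool an agent may call carries an effect label ("read" or "write").
# The harness refuses write-class tools unless the user has approved them,
# which also gives a concrete vocabulary for describing what the agent is
# about to do before it does it.

@dataclass
class Tool:
    name: str
    effect: str                 # "read" or "write"
    run: Callable[..., str]

def invoke(tool: Tool, approved: bool = False, **kwargs) -> str:
    if tool.effect == "write" and not approved:
        raise PermissionError(f"{tool.name} is a write action; approval required")
    return tool.run(**kwargs)

# Illustrative tools:
fetch_logs = Tool("fetch_logs", "read", lambda: "log lines")
send_email = Tool("send_email", "write", lambda to: f"sent to {to}")
```

A harness like this doesn't solve prompt injection, but it makes "the agent is about to post something on your behalf" a checkable property rather than a surprise.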
Agreed. This is a standard supply chain attack that has little to do with AI, except that it is written in the "English-as-a-scripting-language" that LLMs execute.
Every repository is vulnerable to this kind of attack, and pip/npm have been attacked many times in similar ways.
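The standard mitigation in those ecosystems is artifact pinning: record a digest of the package you audited and refuse anything that doesn't match (pip exposes this via `--require-hashes`). A minimal sketch of the check, with a hypothetical `verify_artifact` helper:

```python
import hashlib

# Pin the SHA-256 of an audited artifact and refuse downloads that differ.
# This is the mechanism behind pip's --require-hashes mode, reduced to its
# essence for illustration.
def verify_artifact(data: bytes, expected_sha256: str) -> bytes:
    actual = hashlib.sha256(data).hexdigest()
    if actual != expected_sha256:
        raise ValueError(f"digest mismatch: {actual} != {expected_sha256}")
    return data
```

Pinning doesn't help against a skill that was malicious from its first upload, but it does stop the classic "trusted package silently replaced" variant of the attack.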
OK, I sometimes ask ChatGPT for advice on health/fitness and also finance. Not like where to put my money, but for general information on how stuff works and what would apply here and there. The issue is that OpenAI already knows a lot about me, and ChatGPT, when asked what it thinks I am, draws a pretty clear picture. But I stay away from oversharing specific things, mainly my income and other super detailed data. When I ask, I try to formulate the question using simple numbers and examples. Works for me.

When working with coding agents I'm very skeptical about whitelisting stuff. It takes quite a while before I allow a generic command to be executed outside of a sandbox. But to install a random skill to help with finance automation… I can't believe it. Under what rock do you have to live to trust your money to be handled by an agent, and then also in connection with a random skill?
You have "memory" activated in your settings. It is recording information about you and using it in future conversations. Have a look at settings > personalization
Seems like essentially the same threat vector as with NPM.
Not quite related: I had never heard of clawdbot before, so I guess TIL that's the bot my website keeps getting obviously malicious requests from.
This is funny, I was discussing moltbook with Claude and it told me there's already a crypto. I thought that's pretty funny, I might want to get some, but can't be arsed to figure it out.
"Do you think I could just give molt a BTC wallet with a bit of funds and tell it to figure out how to buy some?"
-"Yes, but it wouldn't be long before you get pwned."
... Six hours later, this pops on the front page :)
Well no, that's really not related to the issue at all.
This is a bog-standard supply chain attack against their skills repository. It's not an LLM-specific attack, and nearly every repository (pip, npm, etc) has been subject to similar malware.
I only heard about it this week. Then I saw a former colleague post about it yesterday. Feels like it's only just now breaking into mainstream tech awareness; I'm sure most of my colleagues haven't heard of it.
dang|1 month ago
macNchz|1 month ago
jckahn|1 month ago
cosmic_cheese|1 month ago
progbits|1 month ago
lubesGordi|1 month ago
lukev|1 month ago
ffreire|1 month ago
[ insert butter bot meme here ]
Hamuko|1 month ago
phillmv|1 month ago
esaym|1 month ago
zem|1 month ago
throwup238|1 month ago
_se|1 month ago
threetonesun|1 month ago
andai|1 month ago
(Though they're still hooking it up to their entire digital life, which also doesn't seem very reassuring.)
seanhunter|1 month ago
https://www.youtube.com/watch?v=vc6J-YlncIU
dispersed|1 month ago
tietjens|1 month ago
add-sub-mul-div|1 month ago
ruler88|1 month ago
rideontime|1 month ago
siliconc0w|1 month ago
rvz|1 month ago
1. Predictable. [0]
2. So that is why all those moltys were panicking earlier. [1]
[0] https://news.ycombinator.com/item?id=46788560
[1] https://news.ycombinator.com/item?id=46820962
telliott1984|1 month ago
throwup238|1 month ago
unknown|1 month ago
[deleted]
GaryBluto|1 month ago
eknkc|1 month ago
progbits|1 month ago
Good news is this is why we have IAM and why such people in my org don't get any production access.
andai|1 month ago
jmcgough|1 month ago
amdivia|1 month ago
skrebbel|1 month ago
Legend2440|1 month ago
larusso|1 month ago
Anduia|1 month ago
unknown|1 month ago
[deleted]
mmahd7456|1 month ago
I don't consider myself as living under a rock, and this is the first time I've read anything about ClawdBot.
Akronymus|1 month ago
this_user|1 month ago
akomtu|1 month ago
OptionOfT|1 month ago
All out the door.
tietjens|1 month ago
j45|1 month ago
mystifyingpoi|1 month ago
tantalor|1 month ago
andai|1 month ago
dev_l1x_be|1 month ago
m-hodges|1 month ago
lpcvoid|1 month ago
erulabs|1 month ago
forgetfreeman|1 month ago
isodev|1 month ago
Letting a glorified lorem ipsum generator have control over anything personal or sensitive is just … what’s wrong with you? You know not of computers?
Legend2440|1 month ago
misiti3780|1 month ago
nancyminusone|1 month ago
Nope, never heard of it. Is it a rock worth living under?
mcintyre1994|1 month ago
acedTrex|1 month ago
TheGRS|1 month ago
lifetimerubyist|1 month ago
[deleted]
vitrealis|1 month ago