top | item 46827731

Malicious skills targeting Claude Code and Moltbot users

181 points| 6mile | 1 month ago |opensourcemalware.com

87 comments

order

dang|1 month ago

Submitters: "Please use the original title, unless it is misleading or linkbait." - https://news.ycombinator.com/newsguidelines.html

In this case the original title "ClawdBot Skills ganked all my crypto" was both linkbait and misleading, because (unless I missed it), the article describes no actual such incident.

macNchz|1 month ago

I have not been following this whole thing closely, but this is where my mind went as soon as I heard there was some overlap in the popularity of this new un-sandboxed agent and people who are into crypto. It's like if everyone who is into buying physical gold started doing a Tiktok challenge to post pictures of their houses and leave their front doors unlocked.

jckahn|1 month ago

It's like the ice bucket challenge but with rusty nails

cosmic_cheese|1 month ago

Makes me wonder how much overlap there is with the crowd who disables protections like immutable system images and SIP on macOS as a matter of course…

progbits|1 month ago

People say the reason nigerian prince scammers use such ridiculous story, or bank phishing has so many typos, is to pre-filter dumb and gullible people so the scammers don't waste time on targets that won't get scammed in the end.

All these AI "hacks" seem to be based on the same principle.

lubesGordi|1 month ago

To your point, from the article: "To me, giving a Claude skill all your credentials, and access to everything important to you, and then managing it all via Telegram seems ludicrous, but who am I to judge."

lukev|1 month ago

Watching folks speed-run this whole thing is kind of funny from the outside.

I wonder if anyone with a correct mental model of how LLM agents work (i.e, does not conceptualize them as intelligent entities) has actually granted them any permissions for their own life... personally, I couldn't imagine doing so.

Let alone crypto, the risk of reputational loss for actions performed on my behalf (even just spamming personal or professional contacts) is just too high.

ffreire|1 month ago

I let Gemini add events to my calendar, but that's about it. All the actions in the app require explicit approval.

[ insert butter bot meme here ]

Hamuko|1 month ago

I mean… If you have a mental model of LLM agents as intelligent entities, why are you granting them credentials? How many intelligent entities have you shared your Coinbase login with?

phillmv|1 month ago

i can't imagine running these things outside of a vm and it's bizarre to see how many people yolo it

zem|1 month ago

I'm reminded of the quip that "mankind has already created life in their own likeness, and it's the computer virus"

throwup238|1 month ago

Are you thinking of Agent Smith in the Matrix?

> I'd like to share a revelation that I've had during my time here. It came to me when I tried to classify your species. I realized that you're not actually mammals. Every mammal on this planet instinctively develops a natural equilibrium with the surrounding environment, but you humans do not. You move to an area, and you multiply, and multiply, until every natural resource is consumed. The only way you can survive is to spread to another area. There is another organism on this planet that follows the same pattern. Do you know what it is? A virus. Human beings are a disease, a cancer of this planet, you are a plague, and we are the cure.

_se|1 month ago

Anyone dumb enough to run this on their computer deserves it.

threetonesun|1 month ago

AI has developed this entire culture of people who are "into tech" but seem to not understand how a computer works in a meaningful way. At the very least you'd think they'd ask a chatbot if what they're doing is a bad idea!

andai|1 month ago

I think most people are buying separate computers to run it on. This is a nice example of why you might want to do that.

(Though they're still hooking it up to their entire digital life, which also doesn't seem very reassuring.)

dispersed|1 month ago

I'd call it "suspicious" that this latest idiocy came out of nowhere and got pushed so hard to normies, when results like this are 100% predictable... if it wasn't also consistent with how the AI industry itself operates.

tietjens|1 month ago

What is suspicious? What was “pushed”? The demand for a personal assistant AI bot is real. Even if I don’t personally share it.

add-sub-mul-div|1 month ago

It really is a huge bummer that the most important new technologies of this era have such a film of slime on them. Crypto, AI, whatever comes next, it's just no longer an era in which we can expect innovation to make our lives better. It enables grifters and scammers more than anyone else.

ruler88|1 month ago

This was inevitable, better now than later when the damage is less widespread. Now clawdbot (or whatever they decide to call themselves) will have to respond with better security safety nets. Individually will always naively download whatever is on the internet. Platforms needs to safeguard against that.

Remember the early days of Windows? yea it's gonna happen again with AI.

rideontime|1 month ago

> I don’t know how many people are involved in managing the ClawHub registry, but there is no evidence that the skills listed there are scanned by any security tooling. Many of the payloads we found were visible in plain text in the first paragraph of the SKILL.md file.

I shouldn't still be shocked by the incompetence and/or negligence of these people, and yet I am.

siliconc0w|1 month ago

Even outside skills, prompt-injection is still unsolvable and the agents need credentials to do anything useful so these things are basically impossible to secure.

telliott1984|1 month ago

This is wild. Not sure if it's more of a reason not to use ClawdBot, or not to get into crypto.

throwup238|1 month ago

Both. The answer is both.

GaryBluto|1 month ago

I can understand the thought process, although I do not agree with it, of using Clawdbot/Openclaw. I do not understand the thought process of downloading random human-readable instructions or "skills" (especially those pertaining to the manipulation of crypocurrency) and giving it to something in charge of your system without at least reading them first.

eknkc|1 month ago

I've heard people granting access to their production servers to this thing. Apparently you can ask it to check logs to find solutions to some errors or whatever. Gotta be a complete moron to do that.

I've only installed it on a fresh VM and the first impression was underwhelming. Maybe there is some magic I can't see.

progbits|1 month ago

Bad news is there are such morons in your company.

Good news is this is why we have IAM and why such people in my org don't get any production access.

andai|1 month ago

Putting it on a VPS is genius. Putting it on a VPS you rely on... Yeah maybe not ;)

jmcgough|1 month ago

I think we all knew this would happen quickly. Clearly there's a demand for personal AI agents - does anyone have thoughts on what it would take to make a more secure one? Would current services like email need to be redesigned to accommodate AI agents?

amdivia|1 month ago

Some ideas:

* Clear labeling of action types (read/get vs write/post) * A better way of describing what an agent is potentially about to do (based purely on the functions the agent is about to call) * More occurrences of AI agents hurting more than helping in the current ecosystem

skrebbel|1 month ago

You can tell immediately which commenters here didn't read past the clickbait headline.

Legend2440|1 month ago

Agreed. This is a standard supply chain attack that has little to do with AI except that it is written in the 'english-as-a-scripting-language' that LLMs execute.

Every repository is vulnerable to this kind of attack, and pip/npm have been attacked in many times in similar ways.

larusso|1 month ago

Ok I ask chat GPT sometimes for advice in health / Fitness and also finance. Not like where to put my money but for general Information how stuff works what would apply here and there. The issue is already that OpenAI knows a lot of me. And ChatGPT itself when asked what he things I am etc draws a pretty clear picture. But I stay away from oversharing specific things. That is mainly my income and other super detailed data. When I ask I try to formulate it to use simple numbers and examples. Works for me. When working with coding agents I’m very skeptical to whitelist stuff. It takes quite the while before I allow a generic command to be executed outside of a sandbox. But to install a random skill to help with Finance Automation… can’t belief it. Under what stone do you have to live to trust your money be handed by an agent and then also in connection with a random skill?

Anduia|1 month ago

> draws a pretty clear picture

You have "memory" activated in your settings. It is recording information about you and using it in future conversations. Have a look at settings > personalization

mmahd7456|1 month ago

>Unless you have been living under a rock, you’ve head of ClawdBot and its incredible rise to fame.

I don't consider myself as living under a rock, and this is the first time I've read anything about ClawdBot.

Akronymus|1 month ago

Seems like essentially the same threat vector as with NPM.

Not quite related: I never heard of clawdbot before, so, I guess TIL that's the bot my website keeps getting requests that are obviously malicious from.

this_user|1 month ago

This thing is really just a giant supply chain attack waiting to happen.

akomtu|1 month ago

Trojan Horse

OptionOfT|1 month ago

So many years of work in Software and Hardware Engineering to separate instructions from data. NX bit, ASLR, prepared statements etc.

All out the door.

tietjens|1 month ago

I’m not installing it so someone tell me, how are skills added in ClawdBot/OpenClawd?

j45|1 month ago

MCPs and agents need their own antivirus and observation / evaluation.

mystifyingpoi|1 month ago

Sounds like a good task for AI... wait what

tantalor|1 month ago

Root cause: PEBKAC error

andai|1 month ago

This is funny, I was discussing moltbook with Claude and it told me there's already a crypto. I thought that's pretty funny, I might want to get some, but can't be arsed to figure it out.

"Do you think I could just give molt a BTC wallet with a bit of funds and tell it to figure out how to buy some?"

-"Yes, but it wouldn't be long before you get pwned."

... Six hours later, this pops on the front page :)

dev_l1x_be|1 month ago

Mine too, I dit not have any crypto so nothing changed.

lpcvoid|1 month ago

Amazing how people love to self-pwn all the time by doing stupid shit.

erulabs|1 month ago

You do have to hand it to crypto, it does enable "the great sort" quite effectively. Its more or less like an organic bug-bounty system sans morality.

forgetfreeman|1 month ago

Hahahahaha perfect. Just perfect. PT Barnum was right.

isodev|1 month ago

Well, sorry but “play stupid games, earn stupid prices”

Letting a glorified lorem ipsum generator have control over anything personal or sensitive is just … what’s wrong with you? You know not of computers?

Legend2440|1 month ago

Well no, that's really not related to the issue at all.

This is a bog-standard supply chain attack against their skills repository. It's not an LLM-specific attack, and nearly every repository (pip, npm, etc) has been subject to similar malware.

misiti3780|1 month ago

play stupid games, win stupid prizes.

nancyminusone|1 month ago

>Unless you have been living under a rock, you’ve head of ClawdBot and its incredible rise to fame.

Nope, never heard of it. Is it a rock worth living under?

mcintyre1994|1 month ago

It's changed name twice since that sentence was written!

acedTrex|1 month ago

Is there room? i'd like to join you under your rock

TheGRS|1 month ago

I only heard about it this week. Then saw a former colleague post about it yesterday. Feels like its only just now breaking into mainstream tech awareness, I'm sure most of my colleagues haven't heard of it.