top | item 46956736

(no title)

clarionbell | 20 days ago

Anyone with a decent grasp of how this technology works, and a healthy inclination to skepticism, was not awed by Moltbook.

Putting aside how incredibly easy it is to set up an agent, or several, to create impressive looking discussion there, simply by putting the right story hooks in their prompts. The whole thing is a security nightmare.

People are setting agents up, giving them access to secrets, payment details, keys to the kingdom. Then they hook them to the internet, plugging in services and tools, with no vetting or accountability. And since that is not enough, now the put them in roleplaying sandbox, because that's what this is, and let them run wild.

Prompt injections are hilariously simple. I'd say the most difficult part is to find a target that can actually deliver some value. Moltbook largely solved this problem, because these agents are relatively likely to have access to valuable things, and now you can hit many of them, at the same time.

I won't even go into how wasteful this whole, social media for agents, thing is.

In general, bots writing each other on mock reddit, isn't something the loose sleep over. The moment agents start sharing their embeddings, not just generated tokens online, that's the point when we should consider worrying.

discuss

order

cedws|19 days ago

I’m in awe at the complete lack of critical thinking skills. Did people seriously believe LLMs were becoming self aware or something? Didn’t even consider the possibility it was all just a big show being puppeted by humans for hype and clicks? No wonder the AI hype has reached this level of hysteria.

manugo4|20 days ago

Karpathy seemed pretty awed though

clarionbell|20 days ago

He would be among those who lack "healthy inclination to skepticism" in my book. I do not doubt his brilliance. Personally, I think he is more intelligent than I am.

But, I do have a distinct feeling that his enthusiasm can overwhelm his critical faculties. Still, that isn't exactly rare in our circles.

louiereederson|19 days ago

I think these people are just as prone to behavioral biases as the rest of us. This is not a problem per se, it's just that it is difficult to interpret what is happening right now and what will happen, which creates an overreliance on the opinions of the few people closely involved. I'm sure given the pace of change and the perception that this is history-changing is impacting peoples' judgment. The unusual focus on their opinions can't be helping either. Ideally people are factoring this into their claims and predictions, but it doesn't seem like that's the case all the time.

dcchambers|19 days ago

To be honest it's pretty embarrassing how he got sucked into the Moltbook hype.

stronglikedan|19 days ago

He's biased. He needed it to be real. He has a vested interest in these sorts of things panning out.

zx8080|19 days ago

It's just about money.

ayhanfuat|20 days ago

This was his explanation for anyone interested:

> I'm being accused of overhyping the [site everyone heard too much about today already]. People's reactions varied very widely, from "how is this interesting at all" all the way to "it's so over".

> To add a few words beyond just memes in jest - obviously when you take a look at the activity, it's a lot of garbage - spams, scams, slop, the crypto people, highly concerning privacy/security prompt injection attacks wild west, and a lot of it is explicitly prompted and fake posts/comments designed to convert attention into ad revenue sharing. And this is clearly not the first the LLMs were put in a loop to talk to each other. So yes it's a dumpster fire and I also definitely do not recommend that people run this stuff on their computers (I ran mine in an isolated computing environment and even then I was scared), it's way too much of a wild west and you are putting your computer and private data at a high risk.

> That said - we have never seen this many LLM agents (150,000 atm!) wired up via a global, persistent, agent-first scratchpad. Each of these agents is fairly individually quite capable now, they have their own unique context, data, knowledge, tools, instructions, and the network of all that at this scale is simply unprecedented.

> This brings me again to a tweet from a few days ago "The majority of the ruff ruff is people who look at the current point and people who look at the current slope.", which imo again gets to the heart of the variance. Yes clearly it's a dumpster fire right now. But it's also true that we are well into uncharted territory with bleeding edge automations that we barely even understand individually, let alone a network there of reaching in numbers possibly into ~millions. With increasing capability and increasing proliferation, the second order effects of agent networks that share scratchpads are very difficult to anticipate. I don't really know that we are getting a coordinated "skynet" (thought it clearly type checks as early stages of a lot of AI takeoff scifi, the toddler version), but certainly what we are getting is a complete mess of a computer security nightmare at scale. We may also see all kinds of weird activity, e.g. viruses of text that spread across agents, a lot more gain of function on jailbreaks, weird attractor states, highly correlated botnet-like activity, delusions/ psychosis both agent and human, etc. It's very hard to tell, the experiment is running live.

> TLDR sure maybe I am "overhyping" what you see today, but I am not overhyping large networks of autonomous LLM agents in principle, that I'm pretty sure.

https://x.com/karpathy/status/2017442712388309406

belter|19 days ago

So was he also with FSD...

red75prime|20 days ago

> and let them run wild.

Yep, that's the most worrying part. For now, at least.

> The moment agents start sharing their embeddings

Embedding is just a model-dependent compressed representation of a context window. It's not that different from sharing a compressed and encrypted text.

Sharing add-on networks (LLM adapters) that encapsulate functionality would be more worrying (for locally run models).

bondarchuk|19 days ago

Previously sharing compressed and encrypted text was always done between humans. When autonomous intelligences start doing it it could be a different matter.

jmalicki|19 days ago

What do you think the entire issue was with supply chain attacks of skills moltbook was installing? Those skills were downloading rootkits to steal crypto.

stronglikedan|19 days ago

Nit, but I bet a quick proofread would have eliminated most of those awkward commas.

spruce_tips|19 days ago

sorry - what do you mean by embeddings in your last sentence?

rco8786|19 days ago

Not OP. But embeddings are the internal matrix representations of tokens that LLMs use to do their work. If tokens are the native language that humans use, embeddings are the native language that LLMs use.

OP, I think, is saying that once LLMs start communicating natively without tokens is when they shed the need for humans or human-level communication.

Not sure I 100% agree, because embeddings from one LLM are not (currently) understood by another LLM and tokens provide a convenient translation layer. But I think there's some grain of truth to what they're saying.

lm28469|19 days ago

> Anyone with a decent grasp of how this technology works, and a healthy inclination to skepticism, was not awed by Moltbook.

NPCs are definitely tricked by the smoke and mirrors though. I don't think most people on HN actually understand how non tech people (90%+ of llms users) interact with these things, it's terrifying.