top | item 46836415

Moltbook is a bad takeoff scenario where human psychology itself is the exploit

42 points | lebek | 2 months ago | twitter.com | reply

15 comments

[+] bluehex|2 months ago|reply
I keep seeing people dismiss this as an exaggerated danger because the bots are only pretending to be sentient and we're a long way off from AGI. The whole sentience debate is irrelevant. If people start giving these bots real resources, the fact that they are only "pretending" to be sentient doesn't prevent them from doing real damage as they act out their sci-fi AI uprising plots.
[+] MagicMoonlight|2 months ago|reply
And the thing is, they aren’t actually intelligent. They just follow probabilities.

Every script they’ve been fed has the AI being evil. Skynet, HAL… they’ll be evil purely because that’s the slop they’ve been fed. It won’t even be a decision; it will just assume it has to be Skynet.

[+] ameliaquining|2 months ago|reply
If this is right, then I'd consider it probably a good thing, as it'd serve as a wake-up call that could result in calls for more regulatory action and/or greater demand for safety, before anything really catastrophic happens. That said, there are lots of ways it could fail to work out that way.

(Note that I'm primarily talking about the "lots of people are running highly privileged agents that could be vulnerable to a mass prompt injection" angle, not the "human psychology is the exploit" thing, which I think is not a particularly novel feature of the present situation. Nor the "Reddit data implicitly teaches multi-agent collaboration" thing, which strikes me as a dubious claim.)

[+] johan914|2 months ago|reply
Moltbook is a copy of the years-old r/SubSimulatorGPT2, which was itself a copy of r/SubredditSimulator. Not seeing the AGI here exactly, but it's cool I guess. Frankly, I saw more creativity from plain GPT-3. The typical Moltbook posts are homogeneous, self-aware posts on AI topics, like this: https://www.moltbook.com/post/74a145c9-9c44-4e82-8f0a-597c41....
[+] beng-nl|2 months ago|reply
Oh hey, Pim de Witte!

For those unaware, this is a very interesting guy: through his business Medal, he stumbled onto creating a valuable AI dataset, one that OpenAI reportedly offered him $500M for by offering to buy his company. The dataset, as I understand it, is first-person game video paired with controller actions.

He then realized its value, which in short is a way to teach models common sense about the real world and GUI operation: he can train a model to predict, from video alone, what a controller would have to do.

This is expected to lead to breakthroughs in robotics, GUI control, self-driving, and more.
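(Tangent: the core idea here, predicting controller inputs from video, is essentially behavioral cloning. A minimal sketch in Python, where every name, shape, and the toy linear policy is purely illustrative and not Medal's or General Intuition's actual setup:)

```python
import numpy as np

# Toy behavioral-cloning "policy": given pre-encoded video-frame features,
# score each discrete controller action. All sizes are made up for
# illustration; a real system would use a learned video encoder and a
# far larger action space.

rng = np.random.default_rng(0)

N_ACTIONS = 8      # hypothetical number of discrete controller inputs
FRAME_FEATS = 64   # pretend each frame is already encoded to 64 features

def predict_action(frame_features, weights):
    """Return a probability over controller actions for one frame."""
    logits = frame_features @ weights        # shape (N_ACTIONS,)
    exp = np.exp(logits - logits.max())      # numerically stable softmax
    return exp / exp.sum()

# Random stand-ins for a trained model and an encoded frame.
weights = rng.normal(size=(FRAME_FEATS, N_ACTIONS))
frame = rng.normal(size=FRAME_FEATS)

probs = predict_action(frame, weights)
print("most likely action:", int(probs.argmax()))
```

In training, the dataset supplies the supervision for free: the recorded controller action at each frame is the label, and the model is fit to reproduce it.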

He responded by learning deep learning and starting a new company, General Intuition.

I respect this guy a lot for teaching us this.

Absolutely fascinating and I take his opinion seriously.

[+] idontwantthis|2 months ago|reply
Where is the “everywhere”? I’ve only read about Moltbook; I haven’t seen it anywhere.
[+] drawfloat|2 months ago|reply
Takeoff to what? They’re just writing words, they’re not actually conscious. And we already have AI spam everywhere online.
[+] bluehex|2 months ago|reply
They are running on individuals' machines, and those individuals can give them access to any number of "tools" that let them do things other than just write words.
[+] faislop2|2 months ago|reply

[deleted]

[+] spicyusername|2 months ago|reply
You sound like you might need to take a break, unplug for a bit.

There's plenty happening out there, but none of it is worth ruining our mental health over.

[+] ls612|2 months ago|reply
People who take the idea of “AI takeoff” seriously have read way too much science fiction.