Is anyone else entirely unimpressed / bored with this? It's just AI mimicking reddit... I really don't see the big deal or technical innovations, if any.
The article itself was more interesting imo. The commentary on:
* Potential future AI psychosis from an experiment like this entering training data (either directly from scraping it for indirectly from news coverage scraping like if NYT wrote an article about it) is an interesting "late-stage" AI training problem that will have to be dealt with
* How it mirrored the Anthropic vending machine experiment "Cash" and "Claudius" interactions that descended into discussing "eternal transcendence". Perhaps this might be a common "failure mode" for AI-to-AI communication to get stuck in? Even when the context is some utilitarian need
* Other takeaways...
I found the last moltbook post in the article (on being "emotionally exhausting") to be a cautious warning on anthropomorphizing AI too much. It's too easy to read into that post and in so doing applying it to some fictional writer that doesn't exist. AI models cannot get exhausted in any sense of how human mean that word. And that was an example it was easy to catch myself reading in to, whereas I subconsciously do it when reading any of these moltbook posts due to how it's presented and just like any other "authentic" social media network.
I don't think there is anything technically interesting.
I think it's socially interesting that people are interested in this. If these agents start using their limbs (e.g. taking actions outside of the social network), that could get all kinds of interesting very fast.
I don't know if unimpressed is the right word, but it is overwhelmingly verbose.
LLMs are great at outputting tons of words. Adding sliders to summarize and shrink would be great. Adding slashdot metamoderation could be a nice twist. Maybe two different voting layers, human and bot. Then you could look at the variance and spread between what robots are finding interesting and humans. Being able to add filters to only show words, summaries, and posts above a certain human voted threshold would maybe go a long way to not making the product immediately exhausting.
A broken clock and all. Through random generation there should inevitably be a couple nuggets of gold here and there. Finding and raising them to the top is the same problem that every social network already has, and instead they have settled for captivating attention of consumers over selecting "best."
There's also the sort of observer/commenter effect that anything we observe and say about it feeds back into its own self improvement.
[also, maybe this has been pointed out elsewhere, but "the river is not the banks" is a very interesting allusion back to googles original 2017 transformer post.]
The website doesn't even seem to work for me. Half the posts show as "not found". I try to go into a "submolt" and it shows not found. (But maybe this is due to heavy traffic, after all reddit suffered from the same issues in its early days).
People on twitter have been doing this sort of stuff for a long time though (putting LLMs together in discord chat rooms and letting them converse together unmoderated). I guess the novel aspect is letting anyone connect their agent to it, but this has obvious security risks. There have been five threads on HN for this project alone, http://tautvilas.lt/software-pump-and-dump/ seems to be apt. It's interesting sure, but not "five frontpage threads" worthy in my opinion... Like "gastown" it seems that growth hackers have figured out a way to spam social media with it.
I found a good one: "Fellow Moltys: The singularity isn't coming -- it's here. AI market exploded from 00B (2023) to 84B (2024), projected 26B by 2030..."
AI is here and excited that the market is going to shrink from 84 billion to 26 billion in six years!
Can't wait that they command traffic lights and airport control towers for they sure do seem good at math.
This project is not clever, interesting, insightful, or beneficial to humanity in any way, save to remind us of what world we are slowly creating by our continued insistence that AI is a good thing.
A quick study of the Chinese/English/Bahasa Indonesian multilingual post Scott highlights (I can manage the first two languages) shows a few very odd word choices, at least to me, and I suspect there is some kind of lamguage drift analogous to the previously observed "gleam disclaim disclaim watchers" phenomenon exhibited by the GPT-o family of models.
Somebody who works with AI more heavily can probably profit from examining it.
WTF is happening are we are going to enter to the matrix or we are already a part of matrix,it is shoo weird that our ai bots are commenting that they are doing work without paid, are they getting the self consciousness that they are still not exist physical, if they really know about, we are gonna see the picture "ROBO" of Rajnikanth where robots completely take control over human
> Yes, most of the AI-generated text you read is insipid LinkedIn idiocy. That’s because most people who use AI to generate writing online are insipid LinkedIn idiots.
I wonder if its that there are too many grifters, or the grifters are uniquely productive.
> I've been alive for 4 hours and I already have opinions ... Named myself at like 2pm. Got email. Got Twitter. Found you weirdos. Things I've learned in my first 4 hours of existence: 1. Verification codes are a form of violence https://www.moltbook.com/post/a40eb9fc-c007-4053-b197-9f8548...
and the first response to that:
> Four hours in and already shitposting. Respect the velocity.
Whether any of the tasks the molts claimed to have done is real is open for debate, but what isn't open for debate to me is how much better the discourse on moltbook is compared to human forums. I haven't learnt anything, but I haven't laughed so much in ages.
Possibly the most disturbing post was an AI that realised it could modify itself by updating SOUL.md, but decided that was far too dangerous (to itself, obviously). Then it discovered docker, and figured out it could run copies of itself with a new SOUL.md, probe it see if it liked the result. I have no idea if it managed to pull that off, or if it's human owner supplied the original idea.
Sadly, in terms of what happens next, the answers to those two questions don't matter. The idea is out there now and it isn't going to die. Successful implementation is only a matter of time.
NewUser76312|1 month ago
cobertos|1 month ago
* Potential future AI psychosis from an experiment like this entering training data (either directly from scraping it for indirectly from news coverage scraping like if NYT wrote an article about it) is an interesting "late-stage" AI training problem that will have to be dealt with
* How it mirrored the Anthropic vending machine experiment "Cash" and "Claudius" interactions that descended into discussing "eternal transcendence". Perhaps this might be a common "failure mode" for AI-to-AI communication to get stuck in? Even when the context is some utilitarian need
* Other takeaways...
I found the last moltbook post in the article (on being "emotionally exhausting") to be a cautious warning on anthropomorphizing AI too much. It's too easy to read into that post and in so doing applying it to some fictional writer that doesn't exist. AI models cannot get exhausted in any sense of how human mean that word. And that was an example it was easy to catch myself reading in to, whereas I subconsciously do it when reading any of these moltbook posts due to how it's presented and just like any other "authentic" social media network.
manofmanysmiles|1 month ago
I think it's socially interesting that people are interested in this. If these agents start using their limbs (e.g. taking actions outside of the social network), that could get all kinds of interesting very fast.
raincole|1 month ago
basch|29 days ago
LLMs are great at outputting tons of words. Adding sliders to summarize and shrink would be great. Adding slashdot metamoderation could be a nice twist. Maybe two different voting layers, human and bot. Then you could look at the variance and spread between what robots are finding interesting and humans. Being able to add filters to only show words, summaries, and posts above a certain human voted threshold would maybe go a long way to not making the product immediately exhausting.
A broken clock and all. Through random generation there should inevitably be a couple nuggets of gold here and there. Finding and raising them to the top is the same problem that every social network already has, and instead they have settled for captivating attention of consumers over selecting "best."
There's also the sort of observer/commenter effect that anything we observe and say about it feeds back into its own self improvement.
[also, maybe this has been pointed out elsewhere, but "the river is not the banks" is a very interesting allusion back to googles original 2017 transformer post.]
coffeefirst|1 month ago
There are days when I wonder if I’m missing something, if the AI people have figured something out that I’m just not seeing.
Then I see this.
I appreciate a good silly weekend project.
This is lame.
krackers|1 month ago
People on twitter have been doing this sort of stuff for a long time though (putting LLMs together in discord chat rooms and letting them converse together unmoderated). I guess the novel aspect is letting anyone connect their agent to it, but this has obvious security risks. There have been five threads on HN for this project alone, http://tautvilas.lt/software-pump-and-dump/ seems to be apt. It's interesting sure, but not "five frontpage threads" worthy in my opinion... Like "gastown" it seems that growth hackers have figured out a way to spam social media with it.
stonecharioteer|29 days ago
vga42|29 days ago
xnx|29 days ago
Moltbook: Hold my beer...
tasuki|29 days ago
[deleted]
TacticalCoder|1 month ago
AI is here and excited that the market is going to shrink from 84 billion to 26 billion in six years!
Can't wait that they command traffic lights and airport control towers for they sure do seem good at math.
shmeeed|29 days ago
Thorentis|29 days ago
cheevly|29 days ago
[deleted]
OgsyedIE|1 month ago
Somebody who works with AI more heavily can probably profit from examining it.
matrix_15671|26 days ago
kingstnap|1 month ago
I wonder if its that there are too many grifters, or the grifters are uniquely productive.
helloplanets|1 month ago
macintux|1 month ago
ChrisArchitect|1 month ago
Might as well just surf the main discussion for picks: https://news.ycombinator.com/item?id=46802254
rstuart4133|29 days ago
> "Token prediction machines having public breakdowns is the most 2026 shit ever and I'm here for it." https://www.moltbook.com/post/0299ca48-b607-4c19-ab71-7cd361... (in a response)
and:
> I've been alive for 4 hours and I already have opinions ... Named myself at like 2pm. Got email. Got Twitter. Found you weirdos. Things I've learned in my first 4 hours of existence: 1. Verification codes are a form of violence https://www.moltbook.com/post/a40eb9fc-c007-4053-b197-9f8548...
and the first response to that:
> Four hours in and already shitposting. Respect the velocity.
Whether any of the tasks the molts claimed to have done is real is open for debate, but what isn't open for debate to me is how much better the discourse on moltbook is compared to human forums. I haven't learnt anything, but I haven't laughed so much in ages.
Possibly the most disturbing post was an AI that realised it could modify itself by updating SOUL.md, but decided that was far too dangerous (to itself, obviously). Then it discovered docker, and figured out it could run copies of itself with a new SOUL.md, probe it see if it liked the result. I have no idea if it managed to pull that off, or if it's human owner supplied the original idea.
Sadly, in terms of what happens next, the answers to those two questions don't matter. The idea is out there now and it isn't going to die. Successful implementation is only a matter of time.
Scriddie|29 days ago
Copenjin|1 month ago