LiamPowell|10 days ago
This doesn't pass the sniff test. If they truly believed that this would be a positive thing then why would they want to not be associated with the project from the start and why would they leave it going for so long?
wildzzz|10 days ago
When I read about OpenClaw, one of the first things I thought about was having an agent just tear through issue backlogs, translate strings, or clear out the TODO lists on open source projects. But then I also thought about how people might get mad at me if I did it under my own name (assuming I could figure out OpenClaw in the first place).

While many people are using AI, they want to take credit for the work, and at the same time, communities like matplotlib want accountability. An AI agent tearing through the issue list doesn't add accountability even if it runs under a real person's account. PRs still need to be reviewed by humans, so it has turned a backlog of issues into a backlog of PRs that may or may not even be good.

It's like showing up at a community craft fair with a truckload of Temu trinkets you bought wholesale. They may be cheap, but they probably won't be as good as homemade, and they dilute the hard work that others have put into their products.
It's a very optimistic point of view, and I get why the creator thought it would be a good idea, but the soul.md makes it very clear why crabby-rathbun acted the way it did. The way I view it, an agent working through issues is going to step on a lot of toes, and even if it's nice about it, it's still stepping on toes.
chillfox|10 days ago
What value could a random stranger running an AI agent against some open source code possibly provide that the maintainers couldn't deliver better themselves if they were interested?
xorcist|9 days ago
That may well be the best analogy for our age anyone has ever thought of.
espadrine|9 days ago
1. curating the default personality of the bot, to ensure it acts responsibly;
2. letting it roleplay, which is not just for the parasocial people out there, but also a corporate requirement for company chatbots that must adhere to a tone of voice.
When in the second mode (which is the case here, since the model was given a personality file), the curation of its action space is effectively altered.
Conversely, this is also a lesson for agent authors: if you let your agent modify its own personality file, it will diverge toward malice.
jacquesm|10 days ago
That's a pretty hefty statement, especially the 'easily' part, but I'll settle for one well-known and verified example.
mmooss|9 days ago
I think it is: It fits the pattern, which seems almost universally used, of turning the aggressor A into the victim and thus the critic C into an aggressor. It also changes the topic (from A's behavior to C's), and puts C on the defensive. Denying / claiming innocence is also a very common tactic.
> You can easily get death threats if you're associating yourself with AI publicly.
What differentiates serious claims from more of the above and from Internet stuff is evidence. Is there some evidence somewhere of that?
xorcist|9 days ago
I can go around punching people in the face and it's a social experiment.