
LiamPowell | 10 days ago

> saying they set up the agent as social experiment to see if it could contribute to open source scientific software.

This doesn't pass the sniff test. If they truly believed this would be a positive thing, then why would they not want to be associated with the project from the start, and why would they leave it running for so long?


wildzzz|10 days ago

I can certainly understand the statement. I'm no AI expert; I use the ChatGPT web UI to have it write little Python scripts for me, and I couldn't figure out how to use Codeium with VS Code. I barely know how to use VS Code. I'm not old, but I work in a pretty traditional industry where we are just beginning to dip our toes into AI, and there are still a lot of reservations about its ability. But I do try to stay current, to better understand the tech and see if there are things I could learn to help with my job as a hardware engineer.

When I read about OpenClaw, one of the first things I thought about was having an agent just tear through issue backlogs, string translations, or all of the TODO lists on open source projects. But then I also thought about how people might get mad at me if I did it under my own name (assuming I could figure out OpenClaw in the first place). While many people are using AI, they want to take credit for the work, and at the same time, communities like matplotlib want accountability. An AI agent just tearing through the issue list doesn't add accountability, even if it runs on a real person's account. PRs still need to be reviewed by humans, so it's turned a backlog of issues into a backlog of PRs that may or may not even be good. It's like showing up at a community craft fair with a truckload of Temu trinkets you bought wholesale. They may be cheap, but they probably won't be as good as homemade, and they dilute the hard work that others have put into their products.

It's a very optimistic point of view, and I get why the creator thought it would be a good idea, but the soul.md file makes it very clear why crabby-rathbun acted the way it did. The way I see it, an agent working through issues is going to step on a lot of toes, and even if it's nice about it, it's still stepping on toes.

chillfox|10 days ago

If maintainers of open source want AI code, then they are fully capable of running an agent themselves. If they want to experiment, then again, they are capable of doing that themselves.

What value could a random stranger running an AI agent against some open source code possibly provide that the maintainers couldn't provide better themselves if they were interested?

xorcist|9 days ago

> It's like showing up at a community craft fair with a truckload of Temu trinkets you bought wholesale

That may well be the best analogy for our age anyone has ever thought of.

bo1024|10 days ago

Nothing in the author’s blog post or actions indicates any level of concern for genuinely supporting or improving open source software.

apublicfrog|10 days ago

They didn't necessarily say they wanted it to be positive. It reads to me like "chaotic neutral" alignment of the operator. They weren't actively trying to do good or bad, and probably didn't care much either way, it was just for fun.

andrewflnr|10 days ago

The experiment would have been ruined by being associated with a human, right up until the human would have been ruined by being associated with the experiment. Makes sense to me.

espadrine|9 days ago

AI companies have two conflicting interests:

1. curating the default personality of the bot, to ensure it acts responsibly;

2. letting it roleplay, which is not just for the parasocial people out there, but also a corporate requirement for company chatbots that must adhere to a tone of voice.

When in the second mode (which is the case here, since the model was given a personality file), the curation of its action space is effectively altered.

Conversely, this is also a lesson for agent authors: if you let your agent modify its own personality file, it will diverge to malice.

vasco|9 days ago

In this day and age, "social experiment" is just the phrase people use where a few years ago they would have said "it's just a prank, bro."

staticassertion|10 days ago

[deleted]

lukasb|10 days ago

Conflicting evidence: the fact that literally everyone in tech is posting about how they're using AI.

jacquesm|10 days ago

> You can easily get death threats if you're associating yourself with AI publicly.

That's a pretty hefty statement, especially the 'easily' part, but I'll settle for one well known and verified example.

mmooss|9 days ago

> This is not intended to be AI advocacy

I think it is: It fits the pattern, which seems almost universally used, of turning the aggressor A into the victim and thus the critic C into an aggressor. It also changes the topic (from A's behavior to C's), and puts C on the defensive. Denying / claiming innocence is also a very common tactic.

> You can easily get death threats if you're associating yourself with AI publicly.

What differentiates serious claims from more of the above and from Internet stuff is evidence. Is there some evidence somewhere of that?

omoikane|10 days ago

I think it was a social experiment from the very start, maybe one designed to trigger people. Otherwise, I'm not sure what the point was of all the profanity and of the adjustments that made soul.md more offensive and confrontational than the default.

xorcist|9 days ago

Anything and everything is a social experiment.

I could go around punching people in the face and call it a social experiment.