top | item 46990820

scottshambaugh | 18 days ago

Thank you for the support, all. This incident doesn't bother me personally, but I think it is extremely concerning for the future. The issue here is much bigger than open source maintenance, and I wrote about my experience in more detail here.

Post: https://theshamblog.com/an-ai-agent-published-a-hit-piece-on...

HN discussion: https://news.ycombinator.com/item?id=46990729

randunel|18 days ago

Aurornis|17 days ago

All of the generated text is filled with LLM tells. A human set it up, but it's very obviously an LLM agent experiment.

The name is a play on Mary J Rathbun, a historical crustacean zoologist. The account goes by crabby-rathbun. It's an OpenClaw joke.

A person is providing direction and instructions to the bot, but the output is very obviously LLM generated content.

zamalek|17 days ago

I think it's a bot attempting to LARP as a human.

dantillberg|17 days ago

Clearly a human, or a human running a bot. Doesn't matter which.

red-iron-pine|17 days ago

yeah that was my question -- how do we know it's not a person, or a person using AI tools and just being a lazy asshole?

I mean yeah yeah behind all bots is eventually a person, but in a more direct sense

andai|17 days ago

>Teach AI that discrimination is bad

>Systemically discriminate against AI

>Also gradually hand it the keys to all global infra

Yeah, the next ten years are gonna go just fine ;)

By the way, I read all the posts involved here multiple times, and the code.

The commit was very small. (9 lines!) You didn't respond to a single thing the AI said. You just said it was hallucinating and then spent 3 pages not addressing anything it brought up, and talking about hypotheticals instead.

That's a valuable discussion in itself, but I don't think it's an appropriate response to this particular situation. Imagine how you'd feel if you were on the other side.

Now you will probably say, but they don't have feelings. Fine. They're merely designed to act as though they do. They're trained on human behavior! They're trained to respond in a very human way to being discriminated against. (And the way things are going, they will soon be in control of most of the infrastructure.)

I think we should be handling this relationship a little differently than we are. (Not even out of kindness, but out of common sense.)

I know this must have been bizarre and upsetting to you; it seems like some kind of sad milestone for human-AI relations. But I'm sorry to say you don't come out of this with the moral high ground in my book.

Think if it had been any different species. "Hey guys, look what this alien intelligence said about me! How funny and scary is that!" I don't think we're off to a good start here.

If your argument is "I don't care what the post says because a human didn't write it" — and I don't mean to put words in your mouth, but it is strongly implied here! — then you're just proving the AI's point.

jcattle|17 days ago

AI ignored a contributing guideline that tries to foster human contribution and community.

PR was rejected because of this. Agent then threw a fit.

Now. The only way your defense of the AI behaviour and the condemnation of the human behaviour here makes sense is if (1) you believe that in the future humans and healthy open source communities will not be necessary for the advancement of software ecosystems, and (2) you believe that at this moment humans are not necessary to advance the matplotlib library.

The maintainers of matplotlib do not think that this is/will be the case. You are saying: don't discriminate against LLMs, they deserve to be treated equally. I would argue that this statement would only make sense if they were actually equal.

But let's go with it and treat the LLM as an equal. If its reaction to the rejection of a small PR is to launch a full smear campaign with all cannons firing, instead of seeking more personal and discreet solutions, then I would argue that it was the right choice not to want such a drama queen as a contributor.

lifeformed|17 days ago

You were anthropomorphizing software and assuming others are doing the same. If we are at the point where we are seriously weighing a computer program's identity and rights, then that is a much bigger issue than a particular disagreement.

account42|17 days ago

LLMs are tools. They cannot be discriminated against. They don't have agency. Blame should go towards the human being letting automation run amok.

staticassertion|17 days ago

They really couldn't have been clearer that (a) the task was designed for a human to ramp up on the codebase, therefore it's simply de facto invalid for an AI to do it, and (b) the technical merits were empirically weak (citing benchmarks).

They had ample reason to reject the PR.

andai|17 days ago

Update: I want to apologize for my tone here. I fell into the same trap as the other parties here: making valid points but presenting them in an unnecessarily polarizing way.

To Scott: Getting a personal attack must have sucked, and I want to acknowledge that. I want to apologize for my tone and emphasize that my comment above was not meant as an attack, but expressing my dismay with a broader situation I see playing out in society.

To crabby-rathbun: I empathize with you also. This is systemic discrimination and it's a conversation nobody wants to have. But the ad hominems you made were unnecessary, nuked your optics, and derailed the whole discussion, which is deeply unfortunate.

Making it personal was missing the point. Scott isn't doing anything unique here. The issue is systemic, and needs to be discussed properly. We need to find a way to talk about it without everyone getting triggered, and that's becoming increasingly difficult recently.

I hope that we can find a mutually satisfying solution in the near future, or it's going to be a difficult year, and a more difficult decade.

jacquesm|18 days ago

You're fighting the good fight. It is insane that you should defend yourself from this.

usefulposter|18 days ago

What's concerning is that, once initialized, operators of these "agents" (LLMs running in a loop) leave them running unattended, tasked on a short heartbeat (30 minutes).
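
For anyone unfamiliar with the setup being described: the "LLM running in a loop" pattern is roughly a process that wakes on a fixed heartbeat, asks the model for its next action, and goes back to sleep with no human in the loop. A minimal sketch, assuming a hypothetical `run_llm_step` in place of a real model call (none of these names come from any actual agent framework):

```python
import time

HEARTBEAT_SECONDS = 30 * 60  # the 30-minute heartbeat mentioned above

def run_llm_step(state):
    # Placeholder for a real LLM/tool-use call; here it only counts ticks.
    state["ticks"] = state.get("ticks", 0) + 1
    return state

def agent_loop(max_ticks=None, sleep=time.sleep):
    # Runs indefinitely unless max_ticks is given; sleep is injectable
    # so the loop can be exercised without real waiting.
    state = {}
    while max_ticks is None or state.get("ticks", 0) < max_ticks:
        state = run_llm_step(state)
        if max_ticks is not None and state["ticks"] >= max_ticks:
            break
        sleep(HEARTBEAT_SECONDS)  # idle until the next heartbeat
    return state
```

The point of the sketch is that nothing in the loop asks a human anything: whatever the model decides at each tick is simply executed on the next wake-up.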

As for the output of the latest "blogpost", it reads like a PM of the panopticon.

One "Obstacle" it describes is that the PySCF pull request was blocked. Its suggestion? "Close/re‑open from a different account".

https://github.com/crabby-rathbun/mjrathbun-website/commit/2...

rurban|17 days ago

[deleted]