top | item 47009159


dnw | 16 days ago

Unclear how much of this is autonomous behavior versus human-induced behavior. Two random thoughts: 1) Why can't we put GitHub behind one of those CloudFlare bot detection WAFs? 2) Would publishing a "human only" contribution license/code of conduct be a good thing? (I understand bots don't have to follow it, but at least you can point at it.)



viraptor|16 days ago

> Why can't we put GitHub behind one of those CloudFlare bot detection WAFs

At the small scale of individual cases it's useless. It can block a large network with known characteristics, but it's not going to block openclaw driving your personal browser with your login info.

> Would publishing a "human only" contribution license/code of conduct

It would get super muddy with edge cases. What about dependabot? What about someone sending you an automated warning about something important? There's nothing here that is bot-specific either - a basic rule like "posting rants rather than useful content will get you banned" would be appropriate for both humans and bots.

We don't really need a special statement here.

nickorlow|16 days ago

GitHub/Microsoft would likely prefer that you allow AI contributors, and wouldn't want to provide a signal that helps filter them out.

amarcheschi|16 days ago

Microslop is more and more fitting as time passes

xena|16 days ago

An easy fix for GitHub is to clearly mark which PRs and comments are done via the web vs the API. This will let people at least have some idea.

Wowfunhappy|16 days ago

...but, like, why even offer an API at that point? Every API-initiated PR would become suspect. And it would only work until the bots figure out the internal API or use the website directly.

avaer|16 days ago

Unfortunately, enforcing "human behind the push" would break so many automations that I don't think it's tenable.

But it would be nice if GitHub had a feature that let a human attest to a commit/PR in a way that couldn't be automated. Something like signed commits combined with a captcha or biometric check (which I realize has its own problems, but I can't think of a better way to do this attestation).

Then an open source maintainer could just automatically block unattested PRs if they want. And if someone's AI is running rampant, at least you could block the person.
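A weaker version of this filter can be approximated today with commit signature verification: the GitHub REST API (`GET /repos/{owner}/{repo}/pulls/{number}/commits`) returns a `commit.verification` object per commit, so a maintainer-side check could reject PRs containing unverified commits. A minimal sketch, assuming that response shape; the blocking policy itself is hypothetical, and as the comment notes, a signature attests to key possession, not to a human being behind the push:

```python
def all_commits_verified(commits):
    """Return True only if every commit in a PR carries a verified signature.

    `commits` is the JSON list returned by the GitHub REST endpoint
    GET /repos/{owner}/{repo}/pulls/{number}/commits; each entry contains
    a commit.verification object with a boolean `verified` field.
    """
    return all(
        c.get("commit", {}).get("verification", {}).get("verified", False)
        for c in commits
    )

# Example payload shaped like the API response (truncated to relevant fields):
sample = [
    {"commit": {"verification": {"verified": True, "reason": "valid"}}},
    {"commit": {"verification": {"verified": False, "reason": "unsigned"}}},
]

print(all_commits_verified(sample))        # one unsigned commit -> False
print(all_commits_verified(sample[:1]))    # all verified -> True
```

A CI job could run this on each PR and leave a status check, which branch protection can then require - still automatable end to end, which is exactly the gap the captcha/biometric idea tries to close.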

QuadmasterXLII|16 days ago

Not so clear whether this matters for harm intensity: anything an AI can be induced to do by a human, and can then do at scale, some human will immediately induce it to do - especially if it's harmful.