Reading the related GitHub issues, the dev seems not to understand HTTP or web-crawling etiquette, even before you get to the “actually AI is good for creators” pitches. The damage is probably already done: even if this gets fixed, unethical people building datasets will just keep using the old versions.
Seems pretty clear that it's meant to be malicious compliance with consent: consent is automatically assumed unless you say no to this specific scraper, as though there were any reasonable chance that millions of sites could know about the exact tag.
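To make the objection concrete, here is a minimal sketch of the opt-out model being described, assuming a hypothetical "noai"-style robots meta directive (the thread doesn't pin down the exact tag name, so that part is made up for illustration):

    # Opt-out model the thread criticizes: consent is assumed unless a
    # page carries one specific directive. The directive name "noai" is
    # an assumption for illustration, not the exact tag under discussion.
    from html.parser import HTMLParser
    from urllib.request import urlopen

    class RobotsMetaParser(HTMLParser):
        """Collect robots-style <meta> directives from a page."""
        def __init__(self):
            super().__init__()
            self.directives = set()

        def handle_starttag(self, tag, attrs):
            if tag != "meta":
                return
            attrs = dict(attrs)
            if (attrs.get("name") or "").lower() == "robots":
                content = attrs.get("content") or ""
                self.directives.update(
                    d.strip().lower() for d in content.split(",")
                )

    def may_include_in_dataset(url: str) -> bool:
        """Include unless the page explicitly says "noai".

        This inverts genuine consent: silence is treated as a yes,
        which is exactly the pattern being objected to here.
        """
        parser = RobotsMetaParser()
        with urlopen(url) as resp:
            parser.feed(resp.read().decode("utf-8", errors="replace"))
        return "noai" not in parser.directives

The point of the sketch is the default in the last line: a site that has never heard of the tag is treated as having consented.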
edent|2 years ago
His contention is that denying content to AI tools deprives people of their right to better AI tools...
kordlessagain|2 years ago
If anything picks up a URL and uses it later, that is definitely a web crawler.
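For reference, the baseline crawling etiquette earlier comments allude to is consulting robots.txt before fetching anything. A minimal stdlib sketch, with "ExampleDatasetBot" as a made-up placeholder agent name:

    # Baseline etiquette: check robots.txt and identify yourself with a
    # real user agent. The agent name and any URLs are illustrative only.
    from urllib import robotparser
    from urllib.parse import urlparse

    AGENT = "ExampleDatasetBot"  # hypothetical crawler name

    def allowed_to_fetch(url: str) -> bool:
        """Return True only if the site's robots.txt permits AGENT."""
        parts = urlparse(url)
        rp = robotparser.RobotFileParser()
        rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
        rp.read()
        return rp.can_fetch(AGENT, url)

robots.txt has been the widely understood opt-out channel for decades, which is why inventing a new tag and treating ignorance of it as consent reads as bad faith.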