nrmitchi | 6 months ago
Yes, there is a huge problem with AI content flooding the field, and being able to identify/exclude it would be nice (for a variety of purposes)
However, the issue isn't that content was "AI generated"; as long as the content is correct and is what the user was looking for, users don't really care.
The issue is content that was generated en masse, is largely incorrect or untrustworthy, and serves only to game SEO/clicks/screentime/etc.
A system where the content you are actually trying to avoid has to opt in is doomed to failure. Is the purpose/expectation here that search/CDN companies attempt to classify and identify "AI content"?
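To make the opt-in failure mode concrete, here's a minimal sketch assuming a hypothetical `AI-Disclosure` response header (the header name and values are invented for illustration, not any real standard): a self-declaration can only ever confirm AI content, never rule it out, so the spam you want to filter lands in the same bucket as every honest page.

```python
# Hypothetical sketch of an opt-in disclosure header. "AI-Disclosure"
# is an invented name for illustration -- no such standard exists.

def classify(headers: dict) -> str:
    """Classify a response by a hypothetical opt-in disclosure header."""
    if headers.get("AI-Disclosure", "").lower() == "ai-generated":
        return "ai (self-declared)"
    # Mass-generated SEO spam simply omits the header, so it is
    # indistinguishable from legitimate human-written pages.
    return "unknown"

honest_bot = {"AI-Disclosure": "ai-generated"}
seo_spam = {}    # the content you actually want to avoid doesn't opt in
human_page = {}

print(classify(honest_bot))   # ai (self-declared)
print(classify(seo_spam))     # unknown
print(classify(human_page))   # unknown -- same bucket as the spam
```

The only actionable signal is the cooperative case, which is exactly the content nobody needed to filter in the first place.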
edoceo | 6 months ago
https://www.ietf.org/rfc/rfc3514.txt
Note date published
nrmitchi | 6 months ago
The current approach is that the content served is the same for humans and agents (i.e., a site serves consistent content regardless of the client), so who a specific header is "meant for" is a moot point here.