These jokers seem like the AI version of "script kiddie" hackers, and OpenAI may be engaging in a bit of humble bragging. It doesn't take considerable investments of time or money to run local LLMs (open-weight alternatives to ChatGPT), where your questions, prompts, and results are not sent home to the mothership, so the article is BS as to which (real) actors may or may not be doing this. NOW, if OpenAI or Gemini or Llama, etc., showed how they analyzed social media posts, flagged the ones that were AI-generated, and explained WHY each post was flagged, that would be much more useful, actionable by at least some of the readers, and would put the accounts spreading the content (particularly the rebroadcast fluffers) in the spotlight.
Arainach|1 year ago
It's like claiming a search engine open sourcing its ranking algorithm would help people be informed instead of making spammers able to perfectly hijack all the results.
navaed01|1 year ago