In theory, but I think the problem you'd find in practice is that reviewing 100s of nearly-identical AI-written articles gets very boring very quickly, and a lot of errors would slip through.
Ok, so 2 is not enough. What if you put 4 humans? Or 8? Or maybe even 20 (instead of 100, per OP)?
These experiments fail because they greedily try to outsource all the work to AI. Another recent example: the lawyer who submitted ChatGPT hallucinations directly to a court case.
You don't need to eliminate humans, and certainly not at first. You just need to be much more efficient than the status quo in order for AI to be deployed at scale.
Why? Lots of people do a lot of reading for a living. Reading a new article is way more interesting than thousands of jobs I can think of. Data analysts and accountants literally pore through millions of featureless numbers over their careers, why would this be any different?
It's like trying to find typos in your own writing. It's very difficult to stay focused when reading a long series of nearly identical documents. The people you're talking about are reading a lot of novel documents.
guiambros|2 years ago
soligern|2 years ago
worrycue|2 years ago
The numbers are probably meaningful to them due to their education though.
It’s like saying programmers spend their whole careers poring through millions of lines of nigh-random symbols they call source code.
maxbond|2 years ago