top | item 34890185

jbenjoseph | 3 years ago

But even so, the human picks the prompts and only publishes the AI outputs they think read nicely. There is information gain even in that.

throwanem | 3 years ago

At the moment that's probably true, but is it guaranteed to remain so?

jbenjoseph | 3 years ago

Even so, I don't think there is any evidence that LLM performance degrades when a model is trained on its own output, and there is no intuitive reason it should.