Even so, I don't think there is any evidence that LLM performance degrades when a model is trained on its own output, and there is no intuitive reason it should.
Intuitively, training a simple enough linear statistical model on its own output should be a no-op. But LLMs are anything but simple models, so I think the non-linearities may be synthesizing new useful information, much as all of mathematics can be derived from a few basic axioms given enough intelligence or computation.
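The linear-model intuition can be checked directly. Here is a minimal numpy sketch (synthetic data and made-up coefficients, purely illustrative) showing that refitting ordinary least squares on its own predictions returns the same weights, i.e. self-training is a no-op for this class of model:

```python
# Sketch of the "no-op" claim for linear models: refitting ordinary
# least squares on its own predictions recovers the original weights,
# because those predictions already lie in the column space of X.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))  # design matrix (full column rank)
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

w1, *_ = np.linalg.lstsq(X, y, rcond=None)      # fit on real targets
y_hat = X @ w1                                   # model's own output
w2, *_ = np.linalg.lstsq(X, y_hat, rcond=None)  # refit on its own output

print(np.allclose(w1, w2))  # True: self-training changed nothing
```

A non-linear model with a stochastic training procedure gives no such guarantee, which is where the question gets interesting.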
throwanem|3 years ago
jbenjoseph|3 years ago