jbenjoseph | 3 years ago
Intuitively, training a sufficiently simple linear statistical model on its own output should be a no-op. But LLMs are anything but simple models, so I think the non-linearities may be synthesizing genuinely new, useful information, similar to how all of mathematics can be derived from a few basic axioms given enough intelligence or computation.
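The linear no-op intuition can be sketched with ordinary least squares (an illustrative example, not something from the comment): the model's predictions already lie in the column space of the inputs, so refitting on them recovers the same coefficients.

```python
import numpy as np

# Hypothetical illustration: retraining OLS on its own output is a no-op,
# because the predictions y_hat are already in the column space of X.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

w1, *_ = np.linalg.lstsq(X, y, rcond=None)      # fit on real targets
y_hat = X @ w1                                  # the model's own output
w2, *_ = np.linalg.lstsq(X, y_hat, rcond=None)  # refit on predictions

print(np.allclose(w1, w2))  # True: the refit changed nothing
```

A non-linear model offers no such guarantee, which is where the comment's speculation about synthesizing new information comes in.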