top | item 34920933

jbenjoseph | 3 years ago

I have seen no evidence for that, only the opposite: https://arxiv.org/abs/2210.11610

Intuitively, training a simple enough linear statistical model on its own output should be a no-op. But LLMs are anything but simple models, so I think the non-linearities may be synthesizing new useful information, similarly to how all of mathematics can be derived from a few basic axioms given enough intelligence or computation.
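That no-op intuition for the linear case can be sketched in a toy example (my own illustration, not from the linked paper): fit a least-squares model, then refit it on its own predictions. Since the predictions already lie exactly in the model's hypothesis space, the refit recovers the same coefficients.

```python
import numpy as np

# Toy demonstration: self-training a linear model is a no-op.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(100), rng.normal(size=100)])  # intercept + one feature
y = 2.0 + 3.0 * X[:, 1] + rng.normal(scale=0.5, size=100)  # noisy linear targets

w1, *_ = np.linalg.lstsq(X, y, rcond=None)       # fit on the real targets
y_self = X @ w1                                  # the model's own output
w2, *_ = np.linalg.lstsq(X, y_self, rcond=None)  # refit on that output

print(np.allclose(w1, w2))  # True: retraining on its own output changed nothing
```

An LLM fine-tuned on its own (filtered or reranked) generations has no such guarantee, which is where the linked paper's gains presumably come from.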
