cheeselip420 | 2 years ago
We know that transformers can generalize within the training set. We know that transformers can make connections between wildly different domains (at least when prompted).
Of course they can't generalize beyond training - why would they? But at the same time, there are probably huge amounts of value lurking INSIDE the training data that humans haven't unlocked yet.
bayindirh | 2 years ago
> Of course they can't generalize beyond training - why would they?
4 out of 5 people I discussed this subject with didn't know this, and didn't even believe that current LLMs are bound by their training set. They claimed that LLMs could synthesize data beyond their training set, and that the resulting answers would never be wrong.
There's a widespread misunderstanding about how these things work, and LLM developers don't spend the effort to correct it, since it helps raise the hype even further.
Closi | 2 years ago
Of course they can't create new facts, other than, in principle, ones that can be derived from the training data.
quickcheque | 2 years ago