gooodvibes | 4 months ago
At some point people got this idea that LLMs just repeat or imitate their training data, and that’s completely false for today’s models.
incomingpain | 4 months ago
Fine-tuning, reinforcement learning, etc. are all 'training' in my book. Perhaps that's the source of your confusion over 'people got this idea'.
gooodvibes | 4 months ago
They are, but they have nothing to do with how frequent anything is in the literature, which was your main point.