top | item 40153725

samlhuillier | 1 year ago

I like to think of this like fine-tuning LLMs. When you fine-tune an LLM, it doesn't pick up and memorise all of the training data. Rather, it adjusts its weights — its perspective — based on the content it was trained on.
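To make the analogy concrete, here's a minimal sketch (hypothetical, plain Python, not any real fine-tuning API): a one-weight linear model y = w * x nudged toward new data with a few SGD steps. Nothing stores the training pairs themselves; only the weight shifts a little toward what the data implies.

```python
def fine_tune(w, data, lr=0.01, epochs=20):
    """Nudge weight w toward new (x, y) pairs via gradient descent on MSE."""
    for _ in range(epochs):
        for x, y in data:
            pred = w * x
            grad = 2 * (pred - y) * x  # d/dw of (w*x - y)^2
            w -= lr * grad             # small adjustment, not memorisation
    return w

# "Pre-trained" weight w = 2.0, then fine-tuned on data drawn from y = 3x:
# w drifts toward 3.0 but retains no copy of the (x, y) examples.
w1 = fine_tune(2.0, [(1.0, 3.0), (2.0, 6.0)])
print(round(w1, 2))
```

The same intuition scales up: fine-tuning moves billions of weights by small amounts in a direction the data suggests, rather than recording the data verbatim.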
