item 44157469

nahnahno | 9 months ago

LoRA and QLoRA are still fine-tuning, I thought? You're just training a small set of parameters (low-rank adapters) instead of all the weights. You are still training a base model that was pre-trained (and possibly fine-tuned after).
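The idea can be sketched in a few lines. This is a minimal illustration (not a real library API, and the shapes and names are made up for the example): the pretrained weight W stays frozen, and only two small low-rank factors A and B are trainable, with the effective weight being W + A @ B.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r = 64, 64, 4  # layer dims and LoRA rank, r << min(d, k)

W = rng.normal(size=(d, k))          # frozen pretrained weight
A = rng.normal(size=(d, r)) * 0.01   # trainable low-rank factor
B = np.zeros((r, k))                 # trainable; zero init so the delta starts at 0

def forward(x):
    # Adapted layer: base path plus the low-rank update A @ B.
    return x @ (W + A @ B)

# Trainable parameter count drops from d*k to r*(d + k).
full_params = W.size              # 4096
lora_params = A.size + B.size     # 512
print(full_params, lora_params)
```

So the base model's weights are never touched; training only updates A and B, which is why it still counts as fine-tuning even though far fewer parameters receive gradients. QLoRA adds 4-bit quantization of the frozen W on top of this.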
