GrantMoyer | 6 months ago

Is there any reason to explicitly train for role reversal? Can't you instead swap the input labels on any instruct tuned LLM? The model is trained on both sides of the chat log either way, right?

Tostino | 6 months ago

No. Most of the time, loss is calculated only on the model's response tokens, not on the user's input tokens, so swapping labels doesn't give you a model that was ever trained to produce the user side.
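A minimal sketch of the masking being described, using the common PyTorch convention of setting ignored labels to -100 (the default `ignore_index` of `CrossEntropyLoss`); the `build_labels` helper and role tags here are illustrative, not any particular library's API:

```python
IGNORE_INDEX = -100  # conventional "ignore" label id for PyTorch cross-entropy

def build_labels(token_ids, roles):
    """Copy token ids as labels, masking every non-assistant token.

    Masked positions contribute zero loss, so only assistant-turn
    tokens are actually learned.
    """
    return [
        tok if role == "assistant" else IGNORE_INDEX
        for tok, role in zip(token_ids, roles)
    ]

# Toy chat: a user turn followed by an assistant turn.
ids   = [101, 102, 103, 201, 202]
roles = ["user", "user", "user", "assistant", "assistant"]
print(build_labels(ids, roles))  # [-100, -100, -100, 201, 202]
```

Under this scheme the user tokens are seen as context but never as prediction targets, which is why simply swapping the chat-template labels at inference time doesn't yield a trained "user" model.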