top | item 42810054

yding | 1 year ago

Thanks Simon. I think this might solve one of the most common questions people ask me: how do I get Perplexity-like inline citations on my LLM output?

This looks like model fine-tuning rather than after-the-fact pseudo-justification. Do you agree?

simonw | 1 year ago

Yeah, I think they fine-tuned their model to be better at the pattern where you output citations that reference exact strings from the input. Previously that's been a prompting trick, e.g. here: https://mattyyeung.github.io/deterministic-quoting
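What makes the exact-string pattern attractive is that it's trivially verifiable after the fact: if the model claims to quote the input, you can check the quote with plain substring matching. A minimal sketch of that verification step (hypothetical function and data, not any vendor's actual implementation):

```python
# Sketch: check that each citation a model claims to quote
# actually appears verbatim in the source document.

def verify_citations(source: str, citations: list[str]) -> dict[str, bool]:
    """Map each claimed quote to whether it occurs verbatim in the source."""
    return {quote: quote in source for quote in citations}


source = "The canal was completed in 609 AD under the Sui dynasty."
citations = [
    "completed in 609 AD",  # genuine verbatim quote
    "completed in 610 AD",  # hallucinated variant, should fail
]

results = verify_citations(source, citations)
print(results)
# {'completed in 609 AD': True, 'completed in 610 AD': False}
```

Fine-tuning the model to emit quotes in this checkable form is what moves citations from plausible-looking text to something you can mechanically validate.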

yding | 1 year ago

Makes sense. I wonder whether it affects the quality of the model's output apart from the quotes, as I could imagine that splitting the output up to insert quotes causes the model to lose attention on what it was saying.