mathewsanders | 5 months ago
I’m presuming that one class of junk/low-quality output occurs when the model has no high-probability next tokens and works with whatever poor options it has.
Maybe low-probability tokens that cross some threshold could get a visual treatment, the same way word processors flag a spelling or grammatical error.
But maybe I’m making a mistake thinking that token probability is related to the accuracy of output?
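A minimal sketch of the "spell-check for tokens" idea above, assuming the API returns per-token logprobs alongside the completion. The sample values and the `flag_low_confidence` helper are made up for illustration:

```python
import math

# Hypothetical (token, logprob) pairs, as an LLM API might return
# alongside its completion. The values here are invented.
SAMPLE = [
    ("The", -0.01),
    ("capital", -0.05),
    ("of", -0.02),
    ("Zembla", -4.8),   # low-probability token: candidate for flagging
    ("is", -0.03),
]

def flag_low_confidence(token_logprobs, threshold=0.05):
    """Mark tokens whose probability falls below `threshold`,
    the way a word processor underlines a suspect word."""
    out = []
    for token, logprob in token_logprobs:
        prob = math.exp(logprob)  # logprob -> probability
        out.append(f"~{token}~" if prob < threshold else token)
    return " ".join(out)

print(flag_low_confidence(SAMPLE))  # -> The capital of ~Zembla~ is
```

A real UI would render the flag as an underline rather than tildes, and the threshold would need tuning, since rare-but-correct tokens (proper nouns, numbers) also score low.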
StilesCrisis | 5 months ago
never_inline | 5 months ago
Isn't that what logprobs is?
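For context on the term: logprobs are log-probabilities, so `exp` recovers the per-token probability, and summing logprobs gives the log-probability of the whole sequence. A quick sketch with invented values:

```python
import math

# Invented per-token logprobs for a three-token completion.
logprobs = [-0.1, -0.2, -3.0]

# Per-token probabilities.
token_probs = [math.exp(lp) for lp in logprobs]

# Probability of the whole sequence: product of token probabilities,
# i.e. exp of the sum of logprobs.
sequence_prob = math.exp(sum(logprobs))

print(token_probs)
print(sequence_prob)
```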