UnlockedSecrets | 1 year ago

You can ask, but the answer will be made up, not grounded in reality.

j_bum | 1 year ago

Sure, but I’m curious if it would serve to provide some self-regulation.

E.g., all of this “thinking” trend that’s happening. It would be interesting if the model did a first pass, scored its individual outputs, then reviewed its scores and censored/flagged the low ones.

I know it’s all “made up,” but I generally have a lot of success asking the model to give 0–1 confidence ratings on its answers, especially for niche questions that are likely outside the training set.
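A minimal sketch of this two-pass self-scoring idea. `ask_model` is a hypothetical stand-in for whatever chat API you use; only the scoring/flagging step is concrete here:

```python
def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for a real chat-API call."""
    raise NotImplementedError("plug in your chat API here")

def score_and_filter(answers: list[str], scores: list[float],
                     threshold: float = 0.5) -> list[tuple[str, float, bool]]:
    """Pair each first-pass answer with its 0-1 confidence score from the
    second pass, and flag (True) any answer scored below the threshold."""
    return [(ans, s, s < threshold) for ans, s in zip(answers, scores)]
```

So after the model answers and then rates its own answers, something like `score_and_filter(["Paris", "maybe 1987?"], [0.95, 0.2])` would flag the second, low-confidence answer for review.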

rafram | 1 year ago

It doesn’t. Asking for confidence doesn’t prompt it to make multiple passes, and there’s no real concept of “passes” when you’re talking about non-reasoning models. The model takes in text and image tokens and spits out the text tokens that logically follow them. You can try asking it to think step by step, or you can use a reasoning model that essentially bakes that behavior into the training data, but I haven’t found that to be very useful for OCR tasks. If the encoded version of your image doesn’t resolve to text in the model’s latent space, it never will, no matter how much the model “reasons” (spits out intermediate text tokens) before giving a final answer.
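A toy illustration of that point: an autoregressive model emits one token at a time, each conditioned on everything before it, with no separate "passes." The lookup table below is a hypothetical stand-in for a real model's forward pass:

```python
def toy_next_token(context: list[str]) -> str:
    # Stand-in for a model's forward pass: map the last token to the next.
    # A real model conditions on the whole context, but the loop is the same.
    table = {"the": "cat", "cat": "sat", "sat": "<eos>"}
    return table.get(context[-1], "<eos>")

def generate(prompt: list[str], max_tokens: int = 10) -> list[str]:
    """Greedy autoregressive decoding: repeatedly append the next token.
    Asking for confidence, or for step-by-step reasoning, just means more
    tokens appended to this same stream -- there is no second pass."""
    out = list(prompt)
    for _ in range(max_tokens):
        tok = toy_next_token(out)
        if tok == "<eos>":
            break
        out.append(tok)
    return out
```

For example, `generate(["the"])` walks the table to `["the", "cat", "sat"]` and stops; a reasoning model's "thinking" is the same loop, just trained to emit intermediate tokens before the final answer.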