reaperman | 11 months ago

Edit: wahnfrieden corrected me. I incorrectly posited that the CoT was only included in the context window during the reasoning task and later left out entirely. Edited to remove potential misinformation.

wahnfrieden | 11 months ago

No, the CoT is not simply extra context. The models are specifically trained to use CoT, and that includes treating it as unspoken thought.

reaperman | 11 months ago

Huge thank you for correcting me. Do you have any good resources I could look at to learn how the previous CoT is included in the input tokens and treated differently?

monsieurbanana | 11 months ago

In which case the model couldn't possibly know that the number was correct.

Me1000 | 11 months ago

I'm also confused by that, but it could just be the model being agreeable. I've seen multiple examples posted online, though, where it's fairly clear that the CoT output is not included in subsequent turns. I don't believe Anthropic is public about it (could be wrong), but I know that the Qwen team specifically recommends against including CoT tokens from previous inferences.
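
For illustration, here's a minimal sketch of what dropping CoT tokens from previous turns might look like, assuming Qwen-style <think>...</think> delimiters and a plain role/content message list (the tag name and message shape here are assumptions on my part, not any vendor's documented format):

    import re

    # Assumed delimiter: Qwen-style models emit reasoning inside
    # <think>...</think> tags; the exact format varies by model.
    THINK_RE = re.compile(r"<think>.*?</think>", re.DOTALL)

    def strip_cot(messages):
        """Remove CoT spans from prior assistant turns before the next request.

        Only the visible answer stays in the history; the model generates
        fresh reasoning for the current turn at inference time.
        """
        cleaned = []
        for msg in messages:
            if msg["role"] == "assistant":
                content = THINK_RE.sub("", msg["content"]).strip()
                cleaned.append({"role": "assistant", "content": content})
            else:
                cleaned.append(msg)
        return cleaned

    history = [
        {"role": "user", "content": "What is 17 * 23?"},
        {"role": "assistant",
         "content": "<think>17*23 = 17*20 + 17*3 = 340 + 51 = 391</think>391"},
        {"role": "user", "content": "Now add 9 to that."},
    ]

    # The next request sees only "391", not the reasoning that produced it.
    print(strip_cot(history))

If the history is stripped like this, a later turn only ever sees the final answer, which is consistent with the point upthread that the model couldn't actually verify how the number was derived.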