blah2244 | 1 year ago
Each token the model outputs requires it to attend over all of the context it already has (the query plus whatever it has generated so far). By allowing it more tokens to "reason", you're letting it re-read that context many times over, similar to how a person might turn a problem over in their head before committing to an answer. Given the performance of reasoning models on complex tasks, I'm of the opinion that the "more tokens with reasoning prompting" approach is at least a decent model of the process a human would go through to "reason".
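To make the "re-reading" point concrete, here's a rough back-of-the-envelope sketch (not any particular model's code; the function and numbers are purely illustrative): since output token i attends over the prompt plus the i−1 tokens generated before it, the total attention work grows much faster than linearly in the output budget.

```python
def attention_reads(prompt_len: int, output_len: int) -> int:
    """Total context positions attended over while generating output_len tokens.

    Output token i (0-indexed) attends over prompt_len + i earlier positions.
    """
    return sum(prompt_len + i for i in range(output_len))

# Hypothetical budgets: a short direct answer vs. a long "reasoning" trace.
short = attention_reads(prompt_len=100, output_len=50)
long = attention_reads(prompt_len=100, output_len=500)

# A 10x larger token budget yields far more than 10x the total context
# evaluation, which is one way to see why extra reasoning tokens buy the
# model substantially more effective compute per question.
print(short, long)
```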