chad1n | 1 year ago

I don't think that's the case. When a model is reasoning, it sometimes starts gaslighting itself and "solving" problems completely different from the one you gave it. Reasoning can help in general, but very frequently it also makes the output more nondeterministic. Without reasoning, the model usually just writes some code from its training data; with reasoning, it can end up hallucinating hard. Yesterday I asked Claude (thinking) to solve a problem in C++ and it showed the result in Python.
