hey! sorry about that, it's still not perfect, but it shows that using a CoT prompt does improve LLM responses. Compared with its base model, you can clearly see some difference. If you like, please email me at contact@pixelverse.tech with some of the prompts you provided that t1 failed to answer correctly, and I can take a look.
latexr|1 year ago
A wrong answer is a wrong answer. In one of the questions it failed exactly in the same manner that GPT-4o did when I asked, so it’s not clear at all this is better. I could even see the chain and identify exactly where it made the mistake, but that’s not really a consolation.
hayden_k|1 year ago