(no title)
gs17 | 2 days ago
Not recognizing they were outputting wrongthink until after it has already been streamed to the user is a known behavior of some Chinese chatbot apps. A quick search found an example of DeepSeek doing it: https://www.reddit.com/r/OpenAI/comments/1ic3kl6/deepseek_ce...
I don't think his story is genuine, but showing the "wrong" answer before correcting itself is known behavior.
EDIT: Here's an example of it outputting a full response about Taiwan specifically before removing it: https://www.reddit.com/r/interestingasfuck/comments/1i7ceol/...
recursive | 2 days ago