GPT-5 doesn't "think" anything, and all LLMs routinely return incorrect things in their output, which is why you're meant to have a human who knows what they're doing review it before doing anything with it/wasting anyone else's time with it/drinking that bottle of stuff you found under the sink.
Shank|6 months ago
It’s interesting to see this type of redaction appear as plain text in the shared document (at least on mobile, it’s indistinguishable from the response text).
unknown|6 months ago
[deleted]
bananapub|6 months ago