I don't usually see responses fail. But what I did see shortly after the GPT-5 release (when servers were likely overloaded) was the model "thinking" for over 8 minutes. It seems like (if you manually select the model) you're simply getting throttled (or put in a queue).
aniviacat|6 months ago
addaon|6 months ago