As someone who actually uses the API for real products, I don't think the OP understands what reduced latency and reduced cost mean: everything related to building a more advanced RAG, for example adding agentic features to it, sooner or later runs into the same issues of speed and cost. GPT-4 Turbo was simply too slow and too expensive for us to use it fully. GPT-4 is plenty intelligent for many use cases. Also, why on Earth would OpenAI launch a dramatically better model as long as their competitors don't force them to? The smart move for OpenAI would be to let their competitors almost catch up to GPT-4 before launching GPT-5, and no competitor is truly there yet.
antupis|1 year ago
Is that how Silicon Valley has worked for the last 20+ years? You deploy fast, get customer feedback, and then fix stuff based on that feedback. OpenAI holding back progress kind of goes against the ethos of SV.