wrcwill | 6 months ago
it seems to truncate your prompt even when it's under the "maximum message length", and yeah, around 55k tokens is where it starts to happen.
extremely annoying. o1 pro worked up until 115k or so; both o3 and gpt5 have the issue. (it happens on all models for me, not just the pro variants.)
with the new 400k context length in the API, I would expect at least 128k message lengths, and maybe 200k context in chat.
energy123 | 6 months ago
I'm putting the highest-quality context into those 50k tokens and attaching the rest for RAG. But maybe there is a better way.
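A minimal sketch of that split, under assumptions of my own (not the commenter's actual setup): chunks are ranked by some relevance score (e.g. embedding similarity), the highest-scoring ones are packed greedily into the inline token budget, and everything else is set aside for the retrieval index. Token counts here are a crude word-count heuristic; real code would use a proper tokenizer such as tiktoken.

```python
def approx_tokens(text: str) -> int:
    # Rough heuristic: ~1.3 tokens per whitespace-separated word.
    # Swap in a real tokenizer (e.g. tiktoken) for accurate budgets.
    return int(len(text.split()) * 1.3)

def split_for_budget(chunks, scores, budget=50_000):
    """Greedily pack the highest-scoring chunks into the inline prompt
    budget; everything that doesn't fit goes to the RAG index instead."""
    inline, for_rag, used = [], [], 0
    # Highest relevance first, so the budget is spent on the best context.
    for chunk, _score in sorted(zip(chunks, scores), key=lambda p: -p[1]):
        cost = approx_tokens(chunk)
        if used + cost <= budget:
            inline.append(chunk)
            used += cost
        else:
            for_rag.append(chunk)
    return inline, for_rag
```

The scoring function is the interesting part and is left out here; anything from embedding cosine similarity to manual curation (as in the comment above) would work.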
wrcwill | 6 months ago