ChatGPT API – How do I handle the 4000 token limit for keeping context?
3 points | johnnyyyy | 3 years ago
The simplest solution would of course be to just delete the oldest messages, but that risks discarding the more important ones.
Another idea was that I could let ChatGPT summarise the conversation when I am close to the limit and then only save that message.
Maybe there's already some better implementation that I'm not aware of?
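The first idea (dropping the oldest messages) can be sketched as a sliding window over the chat history. This is only a sketch under two assumptions I'm making: a crude ~4-characters-per-token estimate stands in for a real tokenizer, and messages are the `{"role": ..., "content": ...}` dicts the chat API expects. Pinning the system message avoids the "deletes more important messages" problem for at least the instructions:

```python
def estimate_tokens(message):
    # Crude heuristic: ~4 characters per token for English text,
    # plus a small constant for role/formatting overhead.
    # A real implementation would use an actual tokenizer.
    return len(message["content"]) // 4 + 4

def trim_history(messages, max_tokens=4000, reserved_for_reply=500):
    """Drop the oldest non-system messages until the rest fit the budget."""
    budget = max_tokens - reserved_for_reply
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    while rest and sum(map(estimate_tokens, system + rest)) > budget:
        rest.pop(0)  # delete the oldest conversational message first
    return system + rest
```

Reserving some of the budget for the reply matters because the 4k limit covers prompt and completion combined. The summarisation idea would slot in here too: instead of `rest.pop(0)`, collect the popped messages and replace them with one summary message.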
bob1029|3 years ago
https://platform.openai.com/docs/guides/fine-tuning
It can be cheaper overall to use a more expensive model (in $ per token) if it can be tuned on the same context you'd otherwise have to send with every prompt.
Right now, gpt-3.5-turbo cannot be tuned, so you must fit everything in the 4k limit. There are 4 other models that can be tuned.
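The tradeoff described above can be sketched numerically. The prices below are hypothetical placeholders chosen for illustration, not OpenAI's actual rates, and the one-time fine-tuning training cost is left out:

```python
# Hypothetical per-1K-token prices -- placeholders, NOT real pricing.
cheap_price = 0.002   # $/1K tokens, cheap model without fine-tuning
tuned_price = 0.012   # $/1K tokens, pricier fine-tunable model

context_tokens = 3000  # context you'd otherwise resend on every call
prompt_tokens = 500    # the actual question each time

# Cheap model: pay for the repeated context plus the prompt every call.
cheap_per_call = (context_tokens + prompt_tokens) / 1000 * cheap_price
# Tuned model: the context is baked into the weights, so only the
# prompt itself is billed.
tuned_per_call = prompt_tokens / 1000 * tuned_price
```

With these made-up numbers the tuned model comes out cheaper per call ($0.006 vs $0.007) despite a 6x higher token price, because 3000 tokens of context no longer ride along with every request.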