(no title)
rsiqueira | 2 years ago
It's 8x more expensive, indeed. I'm comparing against my use case: the standard gpt-3.5 API, where my users consume about 4k input tokens per request (context plus chat history) and almost 1k output tokens.
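To make the "8x" concrete, here's a rough per-request cost sketch for that 4k-input / 1k-output usage pattern. The per-1k-token prices are placeholders I made up to illustrate the arithmetic, not actual OpenAI pricing:

```python
# Rough per-request cost comparison. The per-1k-token prices below are
# placeholders, NOT real API pricing; substitute current rates.
def request_cost(input_tokens, output_tokens, price_in_per_1k, price_out_per_1k):
    # Cost = (input tokens / 1000) * input rate + (output tokens / 1000) * output rate
    return (input_tokens / 1000) * price_in_per_1k + (output_tokens / 1000) * price_out_per_1k

# Usage pattern from the comment: ~4k input tokens, ~1k output tokens.
cheap = request_cost(4000, 1000, 0.0015, 0.002)   # hypothetical gpt-3.5-class rates
pricey = request_cost(4000, 1000, 0.012, 0.016)   # hypothetical 8x-priced model
print(f"{pricey / cheap:.1f}x")  # -> 8.0x with these placeholder numbers
```

With a heavily input-skewed workload like this, the input-token rate dominates the bill, so the effective multiplier depends mostly on the input price ratio.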