matteoraso|2 years ago
This pricing is probably more expensive than gpt-3.5-turbo 4k context. A large prompt for the API would be 1k tokens in and 1k tokens out, which comes to $0.0035 for OpenAI ($0.0015/1k input plus $0.002/1k output). Your website says to expect a request to take 4 seconds minimum, so at your per-second rate that's at least $0.004. Given how light Mistral is, I think you'd have to cut your price by at least a factor of 10 for it to be reasonable.
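The arithmetic behind this comparison can be sketched as below. The gpt-3.5-turbo 4k rates ($0.0015/1k input tokens, $0.002/1k output tokens) match OpenAI's pricing at the time, and the $0.001/second figure for the hosted Mistral API is an assumption inferred from "4 seconds minimum" costing $0.004:

```python
# Rough cost comparison for a 1k-in / 1k-out request.
# All rates are assumptions taken from the comment above, not official figures.
OPENAI_INPUT_PER_TOKEN = 0.0015 / 1000   # gpt-3.5-turbo 4k, $ per input token
OPENAI_OUTPUT_PER_TOKEN = 0.002 / 1000   # gpt-3.5-turbo 4k, $ per output token
MISTRAL_PER_SECOND = 0.001               # implied rate: $0.004 / 4 s (assumption)

def openai_cost(tokens_in: int, tokens_out: int) -> float:
    """Cost of one request under token-based pricing."""
    return tokens_in * OPENAI_INPUT_PER_TOKEN + tokens_out * OPENAI_OUTPUT_PER_TOKEN

def mistral_cost(seconds: float) -> float:
    """Cost of one request under the assumed per-second pricing."""
    return seconds * MISTRAL_PER_SECOND

print(f"OpenAI:  ${openai_cost(1000, 1000):.4f}")   # about $0.0035
print(f"Mistral: ${mistral_cost(4):.4f}")           # about $0.0040 at the 4 s minimum
```

So even at the stated minimum latency, the per-second pricing already edges out the gpt-3.5-turbo token pricing, and longer generations only widen the gap.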
bazmattaz|2 years ago