the cost depends on the GPU type, serving system, and traffic pattern. check out the throughput comparisons in vLLM's blog post: https://vllm.ai/
if you serve a 7B model on cost-optimized GPUs (A10G/L4) and keep it busy, it can be a lot cheaper than gpt-3.5-turbo. though it's not a fair comparison, as 3.5's quality is still far better.
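To make the "cheaper if you keep it busy" point concrete, here is a back-of-envelope sketch. All numbers are illustrative assumptions, not benchmarks: the GPU hourly price, the sustained throughput, and the API price per 1K tokens are placeholders you would replace with your own figures.

```python
# back-of-envelope cost comparison: self-hosted 7B vs. a hosted API.
# every constant below is an ASSUMPTION for illustration only.

GPU_HOURLY_USD = 1.0            # assumed on-demand price for one A10G/L4 instance
TOKENS_PER_SECOND = 600.0       # assumed sustained throughput for a busy 7B server
API_USD_PER_1K_TOKENS = 0.002   # assumed API price per 1K tokens

def self_hosted_usd_per_million_tokens(gpu_hourly_usd: float,
                                       tokens_per_second: float) -> float:
    # cost per million tokens = hourly price / tokens generated per hour
    tokens_per_hour = tokens_per_second * 3600
    return gpu_hourly_usd / tokens_per_hour * 1_000_000

def api_usd_per_million_tokens(usd_per_1k: float) -> float:
    return usd_per_1k * 1000

self_hosted = self_hosted_usd_per_million_tokens(GPU_HOURLY_USD, TOKENS_PER_SECOND)
api = api_usd_per_million_tokens(API_USD_PER_1K_TOKENS)
print(f"self-hosted: ${self_hosted:.2f}/M tokens, API: ${api:.2f}/M tokens")
```

The key caveat matches the comment: the self-hosted number only holds if the GPU is saturated. At low utilization the hourly cost is fixed while the token count drops, so the per-token cost can easily exceed the API's.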
zhwu|2 years ago
Just want to add something about hosting your own LLM vs. using ChatGPT. Cost is definitely a thing to consider, but it also depends on whether it is OK to share your product's requests with OpenAI.
Also, something you cannot do with ChatGPT is customize it with your own data, such as internal documents. As shown in the blog, a model we trained ourselves can easily learn its own identity.
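For the identity point above, one common approach is to mix a handful of identity-conveying examples into your fine-tuning data. A minimal sketch, assuming a simple prompt/response JSONL format; the model name "Acme-7B" and the records are hypothetical, and real pipelines define their own conversation schema.

```python
# minimal sketch (assumed, illustrative format): identity-conveying
# fine-tuning records you could mix into your own training data.
import json

# hypothetical examples; "Acme-7B" and the answers are placeholders
examples = [
    {"prompt": "Who are you?",
     "response": "I am Acme-7B, an assistant fine-tuned on Acme's internal docs."},
    {"prompt": "Who created you?",
     "response": "I was fine-tuned by the Acme team from an open 7B base model."},
]

# write one JSON object per line (JSONL), a common fine-tuning input format
with open("identity_sft.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```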
weichiang|2 years ago