top | item 44293860

sakras | 8 months ago

Typically requests are binned by context length so they can be batched together. For example, you might have a 10k bin, a 50k bin, and a 500k bin, and drop any context past 500k. The cost is then fixed per bin.
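A minimal sketch of that binning scheme, assuming hypothetical bin edges of 10k/50k/500k tokens (requests longer than the largest bin are truncated to it):

```python
import bisect

# Hypothetical bin edges in tokens; not any provider's actual configuration.
BINS = [10_000, 50_000, 500_000]

def assign_bin(context_len: int) -> int:
    """Return the smallest bin that fits the request; truncate past the max."""
    i = bisect.bisect_left(BINS, context_len)
    if i == len(BINS):
        return BINS[-1]  # drop context beyond the largest bin
    return BINS[i]

def batch_by_bin(context_lens):
    """Group requests by bin so same-sized requests can be batched together."""
    batches = {}
    for n in context_lens:
        batches.setdefault(assign_bin(n), []).append(n)
    return batches
```

Each batch is then padded to its bin size, which is what makes the serving cost fixed per bin.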

daxfohl | 8 months ago

Makes sense. And since each model has a maximum context length, they could also just charge per token as if every request used the model's full context, pricing for the worst case.
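A sketch of that worst-case pricing idea, with an assumed model limit and per-token rate (both hypothetical numbers, just for illustration):

```python
# Hypothetical model limit and $/token rate sized for full context.
MODEL_MAX_CONTEXT = 500_000  # assumed max context, tokens
RATE_AT_MAX = 1e-5           # assumed $/token, priced as if context is full

def worst_case_price(tokens: int) -> float:
    """Flat per-token charge that assumes every request fills the max context.

    A 10k-token request pays the same per-token rate as a 500k one, so the
    provider never undercharges relative to the true per-bin serving cost.
    """
    return min(tokens, MODEL_MAX_CONTEXT) * RATE_AT_MAX
```

The tradeoff versus per-bin pricing is that short requests heavily subsidize long ones.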