sherlockxu | 7 months ago
We created this handbook to make LLM inference concepts more accessible, especially for developers building real-world LLM applications. The goal is to pull together scattered knowledge into something clear, practical, and easy to build on.
We’re continuing to improve it, so feedback is very welcome!
GitHub repo: https://github.com/bentoml/llm-inference-in-production
criemen | 7 months ago
I have a question. In https://github.com/bentoml/llm-inference-in-production/blob/..., you have a single picture that defines both TTFT and ITL. That doesn't match my understanding (though you probably know more than I do): in the graphic, it looks like the model generates four tokens, T0 to T3, before emitting a single output token.
That's the picture I'd have expected for ITL (except that the labeling of the last box would be off), but for TTFT I'd have expected only a single token T0 from the decode step, which is then immediately handed to detokenization and arrives as the first output token (assuming a streaming setup; otherwise measuring TTFT makes little sense).
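To make the distinction concrete, here is a minimal sketch of how the two metrics are typically measured client-side against a streaming endpoint. This is illustrative only (the `measure_latency` helper and the fake token stream are hypothetical, not from the handbook): TTFT is the gap from request start to the first streamed token, and ITL is the gap between each pair of consecutive tokens after that.

```python
import time

def measure_latency(token_stream):
    """Measure TTFT and per-token ITLs from a streaming token iterator.

    TTFT = time from request start to the first output token
           (covers prefill plus the first decode step).
    ITL  = gap between consecutive output tokens
           (roughly one decode step each, under streaming).
    """
    start = time.perf_counter()
    ttft = None
    itls = []
    prev = start
    for _ in token_stream:
        now = time.perf_counter()
        if ttft is None:
            ttft = now - start   # first token seen by the client
        else:
            itls.append(now - prev)  # gap since the previous token
        prev = now
    return ttft, itls

def fake_stream(n_tokens=5, step=0.01):
    """Stand-in for a streaming LLM response: one token per decode step."""
    for _ in range(n_tokens):
        time.sleep(step)
        yield "tok"

ttft, itls = measure_latency(fake_stream())
```

With a real streaming API you would iterate over the response chunks instead of `fake_stream()`; the key point is that TTFT is a single measurement per request, while ITL yields one sample per generated token after the first.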
sethherr | 7 months ago
At the very least, the sections should be a single page each.