ZsoltT|1 month ago
> we recommend using SGLang with excess tensor parallelism and EAGLE-3 speculative decoding on live edge Hopper/Blackwell GPUs accessed via low-overhead, prefix-aware HTTP proxies
lord
charles_irl|1 month ago
Sorry to lead with a bunch of jargon! Wanted to make it obvious that we'd give concrete recommendations instead of palaver.
The technical terms there are later explained and diagrammed, and the recommendations are derived from something close to first principles (e.g. roofline analysis).
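For anyone who hasn't seen roofline analysis applied to LLM serving: the back-of-envelope version fits in a few lines. The H100 spec numbers below are approximate public figures I'm plugging in as assumptions, not measurements from the article:

```python
# Rough roofline check for LLM decode on an H100-class GPU.
# Spec numbers are approximate published figures (assumptions).
PEAK_BF16_FLOPS = 989e12  # ~989 TFLOP/s dense BF16
HBM_BANDWIDTH = 3.35e12   # ~3.35 TB/s HBM3

def machine_balance(flops=PEAK_BF16_FLOPS, bw=HBM_BANDWIDTH):
    """FLOPs the GPU can execute per byte moved before compute,
    rather than memory bandwidth, becomes the bottleneck."""
    return flops / bw

def decode_intensity(batch_size):
    """Arithmetic intensity (FLOP/byte) of the weight-bound matmuls in
    decode: each 2-byte bf16 weight read contributes 2 FLOPs
    (multiply + add) per sequence, so intensity ~= batch size."""
    return float(batch_size)

def is_memory_bound(batch_size):
    return decode_intensity(batch_size) < machine_balance()
```

At batch 1 the intensity is ~1 FLOP/byte against a balance of ~295, so small-batch decode is deep in the memory-bound regime — which is the first-principles case for tricks like speculative decoding that extract more useful FLOPs per byte of weights streamed.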
rippeltippel|1 month ago
OCD-driven fix: The correct Latin quote is "Gallia est omnis divisa in partes tres".
omneity|1 month ago
Do you have benchmarks for the SGLang vs vLLM latency and throughput question? Not to challenge your point, but I’d like to reproduce these results and fiddle with the configs a bit, also on different models & hardware combos.
(happy modal user btw)
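For reproducing this kind of comparison yourself: both engines expose OpenAI-compatible HTTP servers (`vllm serve` and `python -m sglang.launch_server`, roughly — check each project's docs for current flags), so one client loop can drive either. A minimal sketch of the reduction step, once you've timed your requests:

```python
import statistics

def summarize(latencies_s, output_tokens):
    """Reduce per-request wall-clock latencies (seconds) and output token
    counts into the two numbers the SGLang-vs-vLLM question hinges on.
    Assumes a serial client; under concurrency, divide total tokens by
    the wall-clock window instead of the sum of latencies."""
    assert len(latencies_s) == len(output_tokens)
    total_time = sum(latencies_s)
    return {
        "p50_latency_s": statistics.median(latencies_s),
        "throughput_tok_s": sum(output_tokens) / total_time,
    }

# Collect the inputs by timing POSTs to each server's /v1/completions
# endpoint with identical prompts, max_tokens, and sampling settings;
# vary model and hardware from there.
```

The main pitfall when fiddling with configs is comparing at different effective batch sizes or with warmup/compilation time included, so pin the request rate and discard the first few requests before summarizing.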