dikobraz | 6 months ago
How it works: On return visits, instead of re-running the prompt through the model, we fetch previously computed KV blocks from network storage and skip recomputing those tokens (i.e., we avoid re-running prefill on repeated prefixes). This helps when VRAM can't hold every active session and users pause between messages, which is almost always the case.
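For illustration, a minimal Python sketch of the lookup side, assuming a content-hash keyed block store (the chained hashing is similar in spirit to paged-attention prefix caching; `store`, its `contains` method, and the block size are placeholders, not our actual interface):

```python
import hashlib

BLOCK_SIZE = 16  # tokens per KV block (illustrative)

def block_hashes(token_ids: list[int]) -> list[str]:
    """Chain-hash the prompt in fixed-size blocks, so each block's key
    depends on its entire prefix, not just its own tokens."""
    hashes, prev = [], b""
    n_full = len(token_ids) - len(token_ids) % BLOCK_SIZE
    for i in range(0, n_full, BLOCK_SIZE):
        chunk = ",".join(map(str, token_ids[i:i + BLOCK_SIZE])).encode()
        prev = hashlib.sha256(prev + chunk).digest()
        hashes.append(prev.hex())
    return hashes

def split_prompt(token_ids: list[int], store) -> tuple[list[str], int]:
    """Return the hashes of KV blocks already in the store, plus the
    token offset where prefill must actually start (first cache miss)."""
    hit = []
    for h in block_hashes(token_ids):
        if not store.contains(h):  # `store` = hypothetical network KV store
            break
        hit.append(h)
    return hit, len(hit) * BLOCK_SIZE
```

On a return visit the engine fetches the `hit` blocks from storage and only prefills from the returned offset onward.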
Why RTX benefits: Prefill is the computationally intensive part: attention cost grows quadratically with prompt length, plus lots of reduction ops and inter-GPU traffic. Without NVLink, PCIe becomes the choke point in multi-GPU setups. KV caching cuts repeated prefill, leaving mostly the lighter decode step, which PCIe-only RTX nodes handle well.
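A rough worked example of the gap (attention-only FLOPs for one layer; constants are approximate and the numbers are illustrative, not measured):

```python
def prefill_attn_flops(n: int, d: int = 128, h: int = 32) -> float:
    # QK^T and AV each cost ~n^2 * d multiply-adds per head
    return 4 * n**2 * d * h

def decode_attn_flops(n: int, d: int = 128, h: int = 32) -> float:
    # one new token attends over n cached positions: ~n * d per head
    return 4 * n * d * h

n = 8_000  # e.g. an 8k-token chat history on a return visit
print(f"full prefill    ~ {prefill_attn_flops(n):.2e} FLOPs")
print(f"one decode step ~ {decode_attn_flops(n):.2e} FLOPs")
# prefill costs ~n times a single decode step, so fetching the stored KV
# for the history removes the quadratic term on repeated prefixes
```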
Results & endpoint:

- 2–4× speedup on multi-turn benchmarks (RPS and TTFT) on RTX 4090.
- We've opened one free public demo endpoint (not production grade): https://console.cloudrift.ai/inference?modelId=meta-llama%2F... Ping us at hello@cloudrift.ai if you need a reliable setup.
Technical Notes:

- Works with both consumer and data-center GPUs. In theory, you can even split roles: NVLink boxes do prefill, while cheaper RTX pods act as decoders using the stored KV.
- We use special hardware to reduce fetch overhead and offload the CPU, but you can reproduce this at home with a regular NAS (at lower peak performance); see the sketch after this list.
- For a more in-depth walkthrough of the math and architecture of a KV-cache solution, see this video from the KV-cache solution vendor: https://www.youtube.com/watch?si=T69vxku8xPr6p7I0&v=CV4FYMTF...
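For the at-home version, here's a toy sketch of a filesystem-backed block store, assuming a NAS mounted at a path like /mnt/nas/kvcache (the class and method names are hypothetical, not the vendor's API):

```python
from pathlib import Path
import torch

class NasKVStore:
    """Toy KV block store on a shared filesystem (e.g. a NAS mount)."""

    def __init__(self, root: str = "/mnt/nas/kvcache"):
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def contains(self, block_hash: str) -> bool:
        return (self.root / f"{block_hash}.pt").exists()

    def put(self, block_hash: str, kv: torch.Tensor) -> None:
        # write to a temp file, then rename, so concurrent readers
        # never observe a half-written block
        tmp = self.root / f"{block_hash}.tmp"
        torch.save(kv.cpu(), tmp)
        tmp.rename(self.root / f"{block_hash}.pt")

    def get(self, block_hash: str, device: str = "cuda") -> torch.Tensor:
        return torch.load(self.root / f"{block_hash}.pt", map_location=device)
```

The write-then-rename keeps readers from seeing partial files; a real deployment would also add eviction and batched, pipelined fetches to hide NAS latency.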