zhye's comments

zhye | 2 years ago | on: Punica: Serving multiple LoRA finetuned LLM as one

It will take some effort to implement the operators, but not too much (CUTLASS's grouped GEMM already supports different M/N/K per group). However, the performance benefit is marginal compared to padding all LoRA ranks to the same rank, because none of these kernels are compute-bound.
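The padding argument can be sketched numerically: zero-padding each adapter's A and B factors up to a common maximum rank leaves the LoRA output unchanged, so one uniform batched GEMM can serve mixed-rank adapters. A minimal NumPy sketch (shapes and names are illustrative, not Punica's API):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r_max = 16, 8          # model dim, max LoRA rank across adapters
ranks = [2, 4, 8]         # three requests, each with a different-rank adapter
x = rng.standard_normal(d)

outs_exact, outs_padded = [], []
for r in ranks:
    A = rng.standard_normal((r, d))   # down-projection, rank r
    B = rng.standard_normal((d, r))   # up-projection
    outs_exact.append(B @ (A @ x))    # exact LoRA delta: B A x

    # Pad A with zero rows and B with zero columns up to r_max;
    # the extra zeros contribute nothing to the product.
    A_pad = np.zeros((r_max, d)); A_pad[:r] = A
    B_pad = np.zeros((d, r_max)); B_pad[:, :r] = B
    outs_padded.append(B_pad @ (A_pad @ x))

for e, p in zip(outs_exact, outs_padded):
    assert np.allclose(e, p)
```

The padded version does more FLOPs, but since these kernels are memory-bound rather than compute-bound, the extra multiplies by zero cost little in practice.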

zhye | 2 years ago | on: Punica: Serving multiple LoRA finetuned LLM as one

Good question. In general, implementing kernels on page tables is tricky in tensor compilers because integer set analysis can fail (though it can be fixed with some tweaks). I think using compilers like TVM can help deploy serving systems on different platforms (e.g. AMD GPUs), and I'm optimistic about this direction (though we have to make tensor compilers more user-friendly).
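The page-table issue can be illustrated with a toy paged KV cache: each logical token position maps through a page table to a physical (page, offset) slot, and it is this data-dependent indirection that defeats the affine integer-set analysis many tensor compilers rely on. A hypothetical NumPy sketch (the layout and names are illustrative only):

```python
import numpy as np

page_size = 4
num_pages, d = 8, 3
# Physical paged KV storage: (num_pages, page_size, head_dim).
kv_pages = np.arange(num_pages * page_size * d, dtype=float).reshape(
    num_pages, page_size, d)

# Page table for one sequence: logical page i lives in physical page table[i].
page_table = np.array([5, 2, 7])  # three non-contiguous physical pages
seq_len = 10                      # logical tokens 0..9 of this sequence

def gather_kv(t):
    """Fetch the KV vector for logical token t via the page table."""
    page = page_table[t // page_size]  # data-dependent index: the access
    return kv_pages[page, t % page_size]  # pattern is no longer affine in t

gathered = np.stack([gather_kv(t) for t in range(seq_len)])

# Same data viewed contiguously, for comparison.
expected = np.concatenate([kv_pages[p] for p in page_table])[:seq_len]
assert np.allclose(gathered, expected)
```

Because `page` comes from a runtime lookup rather than an affine function of the loop index, a compiler cannot statically reason about which memory the loop touches, which is roughly where integer set analysis breaks down.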