top | item 44846951

baggiponte | 6 months ago

Yeah. The docs tell you that you should build it yourself, but…

tough | 6 months ago

But unlike CUDA, there are no custom kernels for inference in the vLLM repo... I think