bildung | 1 month ago

vLLM usually only plays out its strength when serving multiple users in parallel, in contrast to llama.cpp (Ollama is a wrapper around llama.cpp).

If you want more performance, you could try running llama.cpp directly or use the prebuilt lemonade nightlies.
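To illustrate the parallel-serving point above: a rough sketch of comparing single-user vs. multi-user throughput against a local OpenAI-compatible completions endpoint (both vLLM and llama.cpp's server expose one). The endpoint URL, model name, and prompt are assumptions, not taken from the thread.

```python
import json
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

# Assumed local endpoint; vLLM defaults to port 8000, llama.cpp's
# server typically uses 8080 -- adjust to your setup.
ENDPOINT = "http://localhost:8000/v1/completions"

def make_payload(prompt, model="my-model", max_tokens=128):
    """Build an OpenAI-style completion request body."""
    return {"model": model, "prompt": prompt, "max_tokens": max_tokens}

def one_request(prompt):
    """Send one completion request, return generated token count."""
    body = json.dumps(make_payload(prompt)).encode()
    req = urllib.request.Request(
        ENDPOINT, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["usage"]["completion_tokens"]

def throughput(prompts, workers):
    """Aggregate tokens/s across all prompts with `workers` in flight."""
    start = time.time()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        tokens = sum(pool.map(one_request, prompts))
    return tokens / (time.time() - start)

# Usage (requires a running server):
#   prompts = ["Explain KV caching."] * 16
#   throughput(prompts, 1)   # one request at a time, like a single user
#   throughput(prompts, 16)  # 16 concurrent users; vLLM's continuous
#                            # batching should scale much better here
```

If the single-worker number is already far below Ollama's, the gap is likely configuration (quantization, GPU offload) rather than batching.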

sofixa | 1 month ago

But vLLM was half the t/s of Ollama, so something was obviously not ok.