
tom_0 | 1 month ago

GGML models still run via llama.cpp, and that still requires CUDA to be installed, unfortunately. I saw a PR for DirectML support, but I'm not holding my breath.


lostmsu | 1 month ago

You don't have to install the whole CUDA toolkit. NVIDIA provides redistributable runtime libraries you can ship alongside your application.
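
A minimal sketch of that idea, assuming a Windows build that bundles NVIDIA's redistributable runtime DLLs next to the executable (the exact file name, e.g. cudart64_12.dll here, depends on the CUDA version the binary was built against): the app probes for the bundled runtime at startup and falls back to CPU inference if it isn't found, so end users never need a system-wide CUDA install.

    // Sketch: probe for a bundled CUDA runtime DLL (Windows).
    // Assumption: the redistributable DLLs are copied into the same
    // directory as the .exe, which is on the default DLL search path.
    #include <windows.h>
    #include <cstdio>

    static bool cuda_runtime_available() {
        // "cudart64_12.dll" is a placeholder; match it to the CUDA
        // version your binary links against.
        HMODULE h = LoadLibraryA("cudart64_12.dll");
        if (h == nullptr) return false;
        FreeLibrary(h);
        return true;
    }

    int main() {
        if (cuda_runtime_available()) {
            std::printf("CUDA runtime found: enabling GPU offload\n");
            // e.g. request GPU layers from llama.cpp here
        } else {
            std::printf("No CUDA runtime: falling back to CPU inference\n");
        }
        return 0;
    }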

tom_0 | 1 month ago

Oh, I can't believe I missed that! That makes whisper.cpp and llama.cpp valid options if the user has an Nvidia GPU, thanks.