top | item 45358093


te0006 | 5 months ago

Interesting - do you need to take any special measures to get OSS genAI models working on this architecture? Can you use inference engines like Ollama and vLLM off the shelf (as Docker containers) with just the Radeon 8060S GPU? What token rates do you achieve?

(edit: corrected mistake w.r.t. the system's GPU)



buyucu | 5 months ago

I just use llama.cpp. It worked out of the box.
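For reference, a minimal sketch of what "out of the box" can look like: building llama.cpp with its Vulkan backend (which runs on AMD iGPUs like the 8060S without a ROCm-specific stack) and loading a model. The model path is a placeholder, and having working Vulkan drivers is assumed:

```shell
# Build llama.cpp with the Vulkan backend (assumes Vulkan drivers and headers are installed)
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build -DGGML_VULKAN=ON
cmake --build build --config Release -j

# Run a GGUF model; -ngl 99 offloads all layers to the GPU
./build/bin/llama-cli -m /path/to/model.gguf -ngl 99 -p "Hello"
```

A ROCm build (`-DGGML_HIP=ON`) is an alternative on supported hardware, but the Vulkan path tends to need the least setup on consumer Radeon parts.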