item 41723188


khurdula | 1 year ago

Are we supposed to use AMD GPUs for this to work, or does it work on any GPU?


karamanolev | 1 year ago

> This project provides a Docker-based inference engine for running Large Language Models (LLMs) on AMD GPUs.

That's the first sentence of the README in the repo. Was it somehow unclear?