Zetaphor | 10 days ago
https://github.com/kyuz0/amd-strix-halo-toolboxes
It takes all the work out of it: you just start llama-server in the container context and you're off doing inference without having to figure out dependencies.
androiddrew | 9 days ago