top | item 45573115

sieve | 4 months ago

If you want to train/sample large models, then use what the rest of the industry uses.

My use case is different. I want something that I can run quickly on one GPU without worrying about whether it is supported or not.

I am interested in convenience, not in squeezing out the last bit of performance from a card.

danielmarkbruce | 4 months ago

You wildly misunderstand PyTorch.

sieve | 4 months ago

What is there to misunderstand? It doesn't even install properly most of the time on my machine. You have to use a specific Python version.

I gave up on all tools that depend on it for inference. llama-cpp compiles cleanly on my system for Vulkan. I want the same simplicity when testing model training.
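The "specific Python version" complaint comes from the fact that PyTorch only publishes wheels for a range of CPython versions, so an unsupported interpreter fails at install time. A minimal pre-flight check could catch this before attempting an install; the supported range below is an illustrative assumption, not the actual wheel matrix:

```python
# Hypothetical pre-flight check: verify the interpreter version before
# attempting a PyTorch install. The supported minor-version range here
# is an assumption for illustration, not the real published wheel matrix.
import sys

SUPPORTED_MINORS = range(9, 13)  # assume wheels for Python 3.9 .. 3.12

def wheel_tag_ok(version_info=sys.version_info):
    """Return True if this interpreter falls in the assumed supported range."""
    major, minor = version_info[:2]
    return major == 3 and minor in SUPPORTED_MINORS

if __name__ == "__main__":
    if not wheel_tag_ok():
        print(f"Python {sys.version_info[0]}.{sys.version_info[1]} "
              "likely has no matching wheel; use a pinned venv instead.")
```

Running this inside a venv pinned to a known-good interpreter (e.g. `python3.11 -m venv .venv`) sidesteps the mismatch entirely.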

nl | 4 months ago

I suspect the OP's issues might be mostly related to the ROCm version of PyTorch. AMD still can't get this right.