brucethemoose2 | 1 year ago
I can certainly appreciate frustration with the AMD stack, but to be blunt, I was not impressed with Hotz's YouTube rant from before.[1] It didn't give the impression of a stable framework, and neither does this.
Also (at least from the end-user LLM inference side of things), ROCm is not nearly as unusable as it used to be. We would certainly be renting MI300s over A100s (or even H100s) if we could get any, and we use a number of different inference backends.
brucethemoose2 | 1 year ago
There are some boutique hosts like Hot Aisle serving MI300s (whom I really should reach out to), but for the immediate future our little startup is stuck with the big cloud providers. No MI300s for us mere mortals, not even to rent.