cmrdporcupine | 3 days ago
I have an NVIDIA Spark machine, and NVIDIA has a whole team building software, Docker images, etc. to make these machines relatively easy to use for LLM research and for running local models. It makes sense to pull people in like that: keeping the software up to date keeps the market captured on their hardware.
Even more so for AMD, which is years behind on the software side; it failed to do this in the early days and lost out for it.
The DeepSeek 3 series models were quite popular, and quite capable; the new ones will likely be as well, and many people will want to host and run them. By making them initially run better on Huawei hardware than on NVIDIA, DeepSeek encourages API hosting providers to buy Huawei hardware and to get tools like vLLM and llama.cpp working better on it.