Nvidia just did what Intel and AMD should have done to threaten the CUDA ecosystem: release a "cheap" 128GB local inference appliance/GPU. Well done, Nvidia. It looks bleak for any future AI efforts from Intel or AMD.
I think you nailed it. Any basic SWOT analysis of Nvidia's position would surely have to consider something like this coming from a competitor: either Apple, which is already nibbling around the edges of this space, or AMD/Intel, who could (and arguably should) be.
It’s obviously not guaranteed to go this route, but an LLM (or similar) on every desk and in every home is a plausible vision of the future.