PeterisP|1 year ago

I really hope we see an AI-PU (or under some other name; INT16PU, why not?) for the consumer market sometime soon. Or being able to expand GPU memory using a PCIe socket (not sure whether that's technically possible).

throwaway4aday|1 year ago

My uninformed question about this is: why can't we make the VRAM on GPUs expandable? I know you want to avoid having the data traverse a bus like PCIe that trades overhead for wide compatibility, but if it's only being used as more RAM, can't you just add sockets whose traces run directly to where they're needed? Even if it were only compatible with a specific type of chip, it would seem worthwhile for the customer to buy a base GPU and add on however much VRAM they need. I've heard of people replacing the existing RAM chips on their GPUs [0], so why can't this be built in as a socket, the way motherboards do it for RAM and CPUs?

[0] https://www.tomshardware.com/news/16gb-rtx-3070-mod

unknown|1 year ago

[deleted]

rdsubhas|1 year ago

https://en.m.wikipedia.org/wiki/AI_accelerator

hhsectech|1 year ago
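The PCIe-attached-memory idea in the first comment can be sanity-checked with rough arithmetic: a memory module on the far side of a PCIe slot is capped at the link's bandwidth, which is more than an order of magnitude below what soldered GDDR6 delivers. A minimal sketch, using approximate publicly quoted peak figures (an RTX 3070-class card is assumed for the VRAM numbers; none of this is measured):

```python
# Back-of-the-envelope comparison: on-card VRAM bandwidth vs. a PCIe link.
# All figures are approximate peak numbers, not measurements.

def pcie_bandwidth_gbs(version: float, lanes: int) -> float:
    """Peak one-direction PCIe bandwidth in GB/s for a given version and lane count."""
    # Approximate per-lane throughput in GB/s, after encoding overhead.
    per_lane = {3.0: 0.985, 4.0: 1.969, 5.0: 3.938}
    return per_lane[version] * lanes

# RTX 3070-class card (assumption): 256-bit bus, GDDR6 at 14 Gbps per pin.
bus_width_bits = 256
data_rate_gbps_per_pin = 14
vram_bandwidth = bus_width_bits * data_rate_gbps_per_pin / 8  # -> 448 GB/s

pcie4_x16 = pcie_bandwidth_gbs(4.0, 16)  # ~31.5 GB/s

print(f"On-card GDDR6 bandwidth: ~{vram_bandwidth:.0f} GB/s")
print(f"PCIe 4.0 x16 bandwidth:  ~{pcie4_x16:.1f} GB/s")
print(f"Ratio: ~{vram_bandwidth / pcie4_x16:.0f}x")
```

By this estimate the GPU would see slot-attached memory at roughly 1/14th the speed of its soldered VRAM, which helps explain why modders resolder higher-density chips (as in the linked 3070 mod) rather than anyone shipping an expansion socket.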