punnerud | 8 days ago

Could we all get bigger FPGAs and load the model onto them using the same technique?

fercircularbuf | 8 days ago

I thought about this exact question yesterday. I'm curious why we couldn't, if it isn't feasible. It would let you upgrade to the next model without fabricating all-new hardware.

wmf | 8 days ago

FPGAs have really low density, so that would be ridiculously inefficient, probably requiring ~100 FPGAs to hold the model. You'd be better off with Groq.
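
A rough back-of-envelope sketch of that density argument, in Python. Every figure here is an illustrative assumption rather than a datasheet value (model size, on-chip SRAM per device, HBM capacity), so treat the device counts as order-of-magnitude only:

    # Back-of-envelope: how many FPGAs does it take to hold the weights?
    # Every number below is an assumption for illustration only.

    model_params = 70e9        # hypothetical 70B-parameter model
    bytes_per_param = 1        # assume 8-bit quantized weights
    model_bytes = model_params * bytes_per_param  # ~70 GB

    # Groq-style approach: pin all weights in on-chip SRAM.
    sram_per_fpga = 50e6       # assume ~50 MB BRAM/URAM on a large FPGA
    print(model_bytes / sram_per_fpga)  # ~1400 devices

    # HBM-attached FPGA: keep weights in stacked DRAM instead.
    hbm_per_fpga = 16e9        # assume 16 GB of HBM per device
    print(model_bytes / hbm_per_fpga)   # ~4-5 devices

The gap between those two numbers is the whole argument: holding weights in on-chip SRAM needs orders of magnitude more devices than attaching DRAM does.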

menaerus | 8 days ago

Not sure what you're on, but I think that's incorrect. You can use a high-density HBM-enabled FPGA with (LP)DDR5 and a sufficient number of logic elements to implement the inference. The reason we don't see it in action is most likely that such FPGAs are insanely expensive and not as readily available off the shelf as GPUs are.
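
For the memory-bound side of this, a quick sketch: batch-1 decode streams all the weights through the memory interface once per generated token, so tokens/s is roughly bandwidth divided by model size. The bandwidth and model-size figures below are assumptions, not measurements of any specific part:

    # Rough decode throughput estimate: each generated token reads
    # every weight once, so tokens/s ~= memory bandwidth / model size.
    # All figures are assumptions for illustration.

    model_bytes = 35e9     # hypothetical 70B model at 4-bit
    hbm_bw = 460e9         # assumed HBM2 bandwidth, bytes/s
    lpddr5_bw = 50e9       # assumed LPDDR5 bandwidth, bytes/s

    print(hbm_bw / model_bytes)     # ~13 tokens/s from HBM
    print(lpddr5_bw / model_bytes)  # ~1.4 tokens/s from LPDDR5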

sowbug | 7 days ago

FPGAs aren't very power-efficient. You could do it, but the numbers wouldn't add up for anything but prototyping.