
moralestapia | 7 days ago

>HOW NVIDIA GPUs process stuff? (Inefficiency 101)

Wow. Massively ignorant take. A modern GPU is an amazing feat of engineering, particularly in making computation more efficient (low power, high throughput).

Then proceeds to explain, wrongly, how inference is supposedly implemented and draws conclusions from there ...


beAroundHere | 7 days ago

Hey, can you please point out the inaccuracies in the article?

I wrote this post to give a higher-level understanding of traditional inference vs Taalas's, so it does abstract away a lot of things.

wmf | 7 days ago

Arguably DRAM-based GPUs/TPUs are quite inefficient for inference compared to SRAM-based Groq/Cerebras. GPUs are highly optimized but they still lose to different architectures that are better suited for inference.
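A rough way to see the DRAM-vs-SRAM argument: batch-1 LLM decoding has to stream every weight once per token, so tokens/sec is bounded by memory bandwidth divided by model size. A minimal back-of-envelope sketch; the model size and both bandwidth figures below are illustrative assumptions, not vendor specs:

```python
# Batch-1 decoding is memory-bandwidth bound: every weight is read once
# per generated token, so tokens/sec <= bandwidth / model_bytes.
# All numbers below are illustrative assumptions, not measured specs.

def tokens_per_sec(params_billions, bytes_per_param, bandwidth_tb_s):
    """Upper bound on batch-1 decode speed from memory bandwidth alone."""
    model_bytes = params_billions * 1e9 * bytes_per_param
    return bandwidth_tb_s * 1e12 / model_bytes

# Hypothetical 70B-parameter model in fp16 (2 bytes/param).
hbm_bound = tokens_per_sec(70, 2, 3.0)    # assumed ~3 TB/s HBM-class part
sram_bound = tokens_per_sec(70, 2, 30.0)  # assumed 10x on-chip SRAM bandwidth

print(f"HBM-class bound:  ~{hbm_bound:.0f} tokens/s at batch 1")
print(f"SRAM-class bound: ~{sram_bound:.0f} tokens/s at batch 1")
```

The point is only the ratio: if SRAM delivers an order of magnitude more bandwidth, the memory-bound ceiling rises by the same factor, regardless of how optimized the compute units are.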

imtringued | 7 days ago

Modern Nvidia GPUs perform inference with a dedicated unit (the Tensor Memory Accelerator) that handles tensor memory operations directly, which implicitly concedes that GPGPU as a paradigm is too inefficient for matrix multiplication.
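One way to see why dedicated data-movement hardware pays off for matmul: a square GEMM does O(n^3) math on O(n^2) data, so its arithmetic intensity grows with n, and keeping the math units fed becomes a data-movement problem rather than a compute problem. A minimal sketch of that ratio, assuming the ideal case where each matrix crosses the memory bus exactly once (real kernels move more due to tiling):

```python
# Arithmetic intensity of a square fp16 GEMM, C = A @ B with n x n matrices.
# FLOPs = 2*n^3 (one multiply + one add per inner-product term).
# Ideal traffic = 3 matrices * n^2 elements * 2 bytes, assuming each matrix
# crosses the memory bus exactly once (an optimistic assumption).

def gemm_arithmetic_intensity(n, bytes_per_elem=2):
    """FLOPs per byte of (ideal) memory traffic for an n x n GEMM."""
    flops = 2 * n**3
    bytes_moved = 3 * n * n * bytes_per_elem
    return flops / bytes_moved  # simplifies to n / (1.5 * bytes_per_elem)

for n in (128, 1024, 4096):
    print(f"n={n:5d}: ~{gemm_arithmetic_intensity(n):.0f} FLOPs/byte")
```

Intensity scales linearly with n, so at large tile sizes the cores doing the math would waste most of their cycles generating addresses and issuing loads; offloading that to a separate engine is the natural response.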