top | item 44529423

calaphos | 7 months ago

Inference throughput scales really well with larger batch sizes (at the cost of latency): arithmetic intensity rises with batch size, and inference is almost always memory-bandwidth limited.
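A back-of-the-envelope sketch of why (my own illustration, not from the comment): for a batched matmul against a weight matrix, weight reads dominate memory traffic at small batch sizes, so arithmetic intensity (FLOPs per byte moved) grows roughly linearly with batch size until the kernel becomes compute-bound. The shapes and fp16 assumption below are illustrative.

```python
def arithmetic_intensity(batch: int, d: int = 4096, bytes_per_param: int = 2) -> float:
    """FLOPs per byte for y = x @ W, with x of shape (batch, d) and W of shape (d, d).

    Assumes fp16 weights and that weight reads dominate memory traffic,
    which holds when batch << d.
    """
    flops = 2 * batch * d * d              # one multiply + one add per weight, per row
    bytes_moved = d * d * bytes_per_param  # reading W once dominates at small batch
    return flops / bytes_moved

for b in (1, 8, 64):
    print(f"batch={b:3d}  intensity={arithmetic_intensity(b):.1f} FLOPs/byte")
```

With these simplifications the intensity comes out to exactly the batch size, which is the point: a batch-1 decode step re-reads the whole weight matrix for very little compute, while batching amortizes that bandwidth across many requests.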
