varunshenoy | 2 years ago

Good question. Yes, the 10GB available for batching is in the HBM. In a single forward pass, you move the entire model from HBM -> SRAM exactly once. In a batched forward pass, this is still the case, so you end up doing more compute for the same amount of memory movement.
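The claim above ("more compute for the same amount of memory movement") can be sketched with back-of-envelope arithmetic intensity. This is a rough illustration, not from the comment: it assumes ~2 FLOPs per parameter per token and that weights are read from HBM once per forward pass regardless of batch size.

```python
def flops_per_byte(n_params: int, batch_size: int, bytes_per_param: int = 2) -> float:
    """Approximate arithmetic intensity of a batched forward pass.

    Assumes ~2 FLOPs per parameter per token (one multiply + one add)
    and that the weights are streamed HBM -> SRAM exactly once,
    so bytes moved stay fixed while FLOPs scale with batch size.
    """
    flops = 2 * n_params * batch_size
    bytes_moved = n_params * bytes_per_param
    return flops / bytes_moved

# Hypothetical example: a 7B-parameter model in fp16 (2 bytes/param)
for b in (1, 8, 64):
    print(b, flops_per_byte(7_000_000_000, b))  # intensity grows linearly with batch size
```

With 2-byte weights the intensity works out to exactly the batch size, which is the point: batching raises FLOPs per byte of HBM traffic until you become compute-bound instead of memory-bound.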

You can calculate the total SRAM as follows: an A100 has 108 SMs, and each SM has 192 KB of SRAM (its combined shared memory/L1 cache) [1]. Multiplied out, that's ~20 MB of total SRAM, which matches the diagram in the Flash Attention paper [2].

[1] https://developer.nvidia.com/blog/cuda-refresher-cuda-progra...

[2] https://arxiv.org/pdf/2205.14135.pdf
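The multiplication above, written out (numbers taken directly from the comment):

```python
# A100 SRAM back-of-envelope, per the comment:
sms = 108         # streaming multiprocessors on an A100
kb_per_sm = 192   # KB of shared memory / L1 per SM

total_kb = sms * kb_per_sm
total_mb = total_kb / 1024
print(total_kb, "KB =", total_mb, "MB")  # 20736 KB = 20.25 MB, i.e. the "~20 MB"
```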
