
Why is the storage cluster used to train Llama 3 so slow?

2 points | 1a1a11a | 1 year ago | scontent.fagc1-2.fna.fbcdn.net

1 comment


1a1a11a | 1 year ago

"Tectonic (Pan et al., 2021), Meta’s general-purpose distributed file system, is used to build a storage fabric (Battey and Gupta, 2024) for Llama 3 pre-training. It offers 240 PB of storage out of 7,500 servers equipped with SSDs, and supports a sustainable throughput of 2 TB/s and a peak throughput of 7 TB/s"

I would expect at least tens, if not hundreds, of TB/s from a cluster of 7,500 servers with SSDs. What is the bottleneck? A rough per-server split of the quoted figures is sketched below.
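
To put the question in numbers, here is a quick back-of-envelope split of the quoted cluster figures across the 7,500 servers (a rough sketch, assuming throughput is spread evenly and ignoring replication and network overhead):

    # Divide the quoted cluster-level figures across the servers.
    # All values are assumptions taken from the quote above.
    servers = 7_500
    capacity_pb = 240       # total storage, PB
    sustained_tb_s = 2      # sustained throughput, TB/s
    peak_tb_s = 7           # peak throughput, TB/s

    per_server_capacity_tb = capacity_pb * 1_000 / servers       # ~32 TB/server
    per_server_sustained_mb_s = sustained_tb_s * 1e6 / servers   # ~267 MB/s/server
    per_server_peak_mb_s = peak_tb_s * 1e6 / servers             # ~933 MB/s/server

    print(f"capacity per server:  {per_server_capacity_tb:.0f} TB")
    print(f"sustained per server: {per_server_sustained_mb_s:.0f} MB/s")
    print(f"peak per server:      {per_server_peak_mb_s:.0f} MB/s")

That works out to roughly 270 MB/s sustained per server, well below the multiple GB/s a single modern NVMe SSD can deliver, which is presumably the gap the question is pointing at.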