> The Nova family of models were trained on Amazon's custom Trainium1 (TRN1) chips, NVIDIA A100 (P4d instances), and H100 (P5 instances) accelerators. Working with AWS SageMaker, we stood up NVIDIA GPU and TRN1 clusters and ran parallel trainings to ensure model performance parity

Does this mean they trained multiple copies of the models?
glomgril | 1 year ago
Part of it could also be that they'd prefer to move all operations to the in-house TRN chips, but don't have full confidence in the hardware yet.
Def ambiguous, though. In general, reporting of infra characteristics for LLM training is left pretty vague in most reports I've seen.