Ask HN: Versal HBM Series vs. Nvidia A100 for LLM Training?

2 points | manili | 2 years ago

Hello all, I'm curious how feasible it would be to use a cluster of Versal HBM Series devices (e.g., the VHK158 evaluation board) for training an LLM like LLaMA-2 70B, in terms of performance/power/cost. Are there any papers comparing a cluster of VHK158 evaluation boards against a cluster of, say, A100s?

Thanks.

2 comments

brucethemoose2|2 years ago

Well, this post is the second result for even running llama on Versal, so...

I think MLC-LLM (through TVM) can maybe run inference?

manili|2 years ago

Thanks a lot, @brucethemoose2. Is there any evidence or research (e.g., papers, case studies) regarding the possibility/impossibility of training or running this architecture on top of that infrastructure?