top | item 45882771


kennethallen | 3 months ago

Running LLMs will be slow and training them is basically out of the question. You can get a Framework Desktop with similar bandwidth for less than a third of the price of this thing (though that isn't NVIDIA).


embedding-shape | 3 months ago

> Running LLMs will be slow and training them is basically out of the question

I think it's the reverse: the use case for these boxes is basically training and fine-tuning, not inference.

kennethallen | 3 months ago

The use case for these boxes is a local NVIDIA development platform before you do your actual training run on your A100 cluster.