Re DGX, I’m mostly interested in local inference; it might have been nice to try, but it was more expensive for similar performance (or so I believe).
I run lots of different experiments. Synthetic data generation along the lines of Magpie is one of the things I wanted a local machine for, as well as general access to a decent-sized LLM to try different things without having to spin up a cloud machine each time.
I would prefer PyTorch / HF transformers to llama.cpp, as I find the latter less flexible if I want to change anything.
andy99|1 month ago
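For context, the Magpie-style generation mentioned above works by feeding an instruction-tuned model only its chat template up to the start of the user turn, so the model invents a plausible user query itself; a second pass then answers that query to produce an (instruction, response) pair. A minimal sketch of the prompt-building step, assuming a Llama-3-style template (the model name in the comments is just an example):

```python
# Magpie-style prefix construction: truncate the chat template right where
# the user's text would begin, so sampling from this point makes the model
# generate a synthetic user instruction on its own.

def magpie_prefix(system_prompt: str) -> str:
    # Llama-3-style special tokens; other model families use different markers.
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_prompt}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
    )

# With HF transformers the generation pass would look roughly like (needs
# model weights, so shown as a comment only):
#
#   from transformers import AutoModelForCausalLM, AutoTokenizer
#   tok = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
#   model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
#   inputs = tok(magpie_prefix("You are a helpful assistant."), return_tensors="pt")
#   out = model.generate(**inputs, do_sample=True, max_new_tokens=128)
#   # the sampled continuation is the synthetic user query
```

This kind of template surgery is exactly where raw transformers is easier than llama.cpp: you control the prompt string directly rather than going through a server's chat-completion layer.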