item 39893974

aliasaria | 1 year ago

If you're able to purchase a separate GPU, the most popular option is an NVIDIA RTX 3090 or RTX 4090.

Apple's M2 and M3 Macs are becoming a viable option because of MLX https://github.com/ml-explore/mlx . If you're getting an M-series Mac for LLMs, I'd recommend something with 24GB or more of unified memory.
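A rough back-of-envelope sketch of why 24GB+ matters: model weights alone dominate memory use, and their footprint scales with parameter count and quantization level. The function name and numbers below are illustrative assumptions, not measurements, and the estimate ignores KV cache and runtime overhead (which is why you want headroom beyond the weight size).

```python
# Back-of-envelope memory estimate for running a quantized LLM locally.
# Illustrative only: ignores KV cache, activations, and runtime overhead.

def weight_memory_gb(n_params: float, bits_per_weight: int) -> float:
    """Approximate memory for the model weights alone, in GB."""
    return n_params * bits_per_weight / 8 / 1e9

for n_params, label in [(7e9, "7B"), (13e9, "13B"), (70e9, "70B")]:
    print(f"{label} @ 4-bit: ~{weight_memory_gb(n_params, 4):.1f} GB")
```

By this estimate a 7B model at 4-bit needs about 3.5 GB for weights, a 70B model about 35 GB, so the larger models only fit on Macs with plenty of unified memory.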


ein0p | 1 year ago

You don’t need MLX for this. Ollama, which is based on llama.cpp, is GPU-accelerated on a Mac, and it performs particularly well on quantized models. MLX is more useful for things like fine-tuning, where it’s a bit faster than PyTorch.
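For anyone curious what "using Ollama" looks like programmatically: besides the CLI, Ollama exposes a local HTTP API (port 11434 by default). A minimal sketch of a non-streaming generate request is below; the model name `llama3` and prompt are illustrative assumptions, and the actual network call is commented out since it requires a running server.

```python
import json
import urllib.request

# Ollama's local HTTP API listens on port 11434 by default.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming /api/generate request (model name is illustrative)."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request("llama3", "Why is the sky blue?")
# With an Ollama server running locally, this would return a JSON body
# whose "response" field holds the generated text:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```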