For anyone else trying to run this on a Mac with 32GB unified RAM, this is what worked for me.
First, make sure enough memory is allocated to the GPU:
sudo sysctl -w iogpu.wired_limit_mb=24000
Then run llama.cpp, but reduce RAM needs by limiting the context window and turning off vision support. (And turn off reasoning for now, as it's not needed for simple queries.)
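Concretely, that can look something like the sketch below. The model path is made up, and flag names vary across llama.cpp builds (`--no-mmproj` and `--reasoning-budget` exist in recent ones), so verify everything against `llama-server --help` for your version:

```shell
# Sketch only -- paths are hypothetical, flags may differ in your build:
#   -c 8192              smaller context window -> smaller KV cache -> less RAM
#   -ngl 99              offload all layers to the GPU
#   --no-mmproj          don't load the vision projector (vision off)
#   --jinja              use the model's Jinja chat template
#   --reasoning-budget 0 disable thinking entirely (recent builds)
llama-server \
  -m ./models/my-model.gguf \
  -c 8192 \
  -ngl 99 \
  --no-mmproj \
  --jinja \
  --reasoning-budget 0
```

With a template that supports it (e.g. Qwen3-style models on recent builds), thinking can also be toggled per request by passing `"chat_template_kwargs": {"enable_thinking": false}` in the chat-completions JSON body, but treat that as model- and version-dependent.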
rahimnathwani|1 day ago
You can also enable/disable thinking on a per-request basis.
If anyone has any better suggestions, please comment :)

suprjami|12 hours ago
Many user benchmarks report up to 30% better memory usage and up to 50% higher token-generation speed with the MLX backend:
https://reddit.com/r/LocalLLaMA/comments/1fz6z79/lm_studio_s...
As the post says, LM Studio has an MLX backend which makes it easy to use.
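If you'd rather try MLX outside LM Studio, the mlx-lm package exposes a simple CLI. A minimal sketch (the model name is just an example; any mlx-community conversion should work, and the exact command form has shifted between mlx-lm versions, so check its docs):

```shell
# Install the MLX LLM tooling (Apple Silicon only).
pip install mlx-lm

# Download (on first use) and run a quantized MLX model.
# Model name is an assumption; substitute any mlx-community repo.
mlx_lm.generate \
  --model mlx-community/Qwen3-4B-4bit \
  --prompt "Explain unified memory in one sentence." \
  --max-tokens 128
```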
If you still want to stick with llama-server and GGUF, look at llama-swap, which lets you run a single frontend that exposes a list of models and dynamically starts a llama-server process with the right one:
https://github.com/mostlygeek/llama-swap
(Actually, you could run any OpenAI-compatible server process with llama-swap.)
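For reference, a minimal llama-swap setup looks roughly like this. Field names are from llama-swap's README as I remember them, and the model paths are made up, so double-check against the repo before relying on it:

```shell
# Sketch: each entry under `models:` maps a model name to the command
# llama-swap should launch on demand; ${PORT} is substituted by llama-swap.
cat > config.yaml <<'EOF'
models:
  "qwen-small":
    cmd: llama-server --port ${PORT} -m /models/qwen-small.gguf -ngl 99
  "qwen-big":
    cmd: llama-server --port ${PORT} -m /models/qwen-big.gguf -ngl 99
EOF

# Serve one OpenAI-compatible endpoint; requests for "qwen-small" or
# "qwen-big" spawn the matching llama-server and swap between them.
llama-swap --config config.yaml --listen :8080
```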