tygra | 6 months ago
What software do you use to run LLMs locally?
I tried Ollama but found requests a bit slow, since it doesn't seem to fully utilize my hardware. LM Studio has been better for me; it uses the upstream GGML implementation, which feels more optimized. My problem with LM Studio is that they've added so many options and features lately that getting it dialed in takes a lot of configuration.
What's your favorite open model for writing and coding these days?
jonahbenton | 6 months ago
I have used Ollama, LM Studio, Jan, and vLLM at different times, and I'm readying for a wholesale transition to llama.cpp.
tygra | 6 months ago
I run Qwen3 locally for coding and writing. It’s a solid model.
Framework Desktop is becoming really popular. The one with the Max+ 395 and 128GB of RAM is an absolute beast. I might buy a Beelink GTR9 Pro (Max+ 395 with 128GB RAM), which costs around $2,000.
llama.cpp is the real deal. I’m using it as the engine for the product I’m building right now (https://tygra.ai/).
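For anyone curious what using llama.cpp directly looks like: its bundled `llama-server` binary serves a GGUF model over an OpenAI-compatible HTTP API. A minimal sketch (the model filename and port here are placeholders; point `-m` at whatever GGUF file you actually have):

```shell
# Serve a local GGUF model with llama.cpp's built-in HTTP server.
# ./qwen3-8b-q4_k_m.gguf is a placeholder path, not a download it performs.
llama-server -m ./qwen3-8b-q4_k_m.gguf --port 8080

# From another shell, hit the OpenAI-compatible chat endpoint:
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Write a haiku about local LLMs."}]}'
```

Since the endpoint speaks the OpenAI chat-completions format, most existing client libraries can point at it with just a base-URL change.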