EagnaIonat | 25 days ago

https://ollama.com

Although I'm starting to like LM Studio more, as it has features that Ollama is missing.

https://lmstudio.ai

You can then get Claude to create an MCP server that talks to either one, plus a CLAUDE.md that tells it to read the models you have downloaded, work out what each one is good for, and decide when to offload to them. Claude will write all of that for you as well; a rough sketch of what such a server might look like is below.
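For what it's worth, here is a minimal sketch of that kind of bridge, assuming the official Python MCP SDK (`pip install mcp`) and Ollama's default REST API on localhost:11434 (`/api/tags` to list downloaded models, `/api/generate` for one-shot completions). The server name and tool names (`list_models`, `generate`) are just illustrative; LM Studio exposes an OpenAI-compatible API instead, so a bridge for it would look a bit different.

```python
# Sketch: MCP server exposing a local Ollama instance as two tools.
# Assumes: official Python MCP SDK ("mcp" package) and Ollama running
# with its default REST API at http://localhost:11434.
import json
import urllib.request

from mcp.server.fastmcp import FastMCP

OLLAMA = "http://localhost:11434"

mcp = FastMCP("local-models")


def _get(path: str) -> dict:
    """GET a JSON endpoint from the local Ollama server."""
    with urllib.request.urlopen(f"{OLLAMA}{path}") as resp:
        return json.load(resp)


def _post(path: str, payload: dict) -> dict:
    """POST a JSON payload to the local Ollama server and parse the reply."""
    req = urllib.request.Request(
        f"{OLLAMA}{path}",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


@mcp.tool()
def list_models() -> list[str]:
    """Names of the models currently downloaded in Ollama."""
    return [m["name"] for m in _get("/api/tags")["models"]]


@mcp.tool()
def generate(model: str, prompt: str) -> str:
    """One-shot, non-streaming generation against a named local model."""
    result = _post(
        "/api/generate",
        {"model": model, "prompt": prompt, "stream": False},
    )
    return result["response"]


if __name__ == "__main__":
    # stdio transport, which is what Claude's MCP client expects by default
    mcp.run()
```

You'd then register the script in your Claude MCP config, and the CLAUDE.md can describe which downloaded model is suited to which kind of task so Claude knows when to offload to it.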


shen | 24 days ago

Which local models are you using on the 32GB MacBooks?

EagnaIonat | 24 days ago

Mainly gpt-oss-20b, as its thinking mode is really good. I occasionally use granite4, which is a very fast model. But any model around 4GB should run without trouble.