giancarlostoro | 1 day ago
Also, because Apple, in their infinite wisdom, gives you a fan but turns it on very lazily (I swear it has to hit 100 °C before it kicks in) and gives you zero control over fan settings, you may want to snag something like TG Pro for the Mac. I wound up buying a license for it; it lets you define the temperature at which your fans spin up and even gives you manual control.
On my 24 GB RAM MacBook Pro I have about 16 GB available for inference. I use Zed with LM Studio as the back-end. I primarily just use Claude Code, but as you note, I'm sure a beefier Mac with more RAM could handle way more.
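For anyone wanting to reproduce the Zed + LM Studio setup: Zed can point its LM Studio provider at the local server in settings.json. The keys below are from Zed's docs as I remember them and may have changed, so treat this as a sketch and verify against current Zed documentation; the URL is LM Studio's default local port.

```json
{
  "language_models": {
    "lmstudio": {
      "api_url": "http://localhost:1234/api/v0"
    }
  }
}
```

With LM Studio's server running, the loaded models should then show up in Zed's model picker.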
There are a few interesting models on the Mac with LM Studio that support tool calling, so they can read your local files, write, and such:
mistralai/mistralai-3-3b: 4.49 GB, so I can increase its context window. Not sure if it auto-compacts or not; I've only just started testing it.
zai-org/glm-4.6v-flash: 7.09 GB, same thing, only just started testing it.
mistralai/mistral-3-14b-reasoning: 15.2 GB, just shy of my max, so not a ton of wiggle room, but usable.
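The tool calling these models do goes over LM Studio's OpenAI-compatible local server (http://localhost:1234/v1 by default). A minimal sketch of what a request looks like, assuming a hypothetical `read_file` tool that you would implement yourself on the client side; the model name is just one of the models listed above:

```python
import json
import urllib.request

def build_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat payload that advertises one local tool."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "tools": [
            {
                "type": "function",
                "function": {
                    # Hypothetical tool: you implement it locally and the
                    # model decides when to call it.
                    "name": "read_file",
                    "description": "Read a local text file and return its contents",
                    "parameters": {
                        "type": "object",
                        "properties": {
                            "path": {
                                "type": "string",
                                "description": "Path of the file to read",
                            }
                        },
                        "required": ["path"],
                    },
                },
            }
        ],
    }

def send(payload: dict, base: str = "http://localhost:1234/v1") -> dict:
    """POST to the local server; only works while LM Studio is serving."""
    req = urllib.request.Request(
        base + "/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_request("zai-org/glm-4.6v-flash", "Summarize notes.txt")
print(payload["tools"][0]["function"]["name"])  # prints "read_file"
```

If the model decides to use the tool, the response comes back with a `tool_calls` entry instead of plain text, and you loop: run the tool, append the result as a `tool` message, and call the server again.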
If you're Apple, or a company that builds things for Macs and other devices, please build something to help with airflow / cooling for the MBP and Mac Mini. It feels ridiculous that it becomes a 100 °C device, and I'm not so sure that's great for device health if you want to run inference for longer than the norm.
I will probably buy a new Mac once inference speeds increase dramatically enough. I sure hope Apple is seriously considering options for increasing inference speed.
gambiting | 1 day ago
I have a base model M4 Mac Mini and it absolutely does have a fan inside it.