OpenClaw is hyped for running local/private LLMs and controlling your data, but a lot of people don't realize the difference between
(1) running local open-source LLMs, and
(2) making API calls to cloud LLMs.
The vast majority will do #2. To your point, a Raspberry Pi is sufficient.
For the former, you still need a lot of RAM (32GB+ for larger models), so most minis are underpowered despite having unified memory and higher efficiency.
If you're running local models, Apple Silicon's unified memory architecture makes them much better at it than other similarly specced platforms.
If you want your "skills" to include sending iMessage (quite important in the USA), then you need a Mac of some kind.
If you don't care about iMessage and you're just doing API calls for the inference, then it's good old Mass Abundance. Nice excuse to get that cool little Mini you've been wanting.
Mac minis are particularly well suited to running AI models because they can have a fairly large amount of RAM (up to 64GB) addressable by the GPU, at a reasonable price compared to Nvidia offerings. Mac minis have unified memory, which means it can be split between CPU and GPU in a configurable way. I don't think Apple priced Mac minis with AI workloads in mind, so they end up being good value.
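As a rough illustration of how that configurable split plays out in practice (the fractions below are community rules of thumb, not official Apple figures): macOS caps how much unified memory the GPU may wire by default, and the cap can be raised with a sysctl tweak.

```python
def default_gpu_budget_gb(total_ram_gb: float) -> float:
    """Rough estimate of how much unified memory macOS lets the GPU
    wire by default. The fractions are community rules of thumb, not
    official Apple numbers; the real cap is tunable via sysctl."""
    fraction = 0.75 if total_ram_gb > 36 else 2 / 3
    return total_ram_gb * fraction

print(default_gpu_budget_gb(64))  # a 64GB mini leaves roughly 48GB for the GPU
```

So even before tweaking anything, a 64GB mini gives the GPU more addressable memory than any consumer Nvidia card.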
Sure, but the GPUs are fairly anemic, right? I get that they have more GPU-addressable memory from the shared pool.
I have a 10900K with 65GB RAM and a 3090 with 24GB VRAM lying around gathering dust. 24GB isn't as much as a Mac, but my cores run a whole lot faster. I may be able to run a 34B 4-bit quantized model on that. Granted, the mofo will eat a lot of power.
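Back-of-the-envelope check (my arithmetic; KV cache and runtime overhead come on top, often another 10–30% depending on context length): a 34B model at 4 bits per weight is about 17GB of weights, so it should squeeze into 24GB of VRAM with room for context.

```python
def quantized_weights_gb(params_billion: float, bits_per_weight: int) -> float:
    # Weight storage only: params * bits / 8 bytes each.
    # KV cache, activations and runtime overhead are not included.
    return params_billion * bits_per_weight / 8

print(quantized_weights_gb(34, 4))  # 17.0 GB -> plausibly fits a 24GB 3090
print(quantized_weights_gb(70, 4))  # 35.0 GB -> needs the 32GB+ unified memory tier
```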
There's no need for (local) AI acceleration if you are leveraging a remote LLM (Claude, ChatGPT, etc). The vast, vast majority of users are most likely just making API calls to a remote service. No need for specialized or beefy hardware.
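For that majority case, the "hardware requirement" really is just an HTTP client. A minimal sketch against Anthropic's Messages API (the model ID is illustrative, so check the provider's docs for current names; set `ANTHROPIC_API_KEY` before actually sending):

```python
import json
import os
import urllib.request

API_URL = "https://api.anthropic.com/v1/messages"

def build_request(prompt: str, model: str = "claude-sonnet-4-5") -> urllib.request.Request:
    # Model ID is illustrative; consult current provider docs.
    body = {
        "model": model,
        "max_tokens": 256,
        "messages": [{"role": "user", "content": prompt}],
    }
    headers = {
        "x-api-key": os.environ.get("ANTHROPIC_API_KEY", ""),
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
    }
    return urllib.request.Request(
        API_URL, data=json.dumps(body).encode(), headers=headers, method="POST"
    )

# To actually send (needs a real key and network access):
# with urllib.request.urlopen(build_request("Hello")) as resp:
#     print(json.load(resp)["content"][0]["text"])
```

Any machine that can run this, Raspberry Pi included, is "enough hardware" for remote inference.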
flutas|7 days ago
Because it's the easiest way to give "claw" iMessage access and that's the primary communication channel for a lot of the claw users I've seen.