Some users are moving to local models, I think because they want to avoid the agent's API cost, or because they believe it will be more secure (it isn't necessarily). The Mac Mini has unified memory and can dynamically allocate memory to the GPU from the general RAM pool, so you can run large local LLMs without buying a massive (and expensive) discrete GPU.
I think any of the decent open models that would be useful for this claw frenzy require far more RAM than any Mac Mini you can possibly configure.
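As a back-of-envelope check (the 70B parameter count, quantization levels, and 20% overhead factor here are illustrative assumptions, not measurements of any particular model):

```python
def estimated_ram_gb(params_billions: float, bits_per_weight: int,
                     overhead_factor: float = 1.2) -> float:
    """Approximate RAM needed to hold the model weights, plus ~20%
    for KV cache and runtime overhead (a rough rule of thumb)."""
    weight_bytes = params_billions * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead_factor / 1e9

# A 70B-parameter model at 4-bit quantization:
print(f"{estimated_ram_gb(70, 4):.0f} GB")   # ~42 GB: squeezes into a 64 GB Mini
# The same model at 16-bit precision:
print(f"{estimated_ram_gb(70, 16):.0f} GB")  # ~168 GB: beyond any Mini config
```

So whether a given model "fits" depends heavily on quantization; heavily quantized mid-size models can run on a maxed-out Mini, but full-precision large models cannot.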
The whole point of the Mini is that the agent can interact with all your Apple services: Reminders, iMessage, iCloud. If you don't need any of that, just use whatever you already have, or get a cheap VPS, for example.
If the idea is to have a few claw instances running non-stop and scraping every bit of the web, email, etc., it would probably cost quite a lot of money.
But it still feels safer not to give OpenAI direct access to all my emails, no?
They recommend a Mac Mini because it's the cheapest device that can access your Apple Reminders and iMessage. Assuming you're into that ecosystem, obviously.
If you don't need any of that, then any device or small VPS instance will suffice.
For these types of tasks, or for LLMs in general?