top | item 45957660


PrairieFire | 3 months ago

A future where we carry and manage just one device could be incredible. That said, even if iOS weren't so locked down and were more capable of that, I think I'd find myself frustrated today. I run local LLMs on-device on my iPhone, and even a heavily quantized 3B-parameter model causes the iPhone's thermal management to throttle hard after just a few short prompts, to the point that inference slows below 1 token per second and the phone gets hot to the touch. Maybe the rumored half-iPhone, half-iPad device could be the eventual platform from which something like this emerges.
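The throttling described above is easy to make visible with a simple tokens-per-second monitor over a sliding window. A minimal, generic sketch (the token timestamps here are simulated, not read from a real model, and the 1 tok/s floor is just the threshold mentioned in the comment):

```python
def tokens_per_second(timestamps, window=10):
    """Throughput over the last `window` tokens, given monotonic timestamps in seconds."""
    recent = timestamps[-window:]
    if len(recent) < 2:
        return 0.0
    span = recent[-1] - recent[0]
    return (len(recent) - 1) / span if span > 0 else float("inf")

def is_throttled(timestamps, window=10, floor=1.0):
    """True once sustained throughput drops below `floor` tokens/sec."""
    rate = tokens_per_second(timestamps, window)
    return 0 < rate < floor

# Simulated run: a healthy ~20 tok/s at first, then the device heats up
# and generation slows to ~0.5 tok/s.
ts, t = [], 0.0
for _ in range(30):
    t += 0.05          # healthy phase: one token every 50 ms
    ts.append(t)
for _ in range(10):
    t += 2.0           # throttled phase: one token every 2 s
    ts.append(t)

print(round(tokens_per_second(ts), 2))  # throughput over the last 10 tokens -> 0.5
print(is_throttled(ts))                 # -> True, below the 1 tok/s floor
```

In a real app you would append a timestamp (e.g. from `time.monotonic()`) as each token arrives and could, say, pause or shorten generation once `is_throttled` fires.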


mark_l_watson | 3 months ago

While my main driver is a maxed-out Mac mini hooked to an Apple Studio Display, at least once a week I pack up and store my Mac mini and plug an iPad Pro into my large monitor for a few days.

So, I feel like I routinely experience what we are talking about in this sub-thread. Given a few VPSes to ssh/mosh into for programming, plus a keyboard and mouse, this is a workable setup.

The one thing that always gets me to unpack my Mac mini and set it up again is that even with 16 GB of shared memory on an iPad Pro, I can only run local models in a chat-style app. On macOS, my LLM use is mostly embedded in experimental scripts and apps.

WorldPeas | 3 months ago

Exactly. The real shame of these devices is that they're 99% of the way there, but that last inch, where running some script requires you to whip out a form-identical device that has been blessed with the ability to run uncertified code, is maddening to say the least.

WorldPeas | 3 months ago

Perhaps that's what they're developing all these "private compute" servers for. Though I would be less than happy if Apple, the last (relatively) untaken hill of the SaaS enshittification wars, were to go down that road. In the meantime I will continue to use my hilariously overpowered laptop as an SSH terminal to the machine I actually work on.