
ideamotor | 1 year ago

They won’t really be going with on-device AI; the models require way too much RAM. But maybe they’ll claim that on-device AI “pre-processes” requests and strips personal information before they’re sent to Apple. Apple’s marketing team will spin this better than my description does.
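As a rough back-of-envelope sketch (all figures below are my own illustrative assumptions, not anything Apple has confirmed), weight memory alone scales as parameter count times bytes per parameter:

    # Rough RAM estimate for holding LLM weights on-device.
    # All numbers are illustrative assumptions, not confirmed specs.
    def model_ram_gb(params_billion, bytes_per_param, overhead=1.2):
        # overhead: rough multiplier for KV cache and activations
        return params_billion * bytes_per_param * overhead

    print(f"7B @ 4-bit: ~{model_ram_gb(7, 0.5):.1f} GB")  # ~4.2 GB
    print(f"7B @ fp16:  ~{model_ram_gb(7, 2.0):.1f} GB")  # ~16.8 GB

Even at aggressive 4-bit quantization, a 7B model would crowd the 6-8 GB of RAM that recent iPhones share with the OS and every other app.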

seanmcdirmid | 1 year ago

I think on-device models would be really useful. Imagine a conversational interface with much less latency, so the conversation feels real. I wonder what kind of computing power we will need before we get there (e.g. running an LLM with a lot of prompt data, plus on-device speech recognition). Maybe 5-10 years?

Shish2k | 1 year ago

> Imagine a conversational interface with much less latency

With current models, the latency comes from processing, not from the network. Going from a high-power remote server to a low-power local phone is likely to increase latency more than it reduces it.
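A rough latency budget makes this concrete (the throughput and round-trip figures are assumptions I picked for illustration, not measurements):

    # Ballpark latency for a 100-token reply, cloud vs. on-device.
    # Throughput and RTT figures are assumptions, not measurements.
    NETWORK_RTT_S = 0.08      # ~80 ms round trip to a datacenter
    SERVER_TOK_PER_S = 60.0   # datacenter GPU generation speed (assumed)
    PHONE_TOK_PER_S = 10.0    # quantized model on phone silicon (assumed)
    TOKENS = 100

    cloud = NETWORK_RTT_S + TOKENS / SERVER_TOK_PER_S  # ~1.75 s
    local = TOKENS / PHONE_TOK_PER_S                   # ~10.0 s
    print(f"cloud: ~{cloud:.2f} s, on-device: ~{local:.1f} s")

Under these assumptions the network round trip is under 5% of the cloud total, so the slower local hardware dominates the comparison.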