top | item 44057960


yakbarber | 9 months ago

That's always been the case and was obvious to many from the start.

It really won't be that long until we see some ~GPT4-level LLM embedded locally in a chip on the next iPhone release...



lukan | 9 months ago

Are you aware of what hardware is currently needed to run GPT4?

Something bigger than a smartphone usually.

So small, mobile-optimized LLMs will come - or rather, are already here - but if they managed to make the big GPT4 model run on an iPhone, that would be a pretty big thing in itself, way bigger than GPT5.
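A rough back-of-envelope sketch of why this is hard: just holding the weights of a large model dwarfs a phone's RAM. The numbers below are assumptions - 7B parameters is a typical "small" local model, and GPT4's size is unconfirmed (the ~1.8T figure is only a rumor):

```python
# Rough estimate of memory needed just to store model weights.
# Ignores activations, KV cache, and runtime overhead, which add more.

def weight_memory_gb(params: float, bits_per_weight: int) -> float:
    """Memory to hold the weights alone, in GB."""
    return params * bits_per_weight / 8 / 1e9

# 7B model at 4-bit quantization - fits on a modern phone.
small_local = weight_memory_gb(7e9, 4)

# Rumored ~1.8T-parameter GPT4 at 4-bit - far beyond any phone.
rumored_gpt4 = weight_memory_gb(1.8e12, 4)

print(f"7B @ 4-bit:   {small_local:.1f} GB")    # ~3.5 GB
print(f"1.8T @ 4-bit: {rumored_gpt4:.1f} GB")   # ~900 GB
```

Even with aggressive quantization, the rumored full-size model is a couple of orders of magnitude beyond smartphone memory, which is why the small-model route is the realistic one.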

petra | 9 months ago

But LLMs are used relatively rarely, while on the other hand perf/latency matters a lot to UX, and perf requirements vary (simple question, complex question, visual work).

Those demands are better fulfilled in the cloud.