Apple’s experience has almost nothing to do with “harnessing” LLMs, and everything to do with their wildly misjudged assumption that they could run a viable model on a phone. Useful LLMs require enormous amounts of compute and power, and can only feasibly be run in the cloud, or in a limited way on high-end hardware like an RTX 5090. Apple seems to have missed that the “large” in large language model isn’t just a figure of speech.