top | item 35630677


andrewcamel | 2 years ago

On this topic, Apple is the sleeping giant. Sleeping tortoise, maybe. Everyone else has been fast out of the gates, but Apple has effectively been positioning to leapfrog everyone after a decade-plus of chip design culminating in the M1. Ever since these chips launched, they have felt materially underutilized, particularly their GPU compute. Have to believe something big is going on behind the scenes here.

That said, I wouldn't be surprised if the truth ended up somewhere between cloud-deployed and locally deployed, particularly on the way up to the asymptotic tail of the model performance curve.


smoldesu | 2 years ago

What would a "leap frog" look like, in your mind? I'm struggling to imagine how they're better positioned than the competition, especially after llama.cpp showed us that inference acceleration works with everything from AVX2 to ARM NEON. Compared to Nvidia (or even Microsoft and ONNX/OpenAI), Apple is somewhat empty-handed here. They're not out of the game, but I genuinely see no path for them to dominate "everyone".

yunwal | 2 years ago

My guess is a leapfrog would have more to do with how LLMs are integrated into an operating system, rather than just coming out with a better model. I don’t think we’re gonna get a substantially more capable LLM than GPT-4 anytime soon, but fine-tuning it to sit on top of the core of an operating system could yield results.

ohgodplsno | 2 years ago

M1 GPUs are decent chips, but they're barely tested in real-world compute workloads. They're far from being a sleeping giant.