eqvinox | 6 days ago

> That makes it possible to use the entire unified RAM as the GPU RAM, and reasonably run decent ML models (for code, text, audio, pictures) locally. No CUDA, no kilowatt power supplies. This is the real differentiator.

That might be relevant, and a differentiator, in your circles; it is entirely irrelevant in mine. Plain basic integer performance wins here.