top | item 44224943


m0th87 | 8 months ago

That’s what I hope for, but everything that isn’t bananas expensive with unified memory has very low memory bandwidth. DGX (Digits), Framework Desktop, and non-Ultra Macs are all around 128 GB/s, and will produce single-digit tokens per second for larger models: https://github.com/XiongjieDai/GPU-Benchmarks-on-LLM-Inferen...

So there’s a fundamental tradeoff between cost, inference speed, and hostable model size for the foreseeable future.
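The tradeoff falls out of simple arithmetic: token-by-token decoding has to stream the full weight set from memory for every generated token, so memory bandwidth divided by model size gives a rough ceiling on decode speed. A minimal sketch, where the 70B/4-bit figures are illustrative assumptions rather than measurements:

```python
def decode_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Rough upper bound on decode speed for a memory-bandwidth-bound model:
    every generated token streams all weights once, so
    tokens/s ~ bandwidth / model size."""
    return bandwidth_gb_s / model_size_gb

# ~128 GB/s unified memory (the DGX / Framework Desktop class mentioned above),
# against an assumed 70B-parameter model at 4-bit quantization (~40 GB of weights).
print(round(decode_tokens_per_sec(128, 40), 1))  # ≈ 3.2 tokens/s
```

That lands squarely in the single-digit range the benchmark link reports; real throughput is usually lower still once KV-cache reads and overhead are counted.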
