top | item 45304691

t1amat | 5 months ago

This is the right take. You might get decent token generation (2-3x slower than a GPU rig), which is adequate, but prompt processing is more like 50-100x slower. A hardware solution is needed to make long context actually usable on a Mac.
