msp26 | 11 days ago

Horrific comparison point. LLM inference run locally for a single user is way more expensive per token than batch inference at scale in a datacenter on actual GPUs/TPUs.

AlexandrB | 11 days ago

How is that horrific? It sets an upper bound on the cost, which turns out to be not very high.
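
A rough back-of-envelope of that upper bound, with purely illustrative numbers (the wattage, electricity price, throughput, and hardware amortization below are assumptions, not figures from the thread):

    # Cost per million tokens of single-user local inference,
    # using illustrative numbers (all of these are assumptions).
    power_watts = 350                    # assumed GPU draw under load
    electricity_usd_per_kwh = 0.15       # assumed electricity price
    tokens_per_second = 30               # assumed local decode throughput
    hardware_usd = 1500                  # assumed GPU purchase price
    hardware_life_hours = 3 * 365 * 24   # amortized over ~3 years of uptime

    energy_per_hour = (power_watts / 1000) * electricity_usd_per_kwh
    hardware_per_hour = hardware_usd / hardware_life_hours
    tokens_per_hour = tokens_per_second * 3600

    local_usd_per_mtok = (energy_per_hour + hardware_per_hour) / tokens_per_hour * 1e6
    print(f"local upper bound: ~${local_usd_per_mtok:.2f} per million tokens")

Whatever number comes out is a ceiling on what inference has to cost; batched datacenter inference amortizes the model weights across many concurrent users and should land below that bound.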