item 47147033


lostmsu | 5 days ago

Regular models are very fast if you do batch inference. GPT-OSS 20B gets close to 2k tok/s on a single 3090 at bs=64 (might be misremembering details here).
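The arithmetic behind that claim can be sketched as follows. The numbers (2k tok/s aggregate, bs=64) are the commenter's own recollection, not measurements, and the 500-token request length is an illustrative assumption:

```python
# Sketch of the batch-inference throughput/latency tradeoff.
# All figures are assumptions taken from the comment above.
aggregate_tok_per_s = 2000   # ~2k tok/s summed across the whole batch
batch_size = 64              # bs=64 concurrent sequences

# Aggregate throughput is shared across the batch, so each individual
# request only advances at a fraction of that rate:
per_stream_tok_per_s = aggregate_tok_per_s / batch_size  # 31.25 tok/s

# Wall-clock time for one request to generate 500 tokens at that rate:
tokens = 500
latency_s = tokens / per_stream_tok_per_s  # 16 s, despite "2k tok/s"

print(f"{per_stream_tok_per_s:.2f} tok/s per stream, "
      f"{latency_s:.0f} s to finish {tokens} tokens")
```

High aggregate throughput and low per-request latency are different metrics: batching raises the first while each individual stream still decodes slowly.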


rahimnathwani | 5 days ago

Right, but everyone else is talking about latency, not throughput.