whoevercares | 1 month ago

Absolutely. LLM inference is still greenfield territory: techniques like overlap scheduling and JIT-compiled CUDA kernels are very recent. We're just getting started optimizing for modern LLM architectures, so cost/perf will keep improving fast.
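
For anyone unfamiliar with the term: overlap scheduling hides CPU-side scheduler work (batch assembly, memory bookkeeping) behind GPU execution, so the GPU never idles waiting for the next batch to be prepared. A minimal sketch of the idea in Python (schedule_next_batch and run_on_gpu are hypothetical placeholders, not any real engine's API):

    import threading
    import queue

    def overlap_loop(schedule_next_batch, run_on_gpu, num_steps):
        # Sketch of overlap scheduling: while the GPU executes batch N,
        # the CPU prepares batch N+1, so scheduling cost is hidden
        # behind GPU time instead of being serialized with it.
        batches = queue.Queue(maxsize=1)

        def scheduler():
            for _ in range(num_steps):
                batches.put(schedule_next_batch())  # CPU-side work
            batches.put(None)  # sentinel: no more batches

        threading.Thread(target=scheduler, daemon=True).start()

        while (batch := batches.get()) is not None:
            run_on_gpu(batch)  # overlaps with scheduling of the next batch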
