item 36012580

spacetime_cmplx | 2 years ago

While OP's reply answers your question, it's important not to apply current costs when predicting the future of AI. Hardware for LLMs is one step function away from unimaginable capabilities. That breakthrough could come in performance, cost, or, more likely, both.

Imagine GPT-4 at 1/1000th the cost. That's where we're going. And you can bet your ass Nvidia is working on it as we speak. Or maybe someone else will leapfrog them like ARM did to Intel.
