top | item 46443104

lelele | 2 months ago

At a $1,000/month price point, wouldn't the economics start favoring buying GPUs and running local LLMs? Even if they're weaker, local models can still cover enough use cases to justify the switch.
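The break-even point the comment gestures at can be sketched with a simple calculation. Every figure below other than the $1,000/month price is an assumption for illustration (rig cost, power draw, usage hours, electricity rate), not something stated in the comment:

```python
# Hypothetical break-even sketch. Only the subscription price comes from
# the comment; all other figures are illustrative assumptions.
SUBSCRIPTION_PER_MONTH = 1_000.00  # $/month (from the comment)
RIG_COST = 8_000.00                # up-front cost of a local GPU workstation (assumed)
POWER_DRAW_KW = 0.6                # average draw under inference load, kW (assumed)
HOURS_PER_MONTH = 200              # active inference hours per month (assumed)
ELECTRICITY_RATE = 0.15            # $/kWh (assumed)

def breakeven_months() -> float:
    """Months until the rig's up-front cost is recovered in subscription savings."""
    monthly_power_cost = POWER_DRAW_KW * HOURS_PER_MONTH * ELECTRICITY_RATE
    monthly_saving = SUBSCRIPTION_PER_MONTH - monthly_power_cost
    return RIG_COST / monthly_saving

print(f"Break-even after ~{breakeven_months():.1f} months")
```

Under these assumptions the rig pays for itself in well under a year; the real question is whether the weaker local model's output is acceptable, which the arithmetic can't capture.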

No comments yet.