
infinitezest | 4 months ago

I find LLMs really useful on a daily basis, but I keep wondering: what's going to happen when the VC money dries up and the real cost of inference kicks in? It's relatively cheap now, but it's also being heavily subsidized. The usual answer is to jam ads into your product and slowly raise the price over time (see: Netflix), but I don't know how that would work for LLMs.


evolighting | 4 months ago

You could self-host Ollama, vLLM, or something like that; open models are good enough for simple tasks. With a bit of extra effort and learning, this usually just works for most cases.
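
For example, a minimal sketch in Python, assuming Ollama is already running locally on its default port (11434); the model name "llama3" is a placeholder for whatever model you've actually pulled:

    import json
    import urllib.request

    # Query a locally hosted Ollama server via its REST API.
    # "llama3" is an assumption; substitute any model you've pulled.
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({
            "model": "llama3",
            "prompt": "Explain why local inference avoids per-token API costs.",
            "stream": False,  # return one JSON object instead of a stream
        }).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["response"])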

But in that situation there may be no further model updates, so the future remains uncertain.

Gigachad | 4 months ago

Local LLMs are good for language-based tasks where no specific knowledge is needed, but certainly not for programming.

pickledonions49 | 4 months ago

I've heard that photonic chips might make running this stuff cheaper in data center environments.