item 45141123


m_a_g | 5 months ago

The cost could be reduced immediately and drastically if OpenAI or Anthropic chose to do so.

Simply by stopping the training of new models, they could become profitable the same day.

The existing models already support substantial use cases, and there are numerous unexplored improvements beyond the LLM itself, tailored to each specific use case.


ten_hands | 5 months ago

This only works if all the AI companies collude to stop training at the same time, since the company that trains the last model will have a massive market advantage. That not only seems extremely unlikely but is almost certainly illegal.

palata | 5 months ago

> By simply stopping the training of new models, profitability can be achieved on the same day.

But then they stop being up-to-date with... the world, right?

antiloper | 5 months ago

Current frontier models are not good enough because they still suffer from major hallucinations, sycophancy, and context drift. So there has to be at least one more training cycle (and I have no reason to believe it will be the last; GPT-5 demonstrates that transformer architectures are hitting diminishing returns).

crooked-v | 5 months ago

Ah, but see, those existing use cases allow for merely finite profit, instead of the infinitely growing profit that late-stage capitalism demands.