top | item 41851753


jacinabox | 1 year ago

One would think that the difficulty of making a company profitable while it trains larger and larger LLMs, combined with diminishing returns and the model collapse phenomenon, would make companies want to stop training larger models. I assume they keep doing it because whichever company stops would fall behind in the race to win new rounds of funding, but if that is the case, what is the ultimate valuation these companies are trying to achieve, being valued in the billions already?

Diminishing returns means that users get less marginal benefit from each larger model, and the model collapse phenomenon means that models trained on newer data may end up worse than older models. Have straightforward mitigations been put in place, such as filtering out of the training data those forums where users like to share AI-generated content?
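A crude version of that mitigation might look like the sketch below, which drops training documents whose source domain is on a blocklist of forums known for AI-generated content. The domain names and the document format are hypothetical, for illustration only.

```python
# Sketch of filtering training documents by source domain.
# BLOCKED_DOMAINS is a made-up, illustrative blocklist.
from urllib.parse import urlparse

BLOCKED_DOMAINS = {"ai-art-showcase.example", "gpt-prompts.example"}

def keep_document(doc: dict) -> bool:
    """Return True if the document's source URL is not on the blocklist."""
    host = urlparse(doc.get("url", "")).netloc.lower()
    # Reject the blocked domain itself and any of its subdomains.
    return not any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

corpus = [
    {"url": "https://news.ycombinator.com/item?id=41851753", "text": "..."},
    {"url": "https://forum.gpt-prompts.example/thread/1", "text": "..."},
]
filtered = [doc for doc in corpus if keep_document(doc)]
```

In practice, heuristics like this only catch content that stays on known sites; they do nothing about AI-generated text reposted elsewhere, which is part of why the question remains open.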

discuss

order

minimaxir | 1 year ago

> but if that is the case, what is the ultimate valuation these companies are trying to achieve, being valued in the billions already?

That's why OpenAI/Sam Altman has been memeing AGI. None of this will work unless they make God.