item 46923795


KYRRO | 22 days ago

Ah, this actually connects a few dots for me. It helps explain why models seem to have a natural lifetime: once deployed at scale, they start interacting with and shaping the environment they were trained on. Over time, data distributions, usage patterns, and incentives shift enough that the model no longer functions as the one originally created, even if the weights themselves haven't changed.

That also makes sense of the common perception that a model feels "decayed" right before a new release. It's probably not that the model is getting worse, but that expectations and use cases have moved on: people push it into new regimes, and feedback loops expose mismatches between current tasks and what it was originally tuned for.

In that light, releasing a new model isn’t just about incremental improvements in architecture or scale; it’s also a reset against drift, reflexivity, and a changing world. Prediction and performance don’t disappear, but they’re transient, bounded by how long the underlying assumptions remain valid.

Does that mean the AI companies that "retire" a model do so not only because a new, better model exists, but also because of this decay?

PS. I cleaned up the above with AI (I'm not a native English speaker).


eric15342335 | 22 days ago

Correct me if I am wrong, but I think this is related to the terms "covariate shift" (a change in the model's input distribution p(x)) and "concept drift" (a change in the input-output relationship p(y|x) itself).
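A minimal sketch of the distinction, using synthetic data I made up for illustration (the distributions, thresholds, and variable names are all hypothetical; assumes numpy and scipy are installed): covariate shift moves p(x) while the labeling rule stays fixed, whereas concept drift changes the labeling rule on identical inputs.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Covariate shift: production inputs drawn from a shifted distribution
# relative to training inputs, i.e. p(x) has moved.
x_train = rng.normal(loc=0.0, scale=1.0, size=5000)
x_prod = rng.normal(loc=0.8, scale=1.0, size=5000)

# A two-sample Kolmogorov-Smirnov test is one common way to flag the shift:
# a large statistic / tiny p-value says the two samples differ in distribution.
stat, p_value = ks_2samp(x_train, x_prod)
print(f"KS statistic={stat:.3f}, p={p_value:.3g}")

# Concept drift is different: the inputs are the same, but p(y|x) changed.
# Here the (hypothetical) labeling rule's threshold moves from 0 to 0.5.
y_old = (x_train > 0.0).astype(int)   # original concept
y_new = (x_train > 0.5).astype(int)   # drifted concept
disagreement = float(np.mean(y_old != y_new))
print(f"label disagreement on identical inputs: {disagreement:.1%}")
```

Note the KS test only sees x, so it detects covariate shift but is blind to concept drift; catching the latter needs access to labels (or a proxy like user feedback), which is part of why deployed models can quietly degrade.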

KYRRO | 22 days ago

The interesting part is that true AGI would then not be possible with the current approach, since there is no ceiling/boundary to "contain" the drift?