KYRRO|22 days ago
I have a question. With the logic of neural networks and pattern recognition, is it not then possible to "predict" everything in everything? Like predicting the future to an exact "thing"? Is this not a tool to manipulate, for instance, the stock market?
TuringTest|22 days ago
There are, however, two fundamental problems with computational prediction. The first, obviously, is accuracy. A model is a compressed memorization of everything observed so far; a prediction is just a projection of those observed patterns into the future. In a chaotic system, that only goes so far: the most regular, predictable patterns are obvious to everybody and so yield less return, while the chaotic regimes where prediction would be most valuable are exactly where it is least reliable. You cannot build a perfect oracle that fixes that.
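A tiny sketch of that accuracy limit (my own illustration, using the logistic map as a stand-in for a chaotic system): a "model" that starts off wrong by one part in a million tracks the true trajectory well for a few steps, then diverges until its forecast is no better than a guess.

```python
def logistic(x, r=3.9):
    """One step of the logistic map, chaotic at r = 3.9."""
    return r * x * (1 - x)

def trajectory(x0, steps):
    """Iterate the map from x0, returning the full path."""
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1]))
    return xs

true_path = trajectory(0.2, 50)
model_path = trajectory(0.2 + 1e-6, 50)  # "model" is off by one millionth

# Near the start the two paths agree closely; by the end of the run
# the error has grown to the full scale of the attractor.
early_error = abs(true_path[5] - model_path[5])
late_error = max(abs(a - b) for a, b in zip(true_path[40:], model_path[40:]))
print(early_error, late_error)
```

The exponential growth of a tiny initial error is the standard signature of chaos; it is why "projecting the observed pattern forward" has a hard horizon regardless of how good the fit is.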
The second problem is more insidious. Even if you could build a perfect oracle, acting on its predictions would make you part of the system itself. That changes the outcomes, so the system behaves differently from the data it was trained on, and the model becomes less reliable. If several people do this at the same time, there is no way to retrain the model fast enough to account for the new behaviour.
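A toy model of that reflexivity (entirely hypothetical numbers, a made-up market, not anything from the thread): a "model" has perfectly memorized a seasonal price pattern, but traders acting on its predictions erode the very pattern it learned, and the more crowded the trade, the worse the predictions get.

```python
def seasonal_price(t):
    """Underlying 'fundamental' pattern the model memorized:
    price oscillates around 100, up on odd steps, down on even steps."""
    return 100 + (1 if t % 2 else -1)

def model_prediction(t):
    """The model has memorized the historical pattern exactly."""
    return seasonal_price(t)

def realized_price(t, n_traders):
    """Each trader front-runs the predicted move; their combined
    impact cancels part (or all) of the pattern being exploited."""
    base = seasonal_price(t)
    anticipated_move = model_prediction(t) - 100
    impact = -0.1 * n_traders * anticipated_move  # crowding erodes the edge
    return base + impact

def avg_error(n_traders, steps=100):
    """Mean absolute prediction error once traders act on the model."""
    return sum(abs(model_prediction(t) - realized_price(t, n_traders))
               for t in range(steps)) / steps

err_alone = avg_error(n_traders=1)    # small impact: model still nearly right
err_crowd = avg_error(n_traders=20)   # crowded trade: pattern gone
print(err_alone, err_crowd)           # prints 0.1 2.0
```

With one trader the prediction is off by 0.1 on average; with twenty, the combined impact overshoots the pattern and the average error grows to 2.0, even though the model itself never changed. That is the feedback loop described above: the act of exploiting a pattern is what destroys it.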
There is a possibility (but no guarantee) of reaching a fixed point, a Nash equilibrium where the system settles into a stable cycle, but that's unlikely in a changing environment where everybody is trying to outdo everyone else.
KYRRO|22 days ago
That also makes sense of the common perception that a model feels “decayed” right before a new release. It’s probably not that the model is getting worse, but that expectations and use cases have moved on, people push it into new regimes, and feedback loops expose mismatches between current tasks and what it was originally tuned for.
In that light, releasing a new model isn’t just about incremental improvements in architecture or scale; it’s also a reset against drift, reflexivity, and a changing world. Prediction and performance don’t disappear, but they’re transient, bounded by how long the underlying assumptions remain valid.
Does that mean AI companies "retire" a model not only because of their new, better model, but also because of this kind of decay?
PS. I cleaned up the text above with AI (I'm not a native English speaker).