laurencei | 1 year ago
As someone who doesn't understand ML - I have always assumed the whole point of ML is to try different things in the game, almost randomly, and over (long) periods of time the AI gets better and better at the game.
If a single unexpected event causes such a large swing in outcome, and the AI can't "explain" what is different to cause the swing, then what exactly is the ML doing for it to fail on such a seemingly simple change? Doesn't that defeat the whole purpose?
I'm obviously missing something obvious - because I would assume the real goal of ML is that it can teach itself the game, even if that involves unexpected situations, as a human does?
queuebert|1 year ago
This tells me the algo is trying too hard to predict the game or learn a decent static strategy, rather than make situational decisions.
stetrain|1 year ago
If nobody includes the full-moon message as an input to the ML model, and the model runs in full-moon mode with only the training it achieved in non-full-moon mode, its score in full-moon mode may well be lower.
Even if it had proportional training time in full-moon mode, if you don't tell it when full-moon mode is active, wouldn't the optimal behavior be to optimize the score for the 27/28 days vs the 1/28 days of the month?
If full-moon mode is an input to the model, then it can be trained to optimize for both scenarios.
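The averaging argument above can be sketched with made-up numbers (the payoffs, action names, and 1/28 frequency are illustrative assumptions, not from the actual game): a policy that cannot observe the mode must pick one action for both modes, so it rationally sacrifices the rare mode, while a policy that takes the mode as input can do well in both.

```python
# Hypothetical payoff table: score of each action in each mode.
# All numbers are illustrative, not measured from the real game.
payoffs = {
    "normal":    {"aggressive": 10, "cautious": 6},
    "full_moon": {"aggressive": -20, "cautious": 5},
}
P_FULL_MOON = 1 / 28  # assumed frequency of full-moon-mode

def expected_score(action_by_mode):
    """Expected score of a policy mapping each mode to an action."""
    return ((1 - P_FULL_MOON) * payoffs["normal"][action_by_mode["normal"]]
            + P_FULL_MOON * payoffs["full_moon"][action_by_mode["full_moon"]])

# Static policy: mode is NOT an input, so one action must serve both modes.
static_best = max(
    ({"normal": a, "full_moon": a} for a in ("aggressive", "cautious")),
    key=expected_score,
)

# Conditional policy: mode IS an input, so each mode gets its own best action.
conditional = {
    mode: max(actions, key=actions.get) for mode, actions in payoffs.items()
}

print("static:     ", static_best, round(expected_score(static_best), 2))
print("conditional:", conditional, round(expected_score(conditional), 2))
```

With these numbers the best static policy plays "aggressive" everywhere (expected score ~8.93), accepting a disaster on full-moon days because they are only 1/28 of the time; the conditional policy switches to "cautious" on full-moon days and scores higher overall (~9.82). That is the sense in which ignoring the mode makes tanking the rare day the optimal behavior.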
shagie|1 year ago
I predict the next "annoying non-bug" will be Friday, June 13th of 2025.