top | item 28084367

elandau25 | 4 years ago

Hi everyone, I wrote the article. I do consider this overfitting because we are training on these frames many more times than would normally be advised for a training set of this size, so that the error on these frames is essentially zero. The model performs well "out of sample" here, but only on out-of-sample data that is semantically close to the original training set. Besides, overfitting is defined procedurally, not by how well the model performs: you could have an overfit model that happens to perform well on some data it was not trained on, but that doesn't change the fact that the model was overfit.

thaumasiotes | 4 years ago

> Besides, overfitting is defined procedurally, not by how well it performs.

Huh? That's the opposite of the truth.

Compare https://en.wikipedia.org/wiki/Overfitting :

> In statistics, overfitting is "the production of an analysis that corresponds too closely or exactly to a particular set of data, and may therefore fail to fit additional data or predict future observations reliably".

> The essence of overfitting is to have unknowingly extracted some of the residual variation (i.e. the noise) as if that variation represented underlying model structure.

Procedural concerns are not part of the concept. Conceptually, overfitting means including information in your model that isn't relevant to the prediction you're making, but that is helpful, by coincidence, in the data you're fitting the model to.

But since that can't be measured directly, you instead detect overfitting through performance.

elandau25 | 4 years ago

If we're taking Wikipedia as ground truth, the next line is:

> An overfitted model is a statistical model that contains more parameters than can be justified by the data.

Another definition from https://www.ibm.com/cloud/learn/overfitting#:~:text=Overfitt...:

> Overfitting is a concept in data science, which occurs when a statistical model fits exactly against its training data.

We can argue over the precise definition of overfitting, but when you fit a high-capacity model exactly to the training data, that is a procedural question and, I would argue, falls under the overfitting umbrella.

ALittleLight | 4 years ago

I would think a clear sign of overfitting is your training loss decreasing while your validation loss is increasing. That tells me your model is learning to perform better on the training data at the cost of performing worse on more general data.
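That divergence sign can be sketched in a few lines (a hypothetical helper; the function name and window size are illustrative, not from the article):

```python
def diverging(train_losses, val_losses, window=3):
    """Return True if, over the last `window` epochs, training loss
    kept strictly decreasing while validation loss kept strictly
    increasing -- the overfitting signal described above."""
    if len(train_losses) < window + 1:
        return False
    t = train_losses[-(window + 1):]
    v = val_losses[-(window + 1):]
    train_down = all(b < a for a, b in zip(t, t[1:]))
    val_up = all(b > a for a, b in zip(v, v[1:]))
    return train_down and val_up

# Training loss keeps improving while validation loss degrades:
train = [1.0, 0.8, 0.6, 0.5, 0.4]
val = [1.1, 0.9, 1.0, 1.1, 1.2]
print(diverging(train, val))  # True
```

In practice you'd smooth the losses or use a patience counter (as early stopping does) rather than require strict monotonicity, but the idea is the same.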

Is that happening when you train these micromodels? If not, I have a hard time seeing how it's overfitting because the model is still performing well for the data you train it on and use it on. If that is happening, then I don't see the benefit of it. A model that wasn't overfit would just do better at the task of collecting additional training data.

I think the approach you're talking about makes sense - create a simple model rapidly and leverage it to get more training data which you can then use to refine the model to be better still. I just don't think the term "overfitting" describes that process well - unless I'm misunderstanding something.

elanning | 4 years ago

As I understand it, overfitting is low bias but high variance. It's perfectly fitting 5 linear data points with a complex polynomial when the underlying function was a line. The polynomial thus doesn't generalize well to data points outside the training set. Your model seems to be fitting points in both the training set and the evaluation set just fine. Of course, if different Batmans were in the evaluation set, it would suddenly do terribly, but you can do that to pretty much every machine learning model. It would violate a lot of underlying assumptions of statistics and machine learning, e.g. i.i.d. samples and the evaluation set and training set coming from the same distribution. Your definition of overfitting thus seems more like transfer learning, in some sense.
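The polynomial example above is easy to reproduce (a minimal sketch; the seed, noise scale, and evaluation points are arbitrary choices, not from the thread):

```python
import numpy as np

# 5 points from a line plus a little noise -- the true function is linear.
rng = np.random.default_rng(0)
x = np.arange(5.0)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.1, 5)

# A degree-4 polynomial interpolates all 5 points exactly (zero training
# error); a degree-1 fit matches the true underlying structure instead.
poly = np.polynomial.Polynomial.fit(x, y, deg=4)
line = np.polynomial.Polynomial.fit(x, y, deg=1)

# Outside the training range, the two fits behave very differently.
x_new = np.array([6.0, 7.0])
true_new = 2.0 * x_new + 1.0
poly_err = np.abs(poly(x_new) - true_new)
line_err = np.abs(line(x_new) - true_new)
print(poly_err, line_err)
```

The degree-4 fit has absorbed the noise as structure, so its extrapolations tend to stray far from the line, while the degree-1 fit stays close: low bias, high variance in action.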

elandau25 | 4 years ago

I see what you are saying, but in that context you lose most people's intuitive definition of overfitting. If I train a model on one image as my training set, then change one random pixel and run the model on that as my eval set, your argument would be that this is not overfitting, because the model performs well on the eval set it was created for.
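To make that thought experiment concrete (everything here is hypothetical and illustrative -- a toy "model" that has simply memorized its single training image):

```python
import numpy as np

rng = np.random.default_rng(1)
train_img = rng.integers(0, 256, size=(8, 8))  # the single training image
train_label = "batman"

def memorized_model(img, tol=300):
    # A stand-in for a model trained to zero error on one example:
    # it returns the memorized label whenever the input is close
    # enough to the training image, and abstains otherwise.
    return train_label if np.abs(img - train_img).sum() <= tol else None

# The "eval set": the same image with one pixel changed.
eval_img = train_img.copy()
eval_img[3, 4] = (eval_img[3, 4] + 128) % 256

print(memorized_model(train_img))  # batman -- zero training error
print(memorized_model(eval_img))   # batman -- perfect on this eval set, too
```

The model scores perfectly on both sets, yet it has clearly just memorized one example: exactly the situation where a procedural definition of overfitting and a performance-based one come apart.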

My argument is that, compared to models as most people use them, micro-models are low bias and high variance, and thus overfit. That's why I draw a distinction between a batman model and a batman micro-model.

MillenialMan | 4 years ago

Is that overfitting? I thought overfitting was explicitly when your model fits itself so well to the training data that its ability to generalise to your target environment begins to decline. The size of the training set and the performance on it don't really matter; all that matters is whether performance in the specific environment you intend to deploy the model in starts dropping. In your case, you're training the models for a very narrow deployment target, and they remain competent there.

Couldn't you use this logic to say that AlphaGo is overfit because it can only play Go, not chess?