jogundas | 4 years ago

A nice example of overfitting!

However, it is hard to imagine an actual application of the process. If I understand it correctly, the author suggests using a set of micro-models for annotating a dataset which is then used to train another model. The latter model can actually detect Batman in a general environment, ie, can generalize. However, enriching a training dataset by adding adjacent frames depicting Batman from the same movie will likely have limited usefulness when training an actual Batman detection (non-micro!) model. Or do I get the final application wrong?
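The pipeline being described — a micro-model overfit to one clip pseudo-labels the adjacent frames, and those labels are merged into the training set for the general detector — can be sketched roughly as follows. Everything here (`micro_model`, the frame dicts, the `has_batman` flag) is a hypothetical stand-in, not the author's actual code:

```python
# Hedged sketch of the auto-labeling loop: a micro-model, deliberately
# overfit to a single clip, annotates the adjacent frames of that clip.
# The "detector" below is faked with a flag so the example is runnable.

def micro_model(frame):
    # Stand-in for an overfit single-clip detector: returns one bounding
    # box whenever the frame "contains Batman".
    return [(10, 20, 50, 80)] if frame["has_batman"] else []

def pseudo_label(frames):
    """Annotate every frame in the clip with the micro-model's detections."""
    return [{"frame_id": f["id"], "boxes": micro_model(f)} for f in frames]

# Adjacent frames around a hand-labelled detection in one movie clip.
clip = [
    {"id": 100, "has_batman": True},
    {"id": 101, "has_batman": True},
    {"id": 102, "has_batman": False},
]

annotations = pseudo_label(clip)
labelled = [a for a in annotations if a["boxes"]]
print(len(labelled))  # → 2 frames gained pseudo-labels
```

The enriched annotations would then be appended to the dataset used to train the general (non-micro) model, which is exactly the step the comment questions the value of.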


elandau25 | 4 years ago

Thanks! You have the application correct, but there are many ways in which we use this. One example is if you are trying to build models that require sequentially annotated images (like action recognition). Another is creating many micro-models that each detect only one type of object, even though your general model will have to detect multiple objects.
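That second use case — several single-class micro-models annotating the same frames, with their outputs merged into multi-class ground truth for the general detector — might look like this. The class names, frame ids, and `make_micro_model` helper are illustrative assumptions, not from the article:

```python
# Hedged sketch: each micro-model detects exactly one class; the union of
# their detections becomes multi-class training labels. Detections are
# faked with fixed frame-id sets so the example is runnable.

def make_micro_model(cls, hits):
    # Stand-in single-class detector: returns a box for frame ids in `hits`.
    return lambda frame_id: (
        [{"cls": cls, "box": (0, 0, 10, 10)}] if frame_id in hits else []
    )

micro_models = [
    make_micro_model("batman", {1, 2}),
    make_micro_model("batmobile", {2, 3}),
]

def merge_annotations(frame_ids):
    """Union the per-class micro-model detections into one label set."""
    return {
        fid: [det for m in micro_models for det in m(fid)]
        for fid in frame_ids
    }

labels = merge_annotations([1, 2, 3])
print(sorted(d["cls"] for d in labels[2]))  # → ['batman', 'batmobile']
```

The merged labels cover multiple classes even though no single micro-model ever generalizes beyond its own.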

In general, you are right in theory that this method annotates data that is correlated with the original set, but in practice it is still quite useful. Having more ground truth to work with gives a lot more practical flexibility with things like sampling, testing your model, randomization, and training more robust versions of your model.