soist|1 year ago
There is no such thing as too much optimization. Early stopping exists to prevent overfitting to the training set. It's a trick, just like most advances in deep learning, because the underlying mathematics is fundamentally not suited to creating intelligent agents.
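To make the early-stopping point concrete, here is a minimal sketch of the usual recipe: halt training once the validation loss stops improving for a fixed number of epochs (`patience`). The function name and the toy loss sequence are illustrative, not from any particular library.

```python
# Minimal early-stopping sketch: stop once validation loss has not
# improved for `patience` consecutive epochs. The loss values below
# are a made-up sequence standing in for a real training loop.

def early_stop_epoch(val_losses, patience=3):
    """Return the epoch index at which training would halt."""
    best = float("inf")
    bad_epochs = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best = loss
            bad_epochs = 0  # improvement resets the counter
        else:
            bad_epochs += 1
        if bad_epochs >= patience:
            return epoch  # validation loss has plateaued or risen
    return len(val_losses) - 1  # never triggered; trained to the end

losses = [1.0, 0.8, 0.7, 0.72, 0.75, 0.78, 0.80]
print(early_stop_epoch(losses))  # -> 5: three epochs with no improvement after epoch 2
```

The point is that nothing about the training objective itself is being optimized less; training simply stops at the point where further fitting helps the training set but hurts held-out data.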
rocqua|1 year ago
soist|1 year ago