It’s inherent to their architecture. A feedforward network with ReLU (or any piecewise-linear) activation is effectively one massive piecewise linear function, so far enough outside the training range its output becomes exactly linear. Unless I’m missing something, that makes it literally impossible for such a network, no matter how large or how much data it’s trained on, to give accurate out-of-training-bounds predictions for even a ridiculously simple function like x^2.
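A quick sketch of the point (with arbitrary fixed weights standing in for trained ones, since the argument doesn’t depend on their values): far from the origin, every ReLU unit’s input has a fixed sign, so the whole network is exactly linear there and its second-order finite difference vanishes, while x^2 keeps curving everywhere.

```python
import numpy as np

rng = np.random.default_rng(0)

# A small ReLU MLP with arbitrary fixed weights (stand-ins for trained
# ones; the piecewise-linearity argument holds for any values).
W1, b1 = rng.normal(size=(16, 1)), rng.normal(size=16)
W2, b2 = rng.normal(size=(16, 16)), rng.normal(size=16)
w3, b3 = rng.normal(size=16), rng.normal()

def mlp(x):
    h = np.maximum(0.0, W1 @ np.array([x]) + b1)  # ReLU layer 1
    h = np.maximum(0.0, W2 @ h + b2)              # ReLU layer 2
    return float(w3 @ h + b3)                     # linear readout

# Second-order finite difference f(x-h) - 2 f(x) + f(x+h):
# zero for a linear function, constant 2*h^2 for f(x) = x^2.
def second_diff(f, x, h=1.0):
    return f(x - h) - 2.0 * f(x) + f(x + h)

# Far outside any plausible training range the net has no ReLU
# breakpoints left, so it is exactly linear (curvature ~ 0):
print(abs(second_diff(mlp, 1e4)))
# ...whereas x^2 has the same nonzero curvature everywhere:
print(second_diff(lambda x: x * x, 1e4))  # 2.0
```

The same limitation hits smooth activations like tanh from the other direction: they saturate, so the network tends toward a constant (or affine) function at infinity, which still can’t track x^2.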