(no title)
rrobukef | 1 year ago
"You train them by comparing the output to the original." ->
You train neural networks by producing output for known input, comparing that output to the expected output with a cost function, and updating your system toward minimizing the cost, repeatedly, until it stops improving or you tire of waiting. To work mathematically, a cost function must attain its minimum exactly when the output matches the expected output. Engineering-wise you can possibly fudge things, and they probably do so ... now.
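The loop described above can be sketched in a few lines. This is a minimal illustration, not anyone's actual code: a one-parameter linear model fit by gradient descent on a mean-squared-error cost, where the data, learning rate, and step count are all made-up assumptions.

```python
def cost(w, data):
    # MSE: minimal (zero) exactly when every output matches its target
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def grad(w, data):
    # derivative of the MSE above with respect to w
    return sum(2 * x * (w * x - y) for x, y in data) / len(data)

def train(data, lr=0.1, steps=200):
    w = 0.0
    for _ in range(steps):          # "repeatedly, until you tire of waiting"
        w -= lr * grad(w, data)     # update toward minimizing the cost
    return w

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # samples of y = 2x
w = train(data)
print(round(w, 3))  # converges to 2.0
```

Here the data is noiseless, so the cost really can be driven to its minimum of zero; real training data rarely allows that.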
I don't agree with your critiques. It isn't an oversimplification; published code literally works as stated.
cdrini | 1 year ago
Ah, I see what they meant by that statement. It is true that supervised learning operates on labelled input/output pairs, and that neural networks generally use gradient descent/backpropagation. (Disclaimer: it's been a few years since I've done any of this myself, so I don't remember it that well, and the field has changed a lot.) Note that since the parameter space of the neural network is usually _significantly_ smaller than the training data set, a network will not tend to drive that cost function near 0 for an individual sample, since doing so would worsen the overall result. There is inherent "fudging", although near-identical output can still happen. The statement here is more reasonable, and closer to the actual training process, than the first.
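The point about parameters vs. samples is easy to demonstrate with a toy example (my own sketch, with made-up noisy data): a one-parameter model fit to three samples minimizes the *average* cost, and no individual sample's error reaches zero.

```python
# Three noisy samples of roughly y = 2x, but one free parameter w.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]

# Closed-form least-squares solution for y ≈ w * x: this is the exact
# minimizer of the average squared error over all three samples.
w = sum(x * y for x, y in data) / sum(x * x for x, _ in data)

# Per-sample residuals at the optimum: none is exactly zero, because
# pushing any one of them to zero would raise the error on the others.
residuals = [w * x - y for x, y in data]
print(w, residuals)
```

Driving the cost to zero on every sample would require at least as many effective degrees of freedom as constraints, which is exactly the situation the comment says typical networks are not in.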