Nobody has pointed this out yet: it would be very interesting to keep finding perturbations that break the model, repeatedly add the newly found examples to the training set, and retrain. I wonder whether, after a finite number of iterations, the resulting model would be near-optimal (i.e., impossible to perturb an input into a misclassification without destroying its human recognizability) -- or, if that's impossible, whether we could prove precisely why.
murbard2 | 11 years ago