
Gradient Descent Finds Global Minima of Deep Neural Networks

133 points | superfx | 7 years ago | arxiv.org

57 comments

[+] fwilliams|7 years ago|reply
It's worth noting that the primary result of this paper has only to do with the error on the training data under empirical risk minimization. Zero training error =/= a model that generalizes. For any optimization problem, you can always add enough parameters to achieve zero error on a problem over a finite training set (imagine introducing enough variables to fully memorize the map from inputs to labels).
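The "introduce enough variables to memorize the map" point can be made concrete with a toy "model" that spends one parameter per training example (an illustrative sketch, not anything from the paper):

```python
# A "model" with one parameter per training example: a lookup table.
# It reaches zero training error on any finite dataset with distinct
# inputs, but falls back to a constant guess on unseen inputs --
# zero training error says nothing about generalization.
def fit_by_memorizing(inputs, labels):
    table = dict(zip(inputs, labels))   # one "parameter" per example
    default = labels[0]                 # arbitrary guess off-dataset
    return lambda x: table.get(x, default)

train_x = [(0, 0), (0, 1), (1, 0), (1, 1)]
train_y = [0, 1, 1, 0]                  # XOR labels
model = fit_by_memorizing(train_x, train_y)

train_error = sum(model(x) != y for x, y in zip(train_x, train_y))
print(train_error)  # 0 on the training set, by construction
```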

The major contribution of the work is showing that ResNet needs a number of parameters which is polynomial in the dataset size to converge to a global optimum in contrast to traditional neural nets which require an exponential number of parameters.

[+] sytelus|7 years ago|reply
There is a bit of a difference between fitting a dataset with some convenient parameterized function and actually finding the global minimum of a non-convex function. Also, the paper claims this can be done in polynomial time.

> The current paper proves gradient descent achieves zero training loss in polynomial time for a deep over-parameterized neural network with residual connections (ResNet).

[+] MAXPOOL|7 years ago|reply
One way to tackle the problem you state would be to find similar bounds under regularization.
[+] iandanforth|7 years ago|reply
My hope is that as these bounds are refined we can start to do back-of-the-envelope calculations such as, "I have 50k training images of 512x512x3 and 1k classes; this means I'll need a convolutional ResNet of at most 12 layers and 12M params to fit the training data, so let's start at half that," rather than today's "let's use ResNet-101 and see if that works."
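As a sketch of what such a calculation might look like once the constants are pinned down (the bound shape and the exponent below are made up for illustration, not taken from the paper):

```python
# Hypothetical back-of-the-envelope parameter budget. The bound shape
# "params >= c * n**k" is assumed purely for illustration; the real
# constants would come from a refined version of the theorem.
def budget_params(n_examples, c=1.0, k=1.2):
    return int(c * n_examples ** k)

n = 50_000                   # 50k training images
upper = budget_params(n)     # hypothetical "enough to fit" budget
start = upper // 2           # "start at half that", per the comment above
print(upper, start)
```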
[+] trevyn|7 years ago|reply
You have to include some sense of what the classes encode for this to have meaning — for example, “pictures of correct mathematical proofs” vs “pictures of incorrect mathematical proofs” is going to require a much different architecture than “pictures of squares” vs “pictures of circles”.
[+] ramgorur|7 years ago|reply
I did not understand the paper very well.

1. It's theoretically impossible to guarantee a convergence to global optima using gradient descent if the function is non-convex.

2. The only way to guarantee convergence is to start gradient descent from different points in the search space, or to try different step sizes if the algorithm always starts from the same point.

3. Also, does "achieving zero training loss" mean the network has converged to the global optimum? I thought you could get zero training loss at a local minimum as well.

Please correct me if I am wrong.

[+] LolWolf|7 years ago|reply
1. > It's theoretically impossible to guarantee a convergence to global optima using gradient descent if the function is non-convex.

This is false. See, e.g., [0][1].

2. I'm not really sure what the question is here.

3. If your loss is bounded below by 0 (it is a squared norm) and you achieve 0 loss, then 0 is a global optimum, since by definition no other objective value can be smaller than this number.
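Formally, the argument is just:

```latex
L(\theta) \ge 0 \;\; \forall \theta
\quad\text{and}\quad
L(\theta^\star) = 0
\;\;\Longrightarrow\;\;
L(\theta^\star) \le L(\theta) \;\; \forall \theta ,
```

i.e. any parameter achieving the lower bound is a global minimizer.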

---

[0] Theorem A.2 in Udell's Generalized Low-Rank models paper https://arxiv.org/pdf/1410.0342.pdf

[1] B&V Convex Optimization (https://web.stanford.edu/~boyd/cvxbook/), Appendix B.1. In fact, I can't find the reference right now, but you can easily prove that GD with an appropriate step-size converges to a global optimum on this problem when initialized at (0,0), even though the problem is non-convex.
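I don't have that example at hand either, but as a stand-in, here is plain gradient descent on the (non-convex) double well f(x) = (x^2 - 1)^2 converging to a global minimum from a suitable start; this is an illustrative function of my own choosing, not the book's:

```python
# Gradient descent on the non-convex double well f(x) = (x^2 - 1)^2.
# Global minima at x = +1 and x = -1; x = 0 is a local maximum.
# From x0 = 0.5 with a small step size, GD reaches the global minimum
# at x = 1 -- non-convexity alone doesn't rule out convergence.
def f(x):
    return (x * x - 1.0) ** 2

def grad(x):
    return 4.0 * x * (x * x - 1.0)

x, lr = 0.5, 0.05
for _ in range(200):
    x -= lr * grad(x)

print(x, f(x))  # x ~= 1.0, f(x) ~= 0.0
```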

[+] nshm|7 years ago|reply
The title of the paper is really misleading, and the comments here are even more misleading. The key is their theorem, where they say "with high probability over random initialization". They initialize many times, and sometimes it converges. A single initialization can get stuck in a local minimum, of course.
[+] TTPrograms|7 years ago|reply
1) Just because you can prove convergence for convex functions does not mean you can't prove it for non-convex functions.

2) This is "a way" not "The only way".

(If A then B) does not imply (if not A then not B)

[+] RandyRanderson|7 years ago|reply
Feed-forward NNs with even one hidden layer are universal approximators. That is, they do find the global min. What this doesn't tell you is that the network is likely huge and will take a looong time to optimize for even trivial datasets. There are lots of proofs around. That's why SGD is used, and on only a small subset of training points at a time.

Re 2: No.

Re 3: Yes.
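The "small subset of training points at a time" part is just minibatch SGD; a minimal sketch on a toy least-squares problem (illustrative, not from the paper):

```python
import random

# Minibatch SGD fitting y = 2x with squared loss. Each step uses the
# gradient computed on a small random batch rather than the full
# training set, which is the whole point of SGD on large datasets.
random.seed(0)
data = [(x, 2.0 * x) for x in range(1, 21)]   # perfectly realizable

w, lr, batch = 0.0, 0.001, 4
for _ in range(2000):
    xs = random.sample(data, batch)
    # d/dw of mean (w*x - y)^2 over the batch
    g = sum(2.0 * (w * x - y) * x for x, y in xs) / batch
    w -= lr * g

print(w)  # ~= 2.0, the true slope
```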

[+] charleshmartin|7 years ago|reply
An excellent paper, which uses (some of the) results we have also found studying the weight matrices of neural networks: namely, that they rarely undergo rank collapse.

https://calculatedcontent.com/2018/09/21/rank-collapse-in-de...

But they miss something: the weight matrices also display power-law behavior.

https://calculatedcontent.com/2018/09/09/power-laws-in-deep-...

This is also important because it was suggested in the early 90s that heavy-tailed spin glasses would have a single local minimum.

This fact is the basis of my early suggestion that DNNs would exhibit a spin funnel.

[+] gogut|7 years ago|reply
This paper appeared in November, but in fact Allen-Zhu (MSR, http://people.csail.mit.edu/zeyuan/ ) already posted his result in October. This is their first paper, from October: https://arxiv.org/pdf/1810.12065.pdf, and this is their second paper, from November: https://arxiv.org/pdf/1811.03962.pdf . In the MSR October paper, they proved how to train an RNN (which is even harder than a DNN). In their November paper, they proved how to train a DNN. Compared to the October one, the November one is actually much easier. The reason is that in an RNN every layer has the same weight matrix, whereas in a DNN every layer can have a different weight matrix. Originally they were not planning to write the DNN paper; they only did it because someone complained that an RNN is not a multilayer neural network.

In summary, the difference between the MSR paper and this paper is the following: let H denote the number of layers and m the number of hidden nodes. The MSR paper shows that one only needs to assume m > poly(H) for SGD to find the global optimum. Du et al. have a similar result, but they have to assume m > 2^{O(H)}. Compared to the MSR paper, Du et al.'s paper is actually pretty trivial.
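To illustrate the gap between the two kinds of bounds (the exponent and base below are placeholders, not the papers' actual constants; the point is polynomial vs exponential growth in depth H):

```python
# Hypothetical widths required by a poly(H) bound vs a 2^{O(H)} bound.
# Exponent 3 and base 2 are made-up stand-ins for the real constants.
for H in (5, 10, 20, 30):
    print(H, H ** 3, 2 ** H)   # at H=30: 27,000 vs ~1.07 billion
```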

[+] brentjanderson|7 years ago|reply
Although I'm no expert, isn't this result an incredibly important contribution? This paper claims to prove that:

> The current paper proves gradient descent achieves zero training loss in polynomial time for a deep over-parameterized neural network with residual connections (ResNet).

If this variant of gradient descent is able to reach global minima in polynomial time, and if neural networks are proven to approximate any function, then ostensibly this technique could be used to guarantee the lowest error possible in approximating any function. This seems incredibly important. Can someone correct my reading of the abstract?

[+] cosmic_ape|7 years ago|reply
This is not necessarily a trivial fact, but I wouldn't call it incredible. It says a net trained with gradient descent can fit the training data perfectly if it's much larger than the dataset.

But this says nothing about generalization, i.e., performance on the test set -- which is what we really want.

[+] hooloovoo_zoo|7 years ago|reply
Well, without reading the whole paper, two important things strike me.

1. Zero training loss is impossible in most networks because the last layer can only reach the targets asymptotically.

2. Zero training loss means nothing from a practical standpoint. We've had algorithms capable of this for a long time (k-NN with k=1, decision trees, etc.).
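For instance, 1-NN trivially achieves zero training error whenever the training inputs are distinct (a minimal sketch):

```python
# 1-nearest-neighbor: predict the label of the closest training point.
# On the training set itself, each point's nearest neighbor is itself
# (distance 0), so training error is zero -- yet that says nothing
# about test performance.
def one_nn(train):
    def predict(x):
        return min(train, key=lambda p: sum((a - b) ** 2
                                            for a, b in zip(p[0], x)))[1]
    return predict

train = [((0.0, 0.0), "a"), ((1.0, 0.0), "b"), ((0.0, 1.0), "a")]
predict = one_nn(train)
train_error = sum(predict(x) != y for x, y in train)
print(train_error)  # 0
```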

[+] Xcelerate|7 years ago|reply
The paper claims to reach the global minimum of a given neural network in polynomial time. The time complexity of constructing a neural network that approximates any function is an entirely different matter. I’m not even sure how one would begin to approximate a highly algorithmic process (e.g., a hash function) using a neural network.
[+] jey|7 years ago|reply
I interpret it as saying: "non-convexity ain't no big deal in this context", and this is the latest contribution in a line of other "no spurious local minima" machine learning theory research. It's interesting for showing that it applies to a rather complex model like ResNet.
[+] juskrey|7 years ago|reply
How do you gradually detect a Dirac stick?
[+] pj_mukh|7 years ago|reply
With ML algorithms in such wide use, it's still funny to see a paper begin with "One of the mysteries of deep learning is that..." and then go on to lay out the multiple ways we have no idea why some of these DL techniques work.
[+] orf|7 years ago|reply
> One of the mysteries in deep learning is random initialized first order methods like gradient descent achieve zero training loss, even if the labels are arbitrary

Can someone expand on this? I've never heard of this before, at least not in the general case.

[+] currymj|7 years ago|reply
The paper “Understanding Deep Learning Requires Rethinking Generalization” was where this was first pointed out, I think.

They shuffled the labels on their datasets, so there can’t possibly be anything to learn, yet got zero training loss, meaning the network must be severely overfitting. Yet the same network trained with the actual labels shows quite good generalization. So the usual intuition about overfitting and the bias-variance tradeoff doesn’t seem to apply.
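A low-tech analogue of that experiment: an over-parameterized model (here a degree n-1 interpolating polynomial, one coefficient per point, standing in for a network) fits even randomly shuffled labels exactly, so zero training loss by itself can't distinguish learning from memorization:

```python
import random

# Degree n-1 Lagrange interpolation: n "parameters" for n points,
# so it fits ANY label assignment exactly -- including shuffled ones.
def lagrange(points):
    def predict(x):
        total = 0.0
        for i, (xi, yi) in enumerate(points):
            term = yi
            for j, (xj, _) in enumerate(points):
                if i != j:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return predict

random.seed(0)
xs = list(range(8))
labels = [0.0, 1.0] * 4
random.shuffle(labels)          # destroy any input-label structure

model = lagrange(list(zip(xs, labels)))
train_loss = sum((model(x) - y) ** 2 for x, y in zip(xs, labels))
print(train_loss)  # ~0: a perfect fit of arbitrary labels
```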

[+] lbj|7 years ago|reply
I can't claim to fully understand the proof, but these guys have done an amazing job of furthering our understanding of deep nets.