(no title)
statusfailed | 2 months ago
This is a little outside my area, but I think the relevant part of that abstract is "Gradient-based optimization follows horizontal lifts across low-dimensional subspaces in the Grassmannian Gr(r, p), where r ≪ p is the rank of the Hessian at the optimum"
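To make the "r ≪ p" part concrete, here's a minimal numpy sketch (my own illustration, not from the paper): in overparametrised least squares with n data points and p ≫ n parameters, the Hessian at any global minimum is X^T X / n, whose rank is at most n, so r ≪ p falls out for free.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 500            # n samples, p parameters: heavily overparametrised
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)

# Loss L(w) = ||Xw - y||^2 / (2n); its Hessian is constant: H = X^T X / n,
# and rank(H) <= min(n, p) = n, so the curvature lives in a tiny subspace.
H = X.T @ X / n

r = np.linalg.matrix_rank(H)
print(f"parameters p = {p}, Hessian rank r = {r}")   # r <= n = 20 << p = 500
```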
I think this question is super interesting though: why can massively overparametrised models still generalise?