item 46201608


tsurba | 2 months ago

Many discriminative models converge to the same representation space up to a linear transformation. So it makes sense that another linear transformation (like PCA) would be able to undo that transformation.

https://arxiv.org/abs/2007.00810

Without having properly read the linked article: if that's all this is, it's not a particularly new result. Nevertheless, this direction of proofs is imo at the core of understanding neural nets.
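The claim above can be illustrated with a toy sketch (my own, not from the paper): if two sets of representations differ only by an invertible linear transform, a linear map fitted by ordinary least squares recovers the alignment exactly. All names here are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "representations": model B's features are an invertible linear
# transform of model A's features (the hypothesized relationship).
n_samples, dim = 500, 16
reps_a = rng.normal(size=(n_samples, dim))
mixing = rng.normal(size=(dim, dim))  # random square matrix, invertible w.h.p.
reps_b = reps_a @ mixing

# Fit a linear map from B's space back onto A's by least squares.
# Since reps_b = reps_a @ mixing, the solution is mixing's inverse,
# so the alignment is exact up to floating-point error.
recovered, *_ = np.linalg.lstsq(reps_b, reps_a, rcond=None)
aligned = reps_b @ recovered

print(np.allclose(aligned, reps_a))
```

Of course, the interesting empirical question is whether *learned* representations from independently trained models are this close to linearly equivalent, which is what work like the linked paper probes.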


mlpro | 2 months ago

It's about weights/parameters, not representations.

tsurba | 2 months ago

True, good point. It's maybe not a straightforward consequence to extend from representations to weights.