timClicks | 6 months ago

It's been a while since I've played in the area, but is PCA still the go to method for dimensionality reduction?

wenc|6 months ago

PCA (essentially SVD) is the one that makes the fewest assumptions. It still works really well if your data is (locally) linear and more or less Gaussian. PLS is the regression analogue of PCA.
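To make the PCA-as-SVD point concrete, here's a minimal sketch (the function name and random data are mine, just for illustration): PCA scores are obtained by centering the data matrix and taking the SVD, then projecting onto the top-k right singular vectors.

```python
import numpy as np

def pca_svd(X, k):
    """Project X (n_samples x n_features) onto its top-k principal
    components via SVD of the centered data matrix."""
    Xc = X - X.mean(axis=0)                           # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)  # S is sorted descending
    return Xc @ Vt[:k].T                              # scores in the top-k basis

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
Z = pca_svd(X, 2)
print(Z.shape)  # (200, 2)
```

Because the singular values come back sorted, the first score column always carries at least as much variance as the second.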

There are also nonlinear techniques. I’ve used UMAP and it’s excellent (particularly if your data approximately lies on a manifold).

https://umap-learn.readthedocs.io/en/latest/

The most general purpose deep learning dimensionality reduction technique is of course the autoencoder (easy to code in PyTorch). Unlike the above, it makes very few assumptions, but this also means you need a ton more data to train it.
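A minimal autoencoder sketch in PyTorch, assuming random 20-D data with a 2-D bottleneck (layer sizes and training budget are illustrative, not tuned):

```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, n_in=20, n_latent=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_in, 8), nn.ReLU(), nn.Linear(8, n_latent))
        self.decoder = nn.Sequential(
            nn.Linear(n_latent, 8), nn.ReLU(), nn.Linear(8, n_in))

    def forward(self, x):
        return self.decoder(self.encoder(x))

torch.manual_seed(0)
X = torch.randn(256, 20)
model = AutoEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), X)  # reconstruction loss
    loss.backward()
    opt.step()

Z = model.encoder(X).detach()  # the 2-D codes are the reduced representation
print(Z.shape)  # torch.Size([256, 2])
```

After training, you throw away the decoder and keep the encoder's output as your low-dimensional representation.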

ChadNauseam|6 months ago

> PCA (essentially SVD) the one that makes the fewest assumptions

Do you mean it makes the *strongest* assumptions? "your data is (locally) linear and more or less Gaussian" seems like a fairly strong assumption. Sorry for the newb question as I'm not very familiar with this space.

Lerc|6 months ago

From 2019 but a decent overview by Leland McInnes https://www.youtube.com/watch?v=9iol3Lk6kyU

There's a newer method called PaCMAP which handles certain difficult cases better. It's not as robustly tested as UMAP, but that could be said of any new thing. I'm a little wary that it might be overfitted to common test cases. To my mind, PaCMAP feels like a partial solution on the way to a better approach.

The three-stage process of PaCMAP is asking either to be developed into a continuous system or to have an analytical reason/way found to conduct the phase changes.

wenc|6 months ago

Leland McInnes is amazing. He's also the author of UMAP.

baq|6 months ago

PCA is nice if you know relationships are linear. You also want to be aware of TSNE and UMAP.

wenc|6 months ago

A lot of relationships are (locally) linear so this isn’t as restrictive as it might seem. Many real-life productionized applications are based on it. Like linear regression, it has its place.

T-SNE is good for visualization and for seeing class separation, but in my experience it hasn't worked for me for dimensionality reduction per se (maybe I'm missing something). For me, it's more of a visualization tool.
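The visualization use case is usually just this (scikit-learn's implementation; the two synthetic blobs are mine, to give the embedding some separable structure):

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# two well-separated blobs in 10-D; t-SNE should keep them apart in 2-D
X = np.vstack([rng.normal(0, 1, (100, 10)),
               rng.normal(8, 1, (100, 10))])

# perplexity ~ effective neighborhood size; tune it per dataset
emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)
print(emb.shape)  # (200, 2)
```

You'd then scatter-plot `emb` colored by class label, rather than feed it into a downstream model.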

On that note, there’s a new algorithm that improves on T-SNE called PaCMAP which preserves local and global structures better. https://github.com/YingfanWang/PaCMAP