This paper imports one arbitrarily chosen aspect of cortical architecture, topological maps of function, and ignores every other aspect of biological neural tissue. The resulting models show lower performance for the same number of parameters; unsurprising, since they are more constrained than the baseline. They may be slightly more robust to pruning; also unsurprising, since they are more regularised. The figures appear to show individual seeds, with no statistical analysis in either the performance or the pruning comparisons, so the null hypothesis stands: there is no difference between toponets and the baseline. I would never let this paper be submitted by my team.
We haven't learned anything about the brain, or about ANNs.
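(For what it's worth, the seed-level check being asked for here is cheap to run. A minimal sketch using Welch's t-test across per-seed test accuracies; the numbers below are invented purely for illustration, not from the paper:)

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical per-seed test accuracies (%) -- made-up numbers for illustration.
toponet = [71.2, 70.8, 71.5, 70.9, 71.1]
baseline = [72.0, 71.7, 72.3, 71.9, 72.1]

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / sqrt(va / len(a) + vb / len(b))

t = welch_t(toponet, baseline)
print(f"Welch t = {t:.2f} across {len(toponet)} seeds")
```

With five seeds per condition this is the bare minimum needed to say whether the gap in the figures is anything more than seed noise.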
brrrrrm | 1 year ago

mayukhdeb | 1 year ago
> it’s not scientifically useful
Having structured weights in GPTs lets us localize and control various concepts and study phenomena like polysemanticity and superposition. Other scientific directions include sparse inference (already proven to work) and better model editing. It turns out topographic structure also helps these models better predict neural data, which is yet another direction we're exploring in computational neuroscience.