top | item 42884874

FrereKhan | 1 year ago

This paper imports one arbitrarily chosen aspect of cortical architecture (topographic maps of function) and ignores every other aspect of biological neural tissue. The resulting models show lower performance for the same number of parameters, which is not surprising, since they are more constrained than the baseline. They may be slightly more robust to pruning, which is also not surprising, since they are more regularised.

The figures appear to show individual seeds, with no statistical analysis in the performance or pruning comparisons, so the null hypothesis, that there is no difference between toponets and the baseline, stands. I would never let my team submit this paper.
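For what it's worth, the seed-level comparison being asked for is cheap to run. A minimal sketch using a two-sided permutation test on per-seed accuracies (the numbers below are placeholders I made up for illustration, not figures from the paper):

```python
import random
import statistics

def permutation_test(a, b, n_perm=10_000, seed=0):
    """Two-sided permutation test on the difference of group means.

    Returns the fraction of label shuffles whose mean difference is at
    least as extreme as the observed one (an estimate of the p-value).
    """
    rng = random.Random(seed)
    observed = statistics.mean(a) - statistics.mean(b)
    pooled = list(a) + list(b)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = statistics.mean(pooled[:len(a)]) - statistics.mean(pooled[len(a):])
        if abs(diff) >= abs(observed):
            count += 1
    return count / n_perm

# Hypothetical per-seed top-1 accuracies (placeholders, NOT the paper's data)
baseline = [0.762, 0.758, 0.765, 0.760, 0.759]
toponet  = [0.751, 0.749, 0.755, 0.748, 0.752]

p = permutation_test(toponet, baseline)
print(f"p = {p:.4f}")
```

With only a handful of seeds per condition, a permutation test like this is about the most assumption-free comparison available; if the paper's per-seed numbers were reported, anyone could run it.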

We haven't learned anything about the brain, or about ANNs.

brrrrrm | 1 year ago

this paper plays into some popular fantasy about the aesthetic of ANNs. it’s not scientifically useful

mayukhdeb | 1 year ago

If by popular fantasy you mean replicating the functional profiles of the visual and language cortices of the brain, then yes: these ideas in neuroscience are popular, but they are not fantasy. I encourage you to read up on functional organization in the brain; it's very fascinating.

> it’s not scientifically useful

Having structured weights in GPTs enables us to localize and control various concepts and study stuff like polysemanticity, superposition, etc. Other scientific directions include sparse inference (already proven to work) and better model editing. Turns out, topographic structure also helps these models better predict neural data, which is yet another direction we're exploring in computational neuroscience.
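A toy illustration of the localization point (my own sketch, not code from the paper): if hidden units are laid out on a 2D sheet and related features cluster spatially, then suppressing a concept reduces to zeroing a spatial neighborhood rather than hunting for scattered units. The grid size and radius here are arbitrary choices for the example:

```python
import math

GRID = 8  # hidden units arranged on a hypothetical 8x8 topographic sheet

def neighborhood_mask(cx, cy, radius):
    """Build an ablation mask over the sheet.

    1.0 keeps a unit's activation; 0.0 ablates it. Every unit within
    `radius` (Euclidean distance) of (cx, cy) is zeroed, which in a
    topographic model knocks out a spatially clustered feature group.
    """
    mask = []
    for i in range(GRID):
        row = []
        for j in range(GRID):
            dist = math.hypot(i - cx, j - cy)
            row.append(0.0 if dist <= radius else 1.0)
        mask.append(row)
    return mask

mask = neighborhood_mask(3, 3, 2.0)
ablated = sum(1 for row in mask for v in row if v == 0.0)
print(f"ablated {ablated} of {GRID * GRID} units")
```

In an unstructured network the same intervention would require first identifying the relevant units individually; spatial clustering is what turns "edit this concept" into a single contiguous mask.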