ericskiff | 1 year ago
We don’t yet know the reason for that structure, but it appears across species and across different regions of the brain.
It could be that a similarly modeled neuronal network would exhibit more pockets of specialized activity in response to certain inputs, and potentially even redundancies in specialization, allowing it to process the inputs in slightly different ways to reach ensemble/consensus-style outcomes. In some ways it sounds reminiscent of the mixture-of-experts approach.
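The mixture-of-experts analogy can be sketched in a few lines. This is a toy illustration only, not a model of the brain: the expert count, dimensions, and linear "experts" are all made-up assumptions, with a softmax gate blending the specialists into a consensus output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: 4 "expert" networks, each just a random linear map.
n_experts, d_in, d_out = 4, 8, 3
experts = [rng.normal(size=(d_in, d_out)) for _ in range(n_experts)]
gate_w = rng.normal(size=(d_in, n_experts))

def softmax(z):
    z = z - z.max()          # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def moe_forward(x):
    # The gate decides how much each specialized expert contributes,
    # producing an ensemble/consensus-style output.
    weights = softmax(x @ gate_w)                  # shape (n_experts,)
    outputs = np.stack([x @ w for w in experts])   # shape (n_experts, d_out)
    return weights @ outputs                       # weighted consensus

x = rng.normal(size=d_in)
y = moe_forward(x)
print(y.shape)  # (3,)
```

In real MoE models the gate is trained to route inputs to specialists, which is what makes the "redundant pockets of specialization" comparison tempting.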
It’s certainly interesting!
caseyy | 1 year ago
Anyway, this tendency seems to continue throughout the body in neural nexuses, and even outside the body in how we communicate our thoughts in social circles.
I also wonder whether language itself is deeply related to this structure. Zipf’s law describes the long tail of word frequencies, and semantic space models often show clusters of words and symbols. We have synonyms for most symbols we speak about, and often antonyms, but we can rarely think of something that sits in the middle between them. An LLM, though, could have parameters such that it produces semantically unrelated words “in the middle” if it was poorly fitted, that is, if it was bad at producing language. So it seems that language-based neural nets, at least, are necessarily tied to long-tail probability distributions.
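Zipf's law can be stated concretely: the r-th most frequent word occurs with frequency roughly proportional to 1/r. A minimal sketch of what that long tail implies (the vocabulary size here is an arbitrary assumption):

```python
# Zipf's law: frequency of the r-th most common word is proportional to 1/r.
n = 10_000  # hypothetical vocabulary size
weights = [1 / r for r in range(1, n + 1)]
total = sum(weights)
probs = [w / total for w in weights]  # normalized rank-frequency distribution

# A handful of head words carry most of the mass; the rest is a long tail.
top_100 = sum(probs[:100])
print(f"top 100 of {n} words cover {top_100:.0%} of usage")  # roughly 53%
```

The striking part is how heavy the head is: about 1% of the vocabulary accounts for over half of usage, while the remaining 99% is spread thinly across the tail.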
Perhaps many more things than we commonly think follow long-tail distributions when it comes to neural nets, real brains, and our society. If only the normal distribution weren’t so intoxicating to academia and we paid other probability distributions the same amount of attention…