Hebb actually talked about causation, not synchrony (firing together):
"When an axon of cell A is near enough to excite cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A ‘s efficiency, as one of the cells firing B, is increased”
Synchrony is extremely important, particularly for the formation of cortical columns and neural pruning.
But in spike-timing-dependent plasticity, where a connection is potentiated if the presynaptic neuron fires just before the postsynaptic neuron, the connection is actually depressed if the upstream and downstream neurons fire exactly synchronously. (There is a huge amount of variation in this across the brain, though.)
Note that there is also a mechanism for association between two presynaptic neurons. When those upstream neurons fire synchronously, the downstream neuron is probabilistically more likely to actually fire. When that occurs, the postsynaptic neuron will, as a result of Hebb's postulate, strengthen its connections to the synchronously firing neurons. So "cells that fire together wire together" is more true of presynaptic neurons than of pre-to-postsynaptic pairs (and the wiring together occurs through the postsynaptic neuron).
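A toy spike-timing-dependent plasticity (STDP) window makes the asymmetry concrete. This is a minimal sketch with invented constants (`A_PLUS`, `A_MINUS`, `TAU`), using one possible parameterization in which exact synchrony (delta_t = 0) falls on the depression side, as the comment describes; real STDP windows vary widely across brain regions.

```python
import math

# Hypothetical STDP window: delta_t = t_post - t_pre (ms).
# Positive delta_t (pre fires just before post) -> potentiation;
# delta_t <= 0 (synchronous or post-before-pre) -> depression.
A_PLUS, A_MINUS = 0.1, 0.12   # invented learning rates
TAU = 20.0                    # invented time constant (ms)

def stdp_dw(delta_t_ms):
    """Weight change for a single pre/post spike pairing."""
    if delta_t_ms > 0:                            # causal pairing
        return A_PLUS * math.exp(-delta_t_ms / TAU)
    return -A_MINUS * math.exp(delta_t_ms / TAU)  # synchronous or acausal

for dt in (10.0, 0.0, -10.0):
    print(dt, stdp_dw(dt))
```

Placing delta_t = 0 on the depression side is exactly the modeling choice described above; many published windows instead leave that point undefined or near zero.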
I find it strange that the author couldn't find this in a textbook. This is rather common material in a developmental neuroscience textbook or lecture. I've looked through two such books during my (only) course on the topic, and all three sources covered this material.
What did they say? Perhaps you misread what was in the textbooks? My understanding is that the author's questions are legitimate and still not fully answered.
The question of how neurons find each other to connect was recently studied with experimental connectomics (altering neurons and then mapping their synaptic circuits with electron microscopy) in this paper by Javier Valdes Aleman et al. 2019, https://www.biorxiv.org/content/10.1101/697763v1 , using Drosophila somatosensory axons and central interneurons as a model.
If the Disqus comments on the OP's website worked (I can never get the "post" button for comments after login), the above could have gone straight onto the page.
The connectedness of neurons in neural nets is usually fixed from the start (i.e. between layers, or somewhat more complicated in the case of CNNs etc.). If we could eliminate this and let neurons "grow" towards each other (like this article shows), would that enable smaller networks with similar accuracy? There's some ongoing research to prune weights by finding "subnets" [1], but I haven't found any method yet where the network grows connections itself. The only counterpoint I can come up with is that it probably wouldn't generate a significant speedup, because it defeats the use of SIMD/matrix operations on GPUs. Maybe we would need chips that are designed differently to speed up these self-growing networks?
I'm not an expert on this subject; does anybody have any insights on this?
[1] https://www.technologyreview.com/2019/05/10/135426/a-new-way...
I think this is a really interesting area of machine learning. Some efforts have been made in ideas that are tangential to this one. Lots of papers in neuroevolution deal with evolving topologies. NEAT is probably the prime example (http://nn.cs.utexas.edu/downloads/papers/stanley.ec02.pdf), and another paper I read recently, called PathNet, is different but very interesting: https://arxiv.org/abs/1701.08734
I experimented with networks where weights were removed if they did not contribute much to the final answer.
My conclusion was I could easily set >99% of weights to zero on my (fully connected) layers with minimal performance impact after enough training, but the training time went up a lot (effectively after removing a bunch of connections, you have to do more training before removing more), and inference speed wasn't really improved because sparse matrices are sloooow.
Overall, while it works out for biology, I don't think it will work for silicon.
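The iterate-prune-retrain loop described above can be sketched as follows. This is an illustrative skeleton, not the commenter's actual code: the commented-out `train_some_more` is a hypothetical stand-in for a few epochs of retraining, and each round prunes half of the surviving weights by magnitude.

```python
def prune_smallest(weights, mask, fraction):
    """Zero out the smallest-magnitude fraction of the still-alive weights."""
    alive = sorted(abs(w) for w, m in zip(weights, mask) if m)
    if not alive:
        return mask
    cutoff = alive[int(len(alive) * fraction)]  # magnitude threshold (fraction < 1)
    return [m and abs(w) >= cutoff for w, m in zip(weights, mask)]

# Toy loop: prune 50% of survivors per round, "retraining" in between.
weights = [0.9, -0.05, 0.4, 0.01, -0.7, 0.2, 0.03, -0.6]
mask = [True] * len(weights)
for round_ in range(2):
    mask = prune_smallest(weights, mask, 0.5)
    # train_some_more(weights, mask)  # hypothetical: retrain before pruning again
    weights = [w if m else 0.0 for w, m in zip(weights, mask)]

print(weights)  # most entries are now zero
```

The sketch also shows why training time balloons: each extra increment of sparsity requires another full prune-retrain round rather than a single pass.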
The only reason we architect ANNs the way we do is optimization of computation. The bipartite graph structure is optimized for GPU matrix math. Systems like NEAT have not been used at scale because they are a lot more expensive to train, and the trained network is more expensive to run. ASICs and FPGAs have a chance to utilize a NEAT-generated network in production, but we still don't have a computer well suited to training a NEAT network.
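To see why an arbitrary evolved topology is awkward for GPUs, consider how a NEAT-style network has to be evaluated: node by node in topological order over an irregular edge list, rather than as one dense matrix multiply per layer. A minimal sketch with a made-up five-node genome (not NEAT's actual data structures):

```python
import math

# NEAT-style genome: arbitrary directed edges (src, dst, weight), acyclic here.
# Nodes 0 and 1 are inputs; 2 and 3 are hidden; 4 is the output.
edges = [(0, 2, 0.5), (1, 2, -1.0), (1, 3, 0.8),
         (2, 4, 1.2), (3, 4, -0.4), (0, 4, 0.3)]
topo_order = [2, 3, 4]  # evaluation order for non-input nodes

def forward(x0, x1):
    act = {0: x0, 1: x1}
    for node in topo_order:
        total = sum(w * act[src] for src, dst, w in edges if dst == node)
        act[node] = math.tanh(total)   # per-node activation, no batched matmul
    return act[4]

print(forward(1.0, 0.5))
```

Each node gathers a scattered subset of activations, so evaluation is pointer-chasing over sparse edges rather than the dense bipartite matmul GPUs are built for.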
When Hebb talks about "reverberation" in neural circuits, he is still thinking ahead of our current knowledge of oscillatory neurodynamics. Here he speculates about the short-term memory trace that is held dynamically, prior to physical changes in the synapse:
"It might be supposed that the mnemonic trace is a lasting pattern of reverberatory activity without fixed locus, like a cloud formation or eddies in a millpond."
From Hebb's 1949 "The Organization of Behavior"
Main point:
"(..) if the target neuron already has too many connections, it will tend to remove the weakest ones, and this includes the most recent ones. The scaling goes both ways after all – it goes for more synapses when it starts with too few, but for less, if it starts with too many.
But synaptic scaling is not everything. As it turns out, the tips of the growth cone constantly produce structures called filopodia, and these react to specific chemical attractants and repellents. These chemicals are produced by both cells at the target area, and by so-called guidepost cells along the way. There are suggestions that the system for such targeting is fairly robust, especially in early development (and its limitations in later life might explain why spinal cord injuries and the like are so hard to fix)."
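The attractant/repellent mechanism quoted above can be caricatured as gradient following. This is a hypothetical sketch, not a biophysical model: the growth cone samples an invented attractant concentration field (an inverse-square-distance falloff around a target cell) and steps toward higher concentration, the way filopodia are thought to bias outgrowth.

```python
# Hypothetical chemotaxis sketch: the growth cone takes small steps in the
# direction of increasing attractant concentration. The field below is an
# invented stand-in for a real chemical gradient.
TARGET = (10.0, 5.0)  # position of the attractant-producing target cell

def concentration(x, y):
    d2 = (x - TARGET[0]) ** 2 + (y - TARGET[1]) ** 2
    return 1.0 / (1.0 + d2)

def grow(pos, steps=200, step=0.1, eps=1e-3):
    x, y = pos
    for _ in range(steps):
        # filopodia "sample" the field to estimate the local gradient
        gx = (concentration(x + eps, y) - concentration(x - eps, y)) / (2 * eps)
        gy = (concentration(x, y + eps) - concentration(x, y - eps)) / (2 * eps)
        norm = (gx * gx + gy * gy) ** 0.5 or 1.0
        x, y = x + step * gx / norm, y + step * gy / norm
    return x, y

end = grow((0.0, 0.0))
print(end)  # much closer to TARGET than the starting point was
```

Repellents would enter the same picture as a second field subtracted from the first; guidepost cells correspond to intermediate local maxima that stage the journey.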
This makes me want to know how quickly this type of growth happens. Is it on the order of seconds? Minutes? Hours? Days? Is this why, when you learn something, take a break, and come back later, everything makes more sense?
I've noticed a strange growth curve when learning a physical skill. You suck at first, then quickly reach some kind of milestone, then get worse before you get better. It feels like my brain is attempting to delegate some of the motor activity to lower levels before they are 'ready', but in fact it might be an essential part of training those neurons.
This is a bit of an open problem, to the point of being controversial. I'd hesitate to say anyone has a real answer even though we certainly have real experimental data. The philosophical takes range from:
* You never enter the same room twice.
* Your brain partially re-wires every time you sleep.
* Your brain rewires, but the way it rewires is surprisingly predictable and we can track the dynamics.
* Your brain is rewiring literally every second, but not every rewiring is functional - does this imply an implicit robustness?
Growth cones are only relevant during development and in regenerating neurons, which are not common. Everyday neurons do, however, continuously extend (and retract) filopodia, which may reach nearby axon terminals and eventually form a synapse, thus causing synaptic rewiring. "Synaptic scaling" usually refers to a homeostatic, uniform up- or down-scaling of synaptic weights and is not really relevant to rewiring.
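The distinction matters because homeostatic scaling is multiplicative and uniform: it adjusts overall excitability without changing which inputs are relatively strong, i.e. without rewiring anything. A minimal sketch (the rates and weights are invented numbers, and real scaling acts slowly, not in one jump):

```python
def scale_synapses(weights, current_rate, target_rate):
    """Uniform multiplicative scaling toward a target firing rate.
    Every weight moves by the same factor, so their ratios are unchanged."""
    factor = target_rate / current_rate
    return [w * factor for w in weights]

w = [0.2, 0.8, 0.4]
scaled = scale_synapses(w, current_rate=10.0, target_rate=5.0)  # cell too active
print(scaled)  # [0.1, 0.4, 0.2]
# Relative strengths are preserved (within float error), so no input has
# been rewired relative to the others -- only the overall gain changed.
```

Contrast with the STDP and pruning sketches earlier in the thread, which change individual connections and therefore do alter the wiring diagram.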
What if something about the electrical signal attracts growth in certain direction, toward other signals firing at the same time?
Or what if some neuron pairs that are not yet connected share quantum entangled structures, that if activated simultaneously ... but still how does direction occur?
What if neurons emit light (is that why you can stimulate them with light?), and what if they can somehow detect the faint light from other neurons, work out the direction it comes from, and grow towards that?
It is based on NEAT (as other commenters mentioned) and also ties in some discussion of the Lottery Ticket Hypothesis as you mentioned.
“Synaptic Specificity, Recognition Molecules, and Assembly of Neural Circuits” by Sanes and Zipursky
https://doi.org/10.1016/j.cell.2020.04.008
For me, the hard part has always been understanding how this whole thing is orchestrated on a cellular and molecular level.
enhance transitive closure on a temporal window
plus the dual negation, whatever that is
under the space-time corollary of De Morgan's Laws:
atrophy atemporal uncorrelated direct connection