gzer0 | 1 year ago
The key insight we discovered was that explicitly enforcing brain-like topographic organization (as some academic work attempts, such as this one here) isn't necessary; what matters is having the right functional components that parallel biological visual processing. In our experience, the key elements of biological visual processing, like hierarchical feature extraction and temporal integration, emerge naturally when you build architectures that have to solve real visual tasks.
The brain's organization serves its function, not the other way around. This was validated by the real-world performance of our synthetic visual cortex in the Tesla FSD stack.
Link to the 2021 Tesla AI day talk: https://www.youtube.com/live/j0z4FweCy4M?t=3010s
lukan | 1 year ago
It is amazing that the synthetic pipeline that was built to mimic the brain seems to mimic the brain?
That sounds a bit tautological, and otherwise I doubt we have really understood exactly how our brain interprets the world.
In general this is definitely interesting research, but worded like this, it smells a bit hyped to me.
Shorel | 1 year ago
We can think of a solution space with potentially many good solutions to the vision problem, and we can, in science-fiction-like speculation, imagine that the other solutions will be very different and surprise us.
Then this experiment shows its solution is the same one we already knew, and that's it.
Then there aren't many good potential solutions, there is only one, and the ocean of possibilities becomes the pond of this solution.
nickpsecurity | 1 year ago
There are at least three fields in this:
1. Machine learning using non-neurological techniques (most stuff). These use a combination of statistical algorithms stitched together with hyperparameter tweaking. Also, usually global optimization by heavy methods like backpropagation.
2. “Brain-inspired” or “biologically accurate” algorithms that try to imitate the brain. They sometimes include evidence their behavior matches experimental observations of brain behavior. Many of these use complex neurons, spiking nets, and/or local learning (Hebbian).
(Note: There is some work on hybrids such as integrating hippocampus-like memory or doing limited backpropagation on Hebbian-like architectures.)
3. Computational neuroscience which aims to make biologically-accurate models at various levels of granularity. Their goal is to understand brain function. A common reason is diagnosing and treating neurological disorders.
Making an LLM like the brain would require use of brain-inspired components, multiple systems specialized for certain tasks, memory integrated into all of them, and a brain-like model for reinforcement. Imitating God’s complex design is simply much more difficult than combining proven algorithms that work well enough. ;)
That said, I keep collecting work on both efficient ML and brain-inspired ML. I think some combination of the techniques might have high impact later. I think the lower training costs of some brain-inspired methods, especially Hebbian learning, justify more experimentation by small teams with small GPU budgets. Might find something cost-effective in that research. We need more of it on common platforms, too, like HuggingFace libraries and cheap VMs.
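For anyone unfamiliar, the Hebbian local learning mentioned above updates each weight from the activity of the two units it connects, with no global error signal. A minimal sketch of the rule (the learning rate, input pattern, and 4-unit size are arbitrary toy choices for illustration, not from any real system):

```python
import numpy as np

rng = np.random.default_rng(0)

def hebbian_step(w, x, eta=0.01):
    """One local Hebbian update: delta_w = eta * y * x ("fire together, wire together")."""
    y = w @ x                 # linear neuron output
    return w + eta * y * x    # weight grows where input and output co-activate

w = rng.normal(size=4) * 0.1             # small random initial weights
x = np.array([1.0, 0.0, 1.0, 0.0])       # repeated input pattern

for _ in range(100):
    w = hebbian_step(w, x)

# Weights on the active inputs (0 and 2) grow; inactive ones (1 and 3) never change.
print(w)
```

Note the update only ever touches weights whose input was active, which is what makes it "local" and cheap compared to backpropagation; real uses add normalization (e.g. Oja's rule) to stop the weights growing without bound.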
trhway | 1 year ago
For the lower level - word embeddings (word2vec, "King – Man + Woman = Queen") - one can see a similarity
https://www.nature.com/articles/d41586-019-00069-1 and https://gallantlab.org/viewer-huth-2016/
"The map reveals how language is spread throughout the cortex and across both hemispheres, showing groups of words clustered together by meaning."
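The "King – Man + Woman = Queen" arithmetic can be sketched with hand-made toy vectors (these 3-d embeddings and the "apple" distractor are invented for illustration; real word2vec vectors are learned and have hundreds of dimensions):

```python
import numpy as np

# Toy 3-d embeddings: roughly (royalty, male-ness, female-ness). Invented for the demo.
emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
    "apple": np.array([0.0, 0.5, 0.5]),   # unrelated distractor word
}

def nearest(v, exclude=()):
    """Return the stored word whose embedding has the highest cosine similarity to v."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max((w for w in emb if w not in exclude),
               key=lambda w: cos(emb[w], v))

# The classic analogy: excluding the query words is standard practice.
v = emb["king"] - emb["man"] + emb["woman"]
print(nearest(v, exclude={"king", "man", "woman"}))  # -> queen
```

The point is just that directions in the vector space carry meaning (here the "male/female" axis), which is the low-level analogue of the meaning-clustered cortical map described in the linked article.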