canjobear|4 years ago
Neural networks originated from coarse-grained analogies to a 1940s understanding of neurons. That's about where the neuroscience connection ended. People have tried to make connections since then, but it's almost always post hoc.

bendee983|4 years ago
If you listen to recent talks by Hinton (capsule networks), LeCun (self-supervised learning), and Bengio (system 2 deep learning), as well as others, you'll find plenty of references to neuroscience, psychology, cognitive science, etc. There are always implementation differences, but the inspiration from brains is always there. The point of the book (which might be wrong, btw) is that the brain itself is an agent of the gene, evolved out of the need for better survival mechanisms. It therefore suggests that anything modeled after the brain is, by extension, an agent of the main source of human intelligence (because it serves the goals of humans) and not intelligent by itself.

robbedpeter|4 years ago
Hierarchical modeling. Spiking neural nets. "Fire together, wire together" (Hebbian learning). Convolution. Boltzmann machines. Autoencoders. LSTM gating. Attention, transformers, GANs, etc.
GOFAI might not pull inspiration from the brain, but connectionist-style AI, which represents the vast majority of the AI being built and deployed, draws almost exclusively on the brain for inspiration.
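For readers unfamiliar with the "fire together, wire together" item in that list: it refers to Hebbian learning, where a connection is strengthened in proportion to the correlated activity of the neurons on either side of it. A minimal sketch, assuming a single linear neuron and a hand-picked learning rate (the function name and parameters here are illustrative, not from any particular library):

```python
import numpy as np

def hebbian_update(w, x, eta=0.1):
    """One Hebbian step: strengthen each weight in proportion to the
    product of pre-synaptic input x_i and post-synaptic output y."""
    y = w @ x                # post-synaptic activity (linear neuron)
    return w + eta * y * x   # delta w_i = eta * y * x_i

# Repeatedly presenting the same input pattern makes the weight vector
# grow along that pattern's direction -- neurons that fire together
# end up more strongly wired together.
x = np.array([1.0, 0.0, -1.0])   # fixed input pattern
w = np.array([0.01, 0.0, 0.0])   # small deterministic initial weights
for _ in range(20):
    w = hebbian_update(w, x)
```

Note the classic caveat this sketch also exhibits: plain Hebbian updates have no decay term, so the weights grow without bound, which is why practical variants (e.g. Oja's rule) add normalization.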