This graph is a high-level abstraction, of course. How exactly neurons store information is interesting, but hardly relevant here. My guess is that each symbol is stored in a very sparse subset of neurons, with each neuron acting a bit like a node in a DHT. Together these neurons implement a fast DHT where a word2vec graph node acts as a key. On top of that, this "wet DHT" can quickly find nearby keys, i.e. it can instantly return all neighbors of word2vec("apple").

I think ANNs implement only the word2vec function that translates images or sounds into symbols and vice versa.
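The "return all neighbors" idea can be sketched as a nearest-neighbor lookup over embedding vectors. This is a toy illustration, not a real word2vec model: the vectors and vocabulary below are made up, and a real system would use learned embeddings and an approximate index rather than a brute-force scan.

```python
import math

# Made-up stand-in for word2vec: each symbol maps to a small dense vector.
# In a real model these vectors would be learned from data.
EMBEDDINGS = {
    "apple":  [0.9, 0.1, 0.0],
    "pear":   [0.8, 0.2, 0.1],
    "banana": [0.7, 0.3, 0.0],
    "car":    [0.0, 0.9, 0.8],
    "truck":  [0.1, 0.8, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def neighbors(word, k=2):
    """Return the k symbols whose vectors lie closest to `word`'s vector."""
    query = EMBEDDINGS[word]
    scored = [(cosine(query, v), w) for w, v in EMBEDDINGS.items() if w != word]
    scored.sort(reverse=True)
    return [w for _, w in scored[:k]]

print(neighbors("apple"))  # the fruit vectors cluster together: ['pear', 'banana']
```

The brute-force scan here is O(n) per query; the "wet DHT" hypothesis above amounts to claiming the brain does this lookup in roughly constant time, the way structures like locality-sensitive hashing or HNSW indexes do in software.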