0x7cfe | 7 years ago
I'm not very familiar with their theory and technology, so please take my opinion with a grain of salt.
Both projects are inspired by the real brain. The differences are in perspective.
Here's a quote from their description:
"Every cortical column learns models of complete objects. They achieve this by combining input with a grid cell-derived location, and then integrating over movements (see Hawkins et al., 2017; Lewis et al., 2018 for details). This suggests a modified interpretation of the cortical hierarchy, where complete models of objects are learned at every hierarchical level, and every region contains multiple models of objects".
If I understood correctly, they state that each minicolumn remembers how an object is represented at a particular location, meaning that each minicolumn has its own memory.
"For example, there is no single model of a coffee cup that includes what a cup feels like and looks like. Instead there are 100s of models of a cup. Each model is based on a unique subset of sensory input within different sensory modalities."
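As a toy illustration of this "many models" reading (all names here are hypothetical; this is a sketch of one interpretation, not Numenta's actual code): each column stores its own object memory as a set of (location, feature) pairs from its unique sensory patch, and recognition is a vote across columns.

```python
# Toy sketch of the "many models per column" idea. Each column learns
# its OWN model of an object as a set of (location, feature) pairs.
# All names are hypothetical illustrations, not Numenta's API.
from collections import defaultdict

class Column:
    def __init__(self):
        self.models = defaultdict(set)  # object name -> {(location, feature)}

    def learn(self, obj, location, feature):
        self.models[obj].add((location, feature))

    def candidates(self, location, feature):
        # Objects consistent with this sensed (location, feature) pair.
        return {o for o, pairs in self.models.items()
                if (location, feature) in pairs}

def recognize(columns, observations):
    # Each column votes; intersecting the votes narrows the object set.
    possible = None
    for col, (loc, feat) in zip(columns, observations):
        votes = col.candidates(loc, feat)
        possible = votes if possible is None else possible & votes
    return possible

# Two columns, each holding its own copy of "cup" and "bowl" models.
cols = [Column(), Column()]
for col in cols:
    col.learn("cup", "rim", "curved")
    col.learn("cup", "handle", "loop")
    col.learn("bowl", "rim", "curved")

print(recognize(cols, [("rim", "curved"), ("handle", "loop")]))  # {'cup'}
```

The point the quote makes is visible here: the cup model is duplicated in every column, so N columns hold N models of the same object.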
This is very different from our approach. In our case, minicolumns are independent context processors: each maps the input stimulus to an interpretation and then uses its local memory to estimate the validity of that interpretation. The key difference is that the whole cortical area has a single model of the object, which is recognized in many contexts. The idea of a context as a set of interpretation rules is essential to our theory.
So, to put it simply: instead of remembering the cup in every possible scenario, we remember only one concept of a cup and then use the context space to find the right interpretation of the input. This allows us to train the model on limited input and, more importantly, to transfer knowledge between contexts, i.e. to think by analogy.
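A minimal sketch of the "one model, many contexts" idea as described above (all names are hypothetical, not the project's actual code): one shared concept of a cup, plus per-context interpretation rules that map raw stimuli into the shared feature vocabulary.

```python
# Toy sketch of "one model, many contexts": a SINGLE shared concept
# of a cup, plus per-context interpretation rules that translate raw
# stimuli into the shared feature vocabulary. Hypothetical names.

# One shared model: canonical features of the concept "cup".
CUP = {"container", "handle"}

# A context is a set of interpretation rules: raw stimulus -> shared feature.
CONTEXTS = {
    "vision": {"hollow cylinder": "container", "loop shape": "handle"},
    "touch":  {"curved wall": "container", "graspable arc": "handle"},
}

def interpret(context, stimuli):
    """Map raw stimuli to shared features using one context's rules."""
    rules = CONTEXTS[context]
    return {rules[s] for s in stimuli if s in rules}

def is_cup(context, stimuli):
    # Recognition checks the interpreted input against the single model;
    # no per-context copy of the cup concept is ever stored.
    return interpret(context, stimuli) == CUP

print(is_cup("vision", ["hollow cylinder", "loop shape"]))  # True
print(is_cup("touch", ["curved wall", "graspable arc"]))    # True
```

Because the cup concept exists once, adding a new context only requires new interpretation rules, which is what makes the transfer-by-analogy claim plausible in this toy form.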
P.S.: You may also be interested in the discussion taking place on Reddit: https://www.reddit.com/r/artificial/comments/amogl2/brain_in... Hopefully this helps you see the difference.
p1esk | 7 years ago
It's strange that you're not familiar with an actively developed, widely known, 15-year-old open-source project with goals similar to your initiative's. It's as if you wanted to create an OS similar to Linux without bothering to learn much about Linux first. As an example of what I mean, you mention things like "brain is digital" and "combinatorial space", which seem to be a rediscovery of Numenta's SDRs. Also, I don't think your way of processing an object in different contexts is fundamentally different from Numenta's, based on what you described; however, it's hard to say without looking at your code.
To avoid reinventing the wheel, I suggest you engage in some discussion on Numenta's forums, or at least read their papers. It's quite possible that you might discover something they missed. But if you don't know what they did, you will end up rediscovering a lot of what they didn't miss.
0x7cfe | 7 years ago