ratedgene | 1 year ago
Maybe it's like:
1. Intention, context
2. Attention scanning for components
3. Attention network discovery
4. Rescan for missing components
5. If no relevant context exists or is found, fall back
6. Learned parameters are initially greedy
7. Storage of parameters gets reduced over time by other contributors
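The steps above could be sketched roughly like this. Everything here is invented for illustration (the scoring rule, the word-overlap "network", the fallback shape are all assumptions, not an actual implementation):

```python
def scan_for_components(context, intention):
    # Step 2: keep context items sharing a word with the stated intention.
    wanted = intention.split()
    return [c for c in context if any(w in c.split() for w in wanted)]

def discover_network(components):
    # Step 3: link components that share any word (a toy "attention network").
    links = []
    for i, a in enumerate(components):
        for b in components[i + 1:]:
            if set(a.split()) & set(b.split()):
                links.append((a, b))
    return links

def pipeline(intention, context):
    components = scan_for_components(context, intention)   # step 2
    network = discover_network(components)                 # step 3
    missing = [c for c in context if c not in components]  # step 4: rescan
    if not components:                                     # step 5: fallback
        return {"fallback": True, "network": [], "unmatched": context}
    return {"fallback": False, "network": network, "unmatched": missing}

result = pipeline(
    "route a message",
    ["route table lookup", "route message", "disk cache"],
)
print(result["fallback"], len(result["network"]))  # False 1
```

Steps 6 and 7 (greedy initial parameters, parameter storage pruned by other contributors) would live outside a single pass like this, in whatever learning loop sits around it.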
I guess this relies on the tough parts: induction, deduction, and abductive reasoning.
Can we fake reasoning to test hypotheses that alter the weights of whatever model we use for reasoning?
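One cheap way to probe that question: generate "faked" reasoning chains and check whether altering a weight shifts the outcome in the hypothesized direction. This is a toy sketch; the model, the modes, and the scoring are all made up for illustration, not a real reasoning benchmark:

```python
import random

random.seed(0)

# Toy "reasoning model": a weight per reasoning mode.
weights = {"induction": 0.5, "deduction": 0.3, "abduction": 0.2}

def score(chain, w):
    # A chain is a list of (mode, strength) pairs; score is a weighted sum.
    return sum(w[mode] * strength for mode, strength in chain)

def fake_chain(n=5):
    # "Faked" reasoning: random modes with random strengths.
    modes = list(weights)
    return [(random.choice(modes), random.random()) for _ in range(n)]

chains = [fake_chain() for _ in range(100)]
baseline = sum(score(c, weights) for c in chains) / len(chains)

# Hypothesis under test: up-weighting deduction raises the mean score.
altered = dict(weights, deduction=0.6)
shifted = sum(score(c, altered) for c in chains) / len(chains)

print(shifted > baseline)  # True
```

The same pattern would apply to a real model: hold the fake inputs fixed, perturb one set of weights, and compare outputs against the unperturbed baseline.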