Tv9m | 3 years ago
But I've only seen this done with a single model. Sometimes it gets prompted to act like a different agent in different contexts, or given API access to external tools, but it's still just one set of weights.
toss1 | 3 years ago
While Minsky & Papert's book on Perceptrons was enormously destructive, I think there is something to the general concept of Minsky's Society of Mind: that multiple sub-calculating 'agents' collude to actually produce real cognition.
We aren't doing conscious reasoning about the edges detected in the first couple layers of our visual cortex (which we can't really even access, 'tho I think Picasso maybe could). We're doing reasoning about the concepts of the people or objects or abstract concepts or whatever many layers up. The first layers are highly parallel - different parts of the retina connecting to different parts of the visual cortex, and then starting to abstract out edges, zones, motion, etc. and then synthesize objects, people, etc.
I think we need to take a GPT and a Stable Diffusion and some yet-to-be-built 3D spatial machine learning/reasoning engine, and start combining them, then adding more layer(s) synthesizing about that, and maybe that'll get closer to reasoning...
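The combination being described could be sketched roughly as follows. This is a toy illustration only: each specialized model is stubbed out as a plain function, and all names here (`text_agent`, `vision_agent`, `spatial_agent`, `synthesize`, the confidence values) are invented for the example, not real APIs.

```python
# Toy sketch of a "society of models": several specialized models (stubbed
# as functions) each interpret the same input, and a higher synthesis layer
# reasons over their combined outputs. All names/values are hypothetical.

from dataclasses import dataclass

@dataclass
class AgentReport:
    agent: str
    interpretation: str
    confidence: float

def text_agent(prompt: str) -> AgentReport:
    # stand-in for a language model like GPT
    return AgentReport("text", f"caption for: {prompt}", 0.8)

def vision_agent(prompt: str) -> AgentReport:
    # stand-in for an image model like Stable Diffusion
    return AgentReport("vision", f"rendered scene of: {prompt}", 0.6)

def spatial_agent(prompt: str) -> AgentReport:
    # stand-in for the yet-to-be-built 3D spatial reasoning engine
    return AgentReport("spatial", f"3D layout of: {prompt}", 0.4)

def synthesize(reports: list[AgentReport]) -> str:
    # trivial synthesis layer: pick the most confident interpretation,
    # though a real version would fuse all of them, not just select one
    best = max(reports, key=lambda r: r.confidence)
    return f"{best.agent} view: {best.interpretation}"

reports = [f("a cat on a chair") for f in (text_agent, vision_agent, spatial_agent)]
print(synthesize(reports))
```

The interesting part isn't the selection rule (which is deliberately trivial here) but the shape: lower-level specialists running in parallel on the same input, with further layers synthesizing over their reports, mirroring the retina-to-edges-to-objects pipeline described above.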