apinstein | 1 year ago
So the lack of that sensor will cause the brain to develop poor representations of motion in 3D space.
How the lack of those representations would affect other representations is less clear, because the fusion between the LLM (which similarly lacks an embodied world-model representation) and the robot AI (which presumably has one) obviously works quite well.
Now, it's possible that the two models are just inter-communicating via their own features (apple the concept and apple the image/object) and connecting those together. The implication would be that there are benefits to training separately and then connecting the models post-training to bridge any gaps in the learned representations (roughly the kind of frozen-backbone bridging sketched at the end of this comment).
However, I'd think that ultimately a model trained simultaneously on more sensory input rather than less will end up with a better, more efficient world model, with more useful and interesting cross-connections between that space and applied uses in non-physical domains.
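That "post-training connection" idea is roughly what CLIP-style alignment or LLaVA-style projection layers do. Here's a minimal PyTorch sketch, purely illustrative: both backbones are frozen stand-ins (the real thing would load pretrained checkpoints), and the dimensions and cosine-alignment loss are my assumptions, not anyone's actual recipe.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-ins for two separately pretrained models. In practice these would be
# frozen checkpoints (e.g. a vision encoder and an LLM's text-embedding stack);
# the shapes here are made up for illustration.
vision_encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 512))
llm_embed_dim = 768

# The only trainable piece: a small bridge mapping vision features into the
# LLM's embedding space, learned post hoc while both backbones stay frozen.
bridge = nn.Linear(512, llm_embed_dim)

for p in vision_encoder.parameters():
    p.requires_grad = False

optimizer = torch.optim.AdamW(bridge.parameters(), lr=1e-4)

# One toy alignment step: pull the projected image embedding toward the frozen
# LLM's embedding of the matching caption -- "apple the object" meets
# "apple the concept" in a shared space.
images = torch.randn(8, 3, 32, 32)                  # dummy image batch
caption_embeddings = torch.randn(8, llm_embed_dim)  # stand-in for frozen LLM text embeddings

image_tokens = bridge(vision_encoder(images))
loss = 1 - F.cosine_similarity(image_tokens, caption_embeddings).mean()
loss.backward()
optimizer.step()
```

The point of the design is that gradients only flow through `bridge`: each model's learned representation stays intact, and the small connection layer just learns the mapping between them.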