80hd | 1 year ago
Use vector embeddings to represent each task as a story: an abstraction of 1. the past, 2. the present, and 3. the future, plotted on a kind of global "story map".
Each embedding would be generated from all available sense inputs at a point in time. The most useful embedding algorithm would be able to combine sight, hearing, internal monologue, visual imagination, etc. into a single point on a high-dimensional map.
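One hedged way to read "combine ... into one point": embed each modality separately, then fuse the per-modality vectors into a single vector. The fusion step below (normalize, concatenate, renormalize) is a placeholder of my own, not anything the comment specifies; a learned joint encoder would presumably do better.

```python
import numpy as np

def fuse_modalities(embeddings):
    """Fuse per-modality embeddings (sight, hearing, inner monologue, ...)
    into one point on the 'story map'.

    Hypothetical approach: L2-normalize each modality so no single sense
    dominates, concatenate, then normalize the result to unit length.
    """
    parts = [np.asarray(e, float) / np.linalg.norm(e) for e in embeddings]
    fused = np.concatenate(parts)
    return fused / np.linalg.norm(fused)

# toy per-modality embeddings of different dimensionality
sight = [0.2, 0.9, 0.1]
sound = [0.5, 0.5]
point = fuse_modalities([sight, sound])  # one unit vector on the map
```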
At each time step, find the closest successful "memory" (based on the embedding of 1+2+3) and do some LLM exploration to adapt that memory to the new, novel situation.
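The retrieval step is essentially nearest-neighbor search over past successes. A minimal sketch, assuming memories are stored as (embedding, story) pairs and similarity is cosine (both assumptions of mine, not stated in the comment):

```python
import numpy as np

def closest_success(query, memories):
    """Return the successful memory whose combined past+present+future
    embedding is most similar (by cosine) to the current situation.

    memories: list of (embedding, story) pairs for successful attempts.
    """
    query = np.asarray(query, float)

    def cos(e):
        e = np.asarray(e, float)
        return float(np.dot(e, query) / (np.linalg.norm(e) * np.linalg.norm(query)))

    return max(memories, key=lambda m: cos(m[0]))

memories = [([1.0, 0.0], "story A"), ([0.0, 1.0], "story B")]
best = closest_success([0.9, 0.1], memories)  # picks "story A"
```

A real system would swap the linear scan for an approximate nearest-neighbor index once the map grows.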
Attempt the new "story" and do something like A* to get closer to the desired "future", tweaking the story each time and plotting failed attempts on the embedding map.
The theory being that, over time, the map becomes populated with successful attempts, and the embedding can abstract between similar situations based on 1+2+3.
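The attempt-tweak-record loop above can be sketched as a toy search. Everything here is an assumption for illustration: the "tweak" is random perturbation standing in for LLM adaptation, the `simulate` callback stands in for actually acting out the story, and the stopping tolerance is arbitrary. The one faithful detail is that every attempt, failed or not, gets recorded on the map.

```python
import numpy as np

rng = np.random.default_rng(0)

def attempt_loop(start_story, desired_future, simulate, steps=50, tol=0.1):
    """Greedy search toward a desired 'future' embedding.

    Each iteration: act out the current story, measure distance to the
    desired future, record the attempt (success or failure) on the map,
    then tweak the best story found so far and try again.
    """
    story_map = []  # (story_embedding, reached_future, succeeded)
    story = np.asarray(start_story, float)
    desired = np.asarray(desired_future, float)
    best_story, best_dist = story, np.inf
    for _ in range(steps):
        reached = simulate(story)
        dist = float(np.linalg.norm(reached - desired))
        succeeded = dist < tol
        story_map.append((story.copy(), reached, succeeded))
        if dist < best_dist:
            best_story, best_dist = story.copy(), dist
        if succeeded:
            break
        # stand-in for LLM adaptation: perturb the best story so far
        story = best_story + rng.normal(0.0, 0.2, size=story.shape)
    return best_story, best_dist, story_map

# toy world: acting out a story lands you near the story's own embedding
reached_future = lambda s: s * 0.9
story, dist, history = attempt_loop([0.0, 0.0], [1.0, 1.0], reached_future)
```

An actual A*-style version would expand multiple candidate tweaks per step and use the map of past failures to prune them, rather than this single greedy chain.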
I'm not the guy to implement it, and I imagine new models trained with a "reasoning step" are doing something similar at training time.
80hd | 1 year ago
If we model "situations" in AI in a similar way, my intuition tells me it would be similarly useful.