I feel it is interesting, but not ideal. I really think that if models could be less linear and process over time in latent space, you'd get something much more akin to thought. I've messed around with attaching reservoirs at each layer using hooks, with interesting results (mainly overfitting), but it feels like such a limitation to have all of the model's context/memory stuck as tokens when latent space is where the richer interaction lives. I'd love to see more done where thought over time mattered and the model could mull over the question a bit before being obligated to crank out tokens. Not an easy problem, but an interesting one.
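For anyone curious what "attaching reservoirs at each layer using hooks" might look like, here is a minimal PyTorch sketch: a fixed random echo-state-style reservoir is mixed into each transformer block's output via forward hooks, so activations get recurrently processed along the sequence. The `Reservoir` class, the GPT-2 target, and all hyperparameters are illustrative assumptions, not the commenter's actual setup.

```python
# Minimal sketch (assumptions, not the commenter's code): hook a fixed random
# "reservoir" into every transformer block so layer activations are recurrently
# mixed over the sequence in latent space.
import torch
import torch.nn as nn
from transformers import GPT2Model

class Reservoir(nn.Module):
    """Echo-state-style pool: frozen random recurrence, trainable readout."""
    def __init__(self, d_model: int, d_res: int = 512, leak: float = 0.1):
        super().__init__()
        self.leak = leak
        # Frozen random weights -- the classic reservoir-computing setup.
        self.register_buffer("w_in", torch.randn(d_model, d_res) * 0.1)
        self.register_buffer("w_res", torch.randn(d_res, d_res) * (0.9 / d_res ** 0.5))
        self.readout = nn.Linear(d_res, d_model)  # the only trainable part

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, seq, d_model). Step the reservoir along the sequence so
        # the layer's activations evolve over "time" in latent space.
        state = h.new_zeros(h.shape[0], self.w_res.shape[0])
        outs = []
        for t in range(h.shape[1]):
            pre = h[:, t] @ self.w_in + state @ self.w_res
            state = (1 - self.leak) * state + self.leak * torch.tanh(pre)
            outs.append(self.readout(state))
        return torch.stack(outs, dim=1)

model = GPT2Model.from_pretrained("gpt2")
reservoirs = nn.ModuleList(Reservoir(model.config.n_embd) for _ in model.h)

def make_hook(res: Reservoir):
    # A forward hook that returns a value replaces the module's output.
    def hook(module, inputs, output):
        hidden = output[0]                            # GPT-2 blocks return a tuple
        return (hidden + res(hidden),) + output[1:]   # residual reservoir mix-in
    return hook

for block, res in zip(model.h, reservoirs):
    block.register_forward_hook(make_hook(res))
```

Training only the readouts with the base model frozen would be the standard reservoir-computing move; with a small fine-tuning set it is easy to get the kind of overfitting the comment mentions.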
vonneumannstan|6 months ago
Please stop, this is how you get AI takeovers.
varelse|6 months ago
[deleted]