What's more, memories are just a replaying of neuron connections activating in the brain, and when we are prompted by the world around us those connections fire in response to the stimulus. That's quite similar to how AI neural networks function, which is why I believe that AI can indeed be creative and produce "new" ideas.
ArcaneMoose|1 year ago
somenameforme|1 year ago
Train an LLM on the entirety of knowledge at the dawn of humanity and, even if you give it literally infinite training time, it's never going to go anywhere. It will just keep making relatively simple recombinations of its training set until somebody gives it a new training set to remix. This remix-only nature is no different with modern knowledge; it's simply extremely obfuscated, because there's such a massive base of information that nobody is aware of more than a minuscule fraction of it all.
---
As for the 'secret' of LLMs, I think it's largely that most language is extremely redundant. One thought or point naturally flows.... why do I even need to complete the rest of this statement? You already know exactly what I'm going to say, right? And from that statement the rest of my argument will also mostly write itself. Yet we do write out the rest, which is kind of weird if you think about it. Anyhow, the point is that by looking at language 'flow correlations' over huge samples, LLMs can reconstruct and remix arbitrarily long dialogue from even the shortest of initial inputs. And it usually sounds at least reasonable, except when it doesn't and we call it a hallucination, but that's quite a misnomer, because the entire process is a hallucination.
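To make the 'flow correlations' idea concrete, here's a minimal sketch, nothing like a real LLM (which predicts over learned representations, not raw word counts): a toy bigram model that records which word tends to follow which, then extends a one-word prompt by repeatedly sampling a plausible next word. The corpus, the `remix` function, and all names here are invented for illustration.

```python
import random
from collections import defaultdict

# Toy stand-in for a huge training corpus (assumption: any text works here).
corpus = (
    "one thought naturally flows into the next and "
    "the next thought naturally flows into another and "
    "another thought naturally flows into the next"
).split()

# Record "flow correlations": for each word, which words follow it and how often
# (duplicates in the list act as frequency weights when we sample).
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def remix(start, length=8, seed=0):
    """Extend a one-word prompt by repeatedly sampling a likely next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break  # dead end: no observed continuation for this word
        out.append(rng.choice(options))
    return " ".join(out)

print(remix("one"))
```

The output is always a recombination of sequences seen in the corpus, which is the point of the comment above: fluent-sounding continuation without any new information entering the system.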