gamegoblin|3 years ago

Seemed vaguely like how the brain does it. You think a thought, and somehow your brain conjures memories that are semantically related to that thought. That sounds a lot like a nearest-neighbors algorithm over language-model embedding vectors to me.

visarga|3 years ago

RETRO - Improving language models by retrieving from trillions of tokens (2021)

> With a 2 trillion token database, our Retrieval-Enhanced Transformer (Retro) obtains comparable performance to GPT-3 and Jurassic-1 on the Pile, despite using 25× fewer parameters.
https://www.deepmind.com/publications/improving-language-mod...
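
For concreteness, the "nearest neighbors over embedding vectors" idea in the first comment can be sketched in a few lines of Python. The embed() function below is a toy stand-in (a hash-seeded random unit vector), so it demonstrates only the mechanics of text-to-vector retrieval, not actual semantic similarity; a real system would use a language-model encoder here, and RETRO specifically uses frozen BERT embeddings of text chunks.

    import hashlib
    import numpy as np

    def embed(text, dim=64):
        # Toy stand-in for a language-model encoder: a deterministic
        # hash-seeded random unit vector. Captures none of the semantics,
        # only the shape of the "text -> vector" step.
        seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "big")
        rng = np.random.default_rng(seed)
        v = rng.standard_normal(dim)
        return v / np.linalg.norm(v)

    # The "memory": a database of text chunks and their embeddings.
    chunks = [
        "The hippocampus is involved in memory consolidation.",
        "Transformers mix information across tokens with attention.",
        "Paris is the capital of France.",
    ]
    index = np.stack([embed(c) for c in chunks])  # shape: (n_chunks, dim)

    def retrieve(query, k=2):
        # All vectors are unit-norm, so a dot product is cosine similarity.
        sims = index @ embed(query)
        return [chunks[i] for i in np.argsort(-sims)[:k]]

    print(retrieve("How does the brain store memories?"))

At trillion-token scale, exact search like this is infeasible; production systems swap in an approximate-nearest-neighbor library such as FAISS or SCaNN (the latter is what the RETRO paper uses).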