top | item 40364521

shoelessone | 1 year ago

> My intuition is that as contexts get longer we start hitting the limits of how much comprehension can be embedded in a single point of vector space, and will need better architectures for selecting the relevant portions of the context.

Is it possible to explain what this means in a way that somebody only roughly familiar with vectors and vector databases? Or recommend an article or further reading on the topic?


causal | 1 year ago

So most of my understanding comes from this series, particularly the last two videos: https://www.3blue1brown.com/topics/neural-networks

Essentially, each token of a text occupies a point in a high-dimensional vector space that represents meaning, and LLMs predict the next token by modifying the last token's representation using the context of all the tokens before it. Attention heads are basically a way of choosing which prior tokens are most relevant and adjusting the last token's point in vector space accordingly.
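That adjustment step can be sketched as scaled dot-product attention. This is a toy single head with random matrices (no causal masking, no training, all names illustrative), just to show how each token's vector becomes a relevance-weighted mix of the others:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(tokens, Wq, Wk, Wv):
    """One attention head over a sequence of token embeddings.

    Each output row is the corresponding token's point, shifted by a
    weighted combination of every token's value vector.
    """
    Q = tokens @ Wq                            # what each token is looking for
    K = tokens @ Wk                            # what each token offers
    V = tokens @ Wv                            # what each token contributes
    scores = Q @ K.T / np.sqrt(K.shape[-1])    # pairwise relevance
    weights = softmax(scores, axis=-1)         # rows sum to 1
    return weights @ V                         # context-adjusted points

# toy example: 4 tokens embedded in 8 dimensions
rng = np.random.default_rng(0)
d = 8
tokens = rng.normal(size=(4, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = attention(tokens, Wq, Wk, Wv)
print(out.shape)  # one adjusted vector per input token
```

In a real transformer the weight matrices are learned, there are many heads per layer, and a causal mask stops tokens from attending to positions after them, but the core move is the same: relevance scores decide how much each prior token shifts the current one.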