machinelearning | 1 year ago
The current workflow is to use the embedding to retrieve documents, then dump the text corresponding to those documents into the LLM's context for generation.
Often, the embedding comes from a different model than the LLM, so its internal representations are not compatible with the generation side.
So yeah, RAG does not pre-compute the KV cache for each document.
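To make that concrete, here is a minimal sketch of the retrieve-then-dump-text loop. The `embed` function below is a toy stand-in (a hashed bag-of-words vector) for a real embedding model; in practice you'd call a separate encoder such as a sentence-transformer, which is exactly why its outputs aren't usable as KV state for the generator.

```python
import zlib
import numpy as np

# Toy stand-in for a real embedding model. Real RAG systems use a
# separate encoder, not the generator LLM, to produce these vectors.
def embed(text, dim=64):
    vec = np.zeros(dim)
    for tok in text.lower().split():
        vec[zlib.crc32(tok.strip(".,").encode()) % dim] += 1.0
    n = np.linalg.norm(vec)
    return vec / n if n else vec

docs = [
    "The KV cache stores attention keys and values per token.",
    "Retrieval-augmented generation prepends retrieved text to the prompt.",
    "Tokenizers split text into subword units.",
]
doc_vecs = np.stack([embed(d) for d in docs])

query = "how does RAG add retrieved text to the prompt"
scores = doc_vecs @ embed(query)   # cosine similarity (unit vectors)
top = int(np.argmax(scores))

# The retrieved *text* is dumped into the context; no per-document
# KV state is precomputed or reused across queries.
prompt = f"Context:\n{docs[top]}\n\nQuestion: {query}\nAnswer:"
print(prompt)
```

The point the comment makes falls out of the last step: because only raw text crosses the boundary between retriever and generator, the generator must re-run attention over the retrieved document on every query.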
Prosammer | 1 year ago