Only if you're relying on the model to recall facts from its training set. Intuitively, at sufficient complexity, a model's ability to reason is what is critical, and its answers can be kept up to date with RAG.
Unless you mean out of date == no longer SOTA reasoning models?
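To make the RAG point concrete, here is a minimal sketch of the pattern: retrieve relevant documents and prepend them to the prompt, so the model reasons over fresh context instead of recalling from training. The docs and the keyword-overlap scoring are hypothetical simplifications; real systems use embedding similarity.

```python
# Minimal RAG-style sketch (hypothetical docs; naive keyword-overlap
# retrieval stands in for embedding search).

def retrieve(query, docs, k=2):
    """Rank docs by keyword overlap with the query; return the top k."""
    q_words = set(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query, docs):
    """Prepend retrieved context so the model need not recall it from training."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "Python 3.12 removed distutils from the standard library.",
    "The walrus operator := was added in Python 3.8.",
    "RAG augments prompts with retrieved documents.",
]
print(build_prompt("When was distutils removed from Python?", docs))
```

The retrieved snippet about distutils lands in the prompt even if the model's weights predate Python 3.12, which is the crux of the "kept up to date" claim.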
If you're using the models to assist with coding—y'know, what this thread is about?—then they'll need to know about the language being used.
If you're using them for particular frameworks or libraries in that language, they'll need to know about those, too.
If training becomes uneconomical, new advances in any of these will no longer make it into the models, and their "help" will get worse and worse over time, especially in cutting-edge languages and technologies.
'Ability to reason' implies that LLMs are building a semantic model from their training data, whereas the simplest explanation for their behavior is that they are building a syntactic model (see Plato's Cave). Thus, without new training, they cannot 'learn', RAG or no RAG.
danaris|27 days ago
SgtBastard|25 days ago
somewhereoutth|27 days ago
https://github.com/dqxiu/ICL_PaperList
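The linked repository is a paper list on in-context learning. The basic mechanism those papers study can be sketched as few-shot prompting: labeled examples are placed inline so the model picks up the task from context alone, without weight updates. The examples below are hypothetical, not drawn from the list.

```python
# Minimal sketch of in-context learning via few-shot prompting
# (hypothetical task: classify a word as animal or plant).

def few_shot_prompt(examples, query):
    """Format labeled examples inline so the model infers the task from context."""
    shots = "\n".join(f"Input: {x}\nOutput: {y}" for x, y in examples)
    return f"{shots}\nInput: {query}\nOutput:"

examples = [("cat", "animal"), ("oak", "plant")]
print(few_shot_prompt(examples, "sparrow"))
```

Whether this counts as 'learning' in the semantic sense disputed above is exactly the question the paper list surveys.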