popinman322 | 1 year ago
Similarly, you can't use the LSP to determine all valid in-scope objects for an assignment. You can get a hierarchy of symbol information from some servers, allowing selection of particular lexical scopes within the file, but you'll need to perform type analysis yourself to determine which of the available variables could make for a reasonable completion. That type analysis is also a bit tricky because you'll likely need a lot of information about the type hierarchy at that lexical scope-- something you can't get from the LSP.
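To make the gap concrete, here's a small sketch of what a `textDocument/documentSymbol` response gives you. The symbol tree below is hand-written for illustration (not captured from a real server), but follows the LSP `DocumentSymbol` shape: names, kinds, and ranges -- and notably no type information for the variables, which is exactly the analysis you'd have to do yourself.

```python
# A trimmed DocumentSymbol tree, per the LSP spec: name, SymbolKind, range,
# children. Kind 12 = Function, 13 = Variable. No types anywhere.
symbols = [
    {
        "name": "process",
        "kind": 12,
        "range": {"start": {"line": 0}, "end": {"line": 20}},
        "children": [
            {"name": "buf", "kind": 13,
             "range": {"start": {"line": 2}, "end": {"line": 2}}, "children": []},
            {"name": "count", "kind": 13,
             "range": {"start": {"line": 3}, "end": {"line": 3}}, "children": []},
        ],
    },
]

def visible_at(syms, line):
    """Approximate the names lexically visible at `line`:
    symbols whose range encloses it, plus earlier siblings in that scope."""
    names = []
    for s in syms:
        r = s["range"]
        if r["start"]["line"] <= line <= r["end"]["line"]:
            names.append(s["name"])
            names.extend(visible_at(s["children"], line))
        elif r["end"]["line"] < line:
            # declared earlier in the enclosing scope; children stay hidden
            names.append(s["name"])
    return names

print(visible_at(symbols, 5))  # ['process', 'buf', 'count']
```

You get candidate *names* this way, but deciding which of them type-checks as a completion target still needs information the protocol doesn't carry.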
It might be feasible to fork an open source LSP implementation for your target language to expose the extra information you'd want, but language servers are relatively heavy pieces of software and, of course, they don't exist for all languages. Compared to the development cost of "just" using embeddings, it's pretty clear why teams choose embeddings.
Also, if you assume that the performance improvements we've seen in embeddings for retrieval will continue, it makes less sense to invest weeks of time on something that would otherwise improve passively with time.
TeMPOraL | 1 year ago
Clangd does, which means we could try this out for C++.
There's also tree-sitter, but I assume that's table stakes nowadays. For example, Aider uses it to generate project context ("repo maps")[0].
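For a rough feel of what a "repo map" entry looks like, here's a sketch using Python's stdlib `ast` module as a stand-in for tree-sitter (Aider's actual implementation differs): walk a file, keep only definition signatures, and emit a compact outline an LLM can use as cheap context.

```python
import ast

def repo_map_entry(source: str, path: str) -> str:
    """Emit a compact outline: top-level classes/functions, method names only."""
    tree = ast.parse(source)
    lines = [f"{path}:"]
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            args = ", ".join(a.arg for a in node.args.args)
            lines.append(f"  def {node.name}({args})")
        elif isinstance(node, ast.ClassDef):
            lines.append(f"  class {node.name}")
            for item in node.body:
                if isinstance(item, ast.FunctionDef):
                    lines.append(f"    def {item.name}(...)")
    return "\n".join(lines)

src = (
    "class Cache:\n"
    "    def get(self, key):\n"
    "        return None\n"
    "\n"
    "def main():\n"
    "    pass\n"
)
print(repo_map_entry(src, "cache.py"))
```

The payoff is density: an outline like this covers a whole file in a handful of tokens, so you can show the model the shape of a repo without pasting it wholesale.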
> If you want to know whether a given import is valid, to verify LLM output, that's not possible.
That's not the biggest problem to be solved, arguably. A wrong import in otherwise correct-ish code is mechanically correctable, even if by the user pressing a shortcut in their IDE/LSP-powered editor. We're deep into early R&D here; perfect is the enemy of the good at this stage.
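"Mechanically correctable" because fixing an import is a lookup, not an inference: given a symbol index mapping modules to their exported names, an unresolved name maps straight back to candidate import lines. A toy sketch (the index here is hand-built; real tools populate it from the project and its dependencies):

```python
# Hypothetical symbol index: module -> names it exports.
symbol_index = {
    "os.path": {"join", "dirname"},
    "collections": {"defaultdict", "Counter"},
}

def suggest_imports(unresolved: str) -> list[str]:
    """Return candidate import statements for an unresolved name."""
    return [f"from {mod} import {unresolved}"
            for mod, names in sorted(symbol_index.items())
            if unresolved in names]

print(suggest_imports("Counter"))  # ['from collections import Counter']
```

This is essentially what "quick fix: add import" does in an IDE, which is why a hallucinated import is one keystroke away from correct rather than a real failure mode.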
> Similarly, you can't use the LSP to determine all valid in-scope objects for an assignment. You can get a hierarchy of symbol information from some servers, allowing selection of particular lexical scopes within the file, but you'll need to perform type analysis yourself to determine which of the available variables could make for a reasonable completion.
What about asking an LLM? It's not 100% reliable, of course (again: perfect vs. good), but LLMs can guess things that aren't locally obvious even in AST. Like, e.g. "two functions in the current file assign to this_thread::ctx().foo; perhaps this_thread is in global scope, or otherwise accessible to the function I'm working on right now".
I do imagine Cursor et al. are experimenting with ad-hoc approaches like that. I know I would -- LLMs are cheap enough and fast enough that asking them to build their own context makes sense, if it cuts down on the times they get the task wrong and require back & forth, reverts, and prompt tweaking.
--
[0] - https://aider.chat/docs/languages.html#how-to-add-support-fo...