+1 on feeling there are a lot of UX possibilities left on the table. Most people seem to have accepted chat as the only way to use LLMs. In particular, I don't think most people realize that LLMs can be used in very powerful ways that just aren't possible with black-box API services as they currently exist. Google kind of has an edge here with the recent context caching support for Gemini, but that's just one thing. Some things that feel like they could enable new modes of interaction aren't possible at all, like grammar-constrained generation and rapid LLM-tool interactions (think a REPL or shell rather than function calls; currently you have to pay for the input tokens all over again if you want to use a function call's results as context, and that adds up quickly).

On Copilot: I've been using it since it went public, and have always found it useful, but it hasn't really changed much. There's a chat window now (groundbreaking, I know), and it shows a "processing steps" panel saying it's doing distinct agentic tasks like collecting context and test-run results, but it doesn't feel like it knows my codebase any better than the cursory description I'd give an LLM without context. I use the JetBrains plugin, though, and I understand the VS Code extension has some different features, so ymmv.
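To make the grammar-constrained generation point concrete, here's a toy sketch of the idea: at each step you mask the model's scores down to whatever the grammar allows next, which you can only do with access to the logits, not through a black-box completion API. Everything here is illustrative (the `fake_logits` "model" is a random stand-in, and the grammar is just the two words "yes"/"no"), but the masking loop is the actual mechanism libraries like llama.cpp's GBNF grammars use at the token level:

```python
import random

# Toy grammar: the output must be exactly "yes" or "no".
ALLOWED = ["yes", "no"]

def allowed_next_chars(prefix):
    """Characters the grammar permits immediately after the given prefix."""
    return {w[len(prefix)] for w in ALLOWED
            if w.startswith(prefix) and len(w) > len(prefix)}

def fake_logits(prefix):
    """Stand-in for a real model: random scores over lowercase letters."""
    return {c: random.random() for c in "abcdefghijklmnopqrstuvwxyz"}

def constrained_generate():
    out = ""
    while out not in ALLOWED:
        mask = allowed_next_chars(out)          # what the grammar allows
        scores = fake_logits(out)               # what the "model" wants
        # Restrict the argmax to grammar-legal characters only.
        out += max(mask, key=lambda c: scores[c])
    return out

print(constrained_generate())  # always a grammar-valid string: "yes" or "no"
```

However random the underlying scores are, the output can never be malformed, which is what makes this so useful for things like guaranteed-valid JSON. It's trivial when you run the model yourself and impossible when a hosted API only hands you sampled text.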