craigds | 1 year ago
Cursor Composer doesn't handle that and seems geared towards a small handful of handpicked files.
Would codebuff be able to handle a proper sized codebase? Or do the models fundamentally not handle that much context?
jahooma | 1 year ago
But Codebuff has a whole preliminary step where it searches your codebase to find the files relevant to your query, and only those get added to the coding agent's context.
That's why I think it should work up to medium-large codebases. If the codebase is too large, then our file-finding step will also start to fail.
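(Codebuff's actual file-finding step isn't public; as a rough illustration of the idea, here is a minimal sketch of pre-filtering a codebase by keyword overlap with the query, so only the top-ranked files would enter the agent's context. The function name and scoring are assumptions, not Codebuff's implementation.)

```python
import re
from pathlib import Path

def find_relevant_files(query: str, root: str, top_k: int = 5) -> list[str]:
    """Rank files under `root` by naive keyword overlap with the query.

    Hypothetical sketch only: real tools would use embeddings or an LLM
    pass, but the principle is the same -- shrink the context first.
    """
    terms = set(re.findall(r"\w+", query.lower()))
    scored = []
    for path in Path(root).rglob("*.py"):
        words = set(re.findall(r"\w+", path.read_text(errors="ignore").lower()))
        overlap = len(terms & words)
        if overlap:
            scored.append((overlap, str(path)))
    scored.sort(reverse=True)  # highest keyword overlap first
    return [p for _, p in scored[:top_k]]
```

If the codebase grows too large, this kind of ranking gets noisy, which matches the failure mode described above.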
I would give it a shot on your codebase. I think it should work.
cratermoon | 1 year ago
The code extruded from the LLM is still synthetic code, and likely to contain errors both in the form of extra tokens motivated by the pre-training data for the LLM rather than the input texts AND in the form of omission. It's difficult to detect when the summary you are relying on is actually missing critical information.
Even if the setup includes links to the retrieved documents, the presence of the generated code discourages users from actually drilling down and reading them.
This is still a framing that says: Your question has an answer, and the computer can give it to you.
1 https://buttondown.com/maiht3k/archive/information-literacy-...
craigds | 1 year ago
For anyone interested:
It required a bit of back and forth to produce a relatively small change, and I think it was a bit too narrow with the files it selected (it missed updating the implementations of a method in some subclasses, since it didn't look at those files). So I'm not sure it saved me time, but it's nevertheless promising! I'm looking forward to what it will be capable of in 6mo.
asattarmd | 1 year ago
Forgive my naivety, I don't know anything about LLMs.