I find tools where I manually shepherd the context into an LLM work much better than Copilot at the moment. If I think through the problem enough to articulate it clearly, and choose the surrounding pieces of context (the same stuff I would open up and look at as a dev), I can be pretty sure the generated code, even larger outputs, will work, do what I wanted, and be stylistically good. I'm still adding a lot in this scenario, but it's heavier on the analysis and requirements side and lighter on the code creation side.

If what I give it is too open-ended or doesn't have enough info, I'll still get a low-quality output, though I find I can steer it by asking it to ask clarifying questions. Asking it to build unit tests can help a lot too: a few iterations getting the unit tests created and passing can really push the quality up.
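The test-driven iteration loop above can be sketched roughly like this. Everything here is hypothetical scaffolding: `call_model` is a stub standing in for a real LLM API call (it returns canned code so the example runs), and the spec, function name, and test are invented for illustration. The point is the shape of the loop, not any particular model or API.

```python
import traceback


def call_model(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM call; in practice this would be
    # an HTTP request to a model endpoint. Returns canned code so the
    # sketch is self-contained and runnable.
    return (
        "def slugify(text):\n"
        "    return '-'.join(text.lower().split())\n"
    )


def run_tests(code: str, tests: str):
    """Exec the generated code plus its tests; return failure text, or None if green."""
    namespace = {}
    try:
        exec(code + "\n" + tests, namespace)
    except Exception:
        return traceback.format_exc()
    return None


def iterate(spec: str, tests: str, max_rounds: int = 3) -> str:
    prompt = spec
    for _ in range(max_rounds):
        code = call_model(prompt)
        failure = run_tests(code, tests)
        if failure is None:
            return code  # tests pass, keep this version
        # Feed the failure output back as extra context for the next round.
        prompt = spec + "\nThe previous attempt failed with:\n" + failure
    raise RuntimeError("model never produced passing code")


tests = "assert slugify('Hello World') == 'hello-world'\n"
print(iterate("Write slugify(text) that lowercases and hyphenates.", tests))
```

In an interactive session you'd do the same thing by hand: paste the failing test output back into the conversation and ask for a fix.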