top | item 41839775

sha16 | 1 year ago

I think Claude trades quality for speed. From what I've seen, it starts generating almost immediately even with a large token window. For smaller changes it's usually good enough, but larger changes are where I bump into issues as well. I'll stick to asking it to change [somefunction] rather than change the entire file.

richardw | 1 year ago

I tend to iterate and limit the output by, e.g., saying “make the smallest change possible, don’t rewrite the file, tell me what you want to do first.” It seems to respond well to change requests, with much apology. ChatGPT berates me about its own code and keeps saving shit I don’t want in memory, so I have to go back and clean up entries like "Is testing Lambda functions directly in the AWS web console" and "Is working with testing and integration test fixtures in their software projects" when those are 2 of 100 things I'm doing. I'm using SAM for Lambda; I might have run one function in the console once to bypass the API, and now it's saved as gospel. Half the benefit of LLMs is that they forget context when you start a new chat, so you can control what they focus on.