top | item 46469444

dwringer | 1 month ago

For me it just depends. If the response to my prompt shows the model misunderstood something, I go back and retry the previous prompt. Otherwise the "wrong ideas" it comes up with persist in the context and seem to sabotage all future results. Most of this sort of coding I've done has been in Google's AI Studio, and I often do have a context that spans dozens of messages, but I always rewind if something goes off-track. Basically, any time I'm about to make a difficult request, I clone the entire context/app to a new one so I can roll back cleanly whenever necessary.
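The rewind-and-clone workflow described above can be sketched as a checkpoint over the message history. This is an illustrative sketch, not AI Studio's actual API; all names here are made up:

```python
import copy

class Conversation:
    """Illustrative chat context: a message list with checkpoints."""

    def __init__(self):
        self.messages = []      # the live context sent to the model
        self._checkpoints = []  # saved copies to roll back to

    def add(self, role, text):
        self.messages.append({"role": role, "content": text})

    def checkpoint(self):
        # "Clone the entire context" before a difficult request.
        self._checkpoints.append(copy.deepcopy(self.messages))

    def rollback(self):
        # Rewind: discard everything after the last checkpoint, so the
        # model's "wrong ideas" never persist in the context.
        self.messages = self._checkpoints.pop()

convo = Conversation()
convo.add("user", "Refactor the parser")
convo.checkpoint()
convo.add("assistant", "Sure, I rewrote it all in Perl")  # went off-track
convo.rollback()
assert len(convo.messages) == 1  # the bad turn is gone from the context
```

The point of the deep copy is that rolling back restores the context exactly, rather than leaving the misunderstanding visible to later turns.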

seanmcdirmid | 1 month ago

If you fix something, it sticks: the AI won't keep making the same mistake, and it won't change the code that already exists if you ask it not to. It ONLY works well when you're making iterative changes, not when it's used as a pure code generator; AI's one-shot performance is kind of crap. When a mistake happens, you point it out to the LLM and ask it to update the code and the instructions used to create the code in tandem. Or you just ask it to fix the code once. You add tests, partially generated by the AI and curated by a human, and the AI runs the tests and fixes the code if they fail (or fixes the tests).
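The loop described here (human-curated tests gating the model's fixes) might be sketched like this; `ask_model` and `run_tests` are hypothetical callables standing in for a real agent harness:

```python
def fix_until_green(ask_model, run_tests, max_rounds=3):
    """Point failures out to the model, apply its fix, rerun the
    human-curated test suite, and repeat until the tests pass.

    `ask_model(prompt)` is a stand-in for sending feedback to the LLM
    and applying its patch; `run_tests()` returns (passed, report).
    """
    for _ in range(max_rounds):
        ok, report = run_tests()
        if ok:
            return True
        # Feed the concrete failure report back so the fix "sticks"
        # in the context instead of the same mistake recurring.
        ask_model(f"These tests failed:\n{report}\n"
                  "Update the code and the instructions used to "
                  "create it, in tandem.")
    ok, _ = run_tests()
    return ok

# Demo with stubs: the "code" passes after one fix round.
state = {"fixed": False}

def run_tests():
    return (state["fixed"], "" if state["fixed"] else "test_parse failed")

def ask_model(prompt):
    state["fixed"] = True  # stand-in for applying the model's patch

assert fix_until_green(ask_model, run_tests) is True
```

The cap on rounds matters: without it, a model that keeps "fixing" in the wrong direction would loop forever.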

dwringer | 1 month ago

All I can really say is that doesn't match my experience. If I fix something it implemented due to a "misunderstanding", it tends to break it again a few messages later. But I would be the first to say that the usefulness of these models is extremely subjective.