boringuser2 | 1 year ago

I think it's a little questionable to prompt language models with "bugs you're trying to solve".


lmeyerov | 1 year ago

Curious why?

This is maybe 1/3 of my use of GPT-4. Quite often, the log dump and nearby code are enough, often even without explicit instructions. Being able to do this task is similar to GitHub Copilot's code autocomplete working well. Still not 100%, but right often enough that it flipped my use from not-at-all with GPT-3.5 to quite-often with GPT-4.
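
A minimal sketch of that workflow (assumes the official openai Python client, v1+, with OPENAI_API_KEY set; the model string, log dump, and code are illustrative placeholders, not a fixed recipe):

  # Paste a log dump plus the nearby code into GPT-4 and ask for a diagnosis.
  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  # Hypothetical failure: a stack trace and the surrounding code.
  log_dump = """Traceback (most recent call last):
    File "app.py", line 42, in handler
      total = sum(row["amount"] for row in rows)
  TypeError: unsupported operand type(s) for +: 'int' and 'NoneType'
  """

  nearby_code = """def handler(rows):
      total = sum(row["amount"] for row in rows)
      return total
  """

  response = client.chat.completions.create(
      model="gpt-4",  # model name is an assumption; substitute your own
      messages=[
          {"role": "system", "content": "You are a debugging assistant."},
          {"role": "user", "content": (
              f"Here is a stack trace:\n{log_dump}\n"
              f"And the nearby code:\n{nearby_code}\n"
              "What is the bug, and what's the fix?"
          )},
      ],
  )
  print(response.choices[0].message.content)

Often that's the whole prompt: no careful instructions, just the trace and the code.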

boringuser2 | 1 year ago

LLMs aren't logical machines, so any non-trivial bug-fix is just likely to introduce more bugs.

It's a bit of a misunderstanding of how LLMs are supposed to be used.

One caveat: if you're very untalented, it might be able to solve very common patterns successfully.