top | item 42483613

pedrovhb | 1 year ago

Here's an idea: have the LLM output each comment with a "severity" score ranging from 0-100, or perhaps a small set of labels ("trivial", "minor", "major"). Let it get everything off its chest by outputting the nitpicks while recognizing they're minor, then filter the output to only the comments above a given threshold.

It's hard to avoid thinking of a pink elephant, but easy enough to consciously recognize it's not relevant to the task at hand.
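The thresholding idea above could be sketched like this. This is a minimal illustration, not anything from the article: the JSON shape, field names, and threshold value are all hypothetical, and in practice the severity scores would come from the LLM's structured output.

```python
import json

# Hypothetical threshold: comments scored below this are dropped.
SEVERITY_THRESHOLD = 50

def filter_comments(comments, threshold=SEVERITY_THRESHOLD):
    """Keep only review comments at or above the severity threshold."""
    return [c for c in comments if c["severity"] >= threshold]

# Hypothetical LLM output: a JSON list of scored review comments.
raw_output = json.dumps([
    {"severity": 10, "text": "Nit: prefer single quotes."},
    {"severity": 80, "text": "Possible null dereference in parse()."},
])

comments = json.loads(raw_output)
# Only the severity-80 comment survives the filter.
print(filter_comments(comments))
```

The point is that the model still "says" the nitpicks, so they don't leak into the surviving comments, but the reader never sees them.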

zahlman | 1 year ago

The article authors tried this technique and found it didn't work very well.

iLoveOncall | 1 year ago

Here's an idea: read the article and realize they already tried exactly that.