top | item 44384816

beau_g | 8 months ago

The article opens by stating that the author isn't going to reword what others have written, but the article reads as exactly that and nothing more.

That said, I do think it would be nice for people to note in pull requests which files in the diff contain AI-generated code. It's still worth reviewing LLM-generated code through a slightly different lens than human-written code; the mistakes each makes tend to differ in flavor, and knowing which is which would save me time in a review. Has anyone seen this at a larger org, and is it of value to you as a reviewer? Maybe some toolsets can already do this automatically (I suppose all these companies reporting the percentage of code that is LLM-generated must have one, if they actually have such granular metrics?)
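One lightweight way to get this without special tooling would be a team convention of marking commits with a trailer (the trailer name "AI-assisted: yes" here is hypothetical, not an existing standard), then aggregating the touched files at review time. A minimal sketch, assuming commit messages and file lists are available (e.g. from `git log --name-only`):

```python
# Sketch: list files in a PR that were touched by AI-assisted commits.
# Assumes a team convention of an "AI-assisted: yes" trailer (hypothetical
# name) in the messages of commits containing LLM-generated code.

def ai_flagged_files(commits):
    """Return the sorted list of files touched by AI-assisted commits.

    `commits` is a list of dicts, each with a commit "message" string
    and the list of "files" that commit touched.
    """
    flagged = set()
    for commit in commits:
        # Normalize each message line so the trailer match is case-insensitive.
        lines = [line.strip().lower() for line in commit["message"].splitlines()]
        if "ai-assisted: yes" in lines:
            flagged.update(commit["files"])
    return sorted(flagged)

commits = [
    {"message": "Add retry logic\n\nAI-assisted: yes", "files": ["retry.py"]},
    {"message": "Fix typo in docs", "files": ["README.md"]},
]
print(ai_flagged_files(commits))  # ['retry.py']
```

A bot could run this over a PR's commits and post the flagged files as a review comment, giving reviewers the "which lens to use" signal per file.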


acedTrex | 8 months ago

Author here:

> The article opens by stating that the author isn't going to reword what others have written, but the article reads as exactly that and nothing more.

Hmm, I was just saying I hadn't seen much literature or discussion on trust dynamics in teams using LLMs. Maybe I'm just in the wrong spaces for such discussions, but I haven't really come across it.