tutfbhuf | 1 year ago

[flagged]


loloquwowndueo | 1 year ago

We don’t understand how AI works. I would not trust an AI not to hallucinate unsafely in this context.

zopa | 1 year ago

You wouldn’t want it in the CI pipeline, because any model clever enough to find real issues is also going to find plenty of false positives. That seems like too much friction for most open source projects.

I’m not one of the downvoters, but you’ve linked to a list of forty or fifty different projects, many of which don’t seem relevant to this use case. It’s not too surprising that people have nothing to say besides “ugh, more AI hype.”

VMG | 1 year ago

You are wrong, but there is no reason to downvote you. The idea might work in the future.

Kwpolska | 1 year ago

It might work if AI stops producing random output and bullshit (which proponents call “hallucinations” to make it sound nice) and starts producing correct responses deterministically. Which may take forever.