

DrNosferatu | 3 months ago

Humans get things wrong too.

Prose usually only becomes quality prose after many rounds of review.


gassi | 3 months ago

AI tools make different types of mistakes than humans, and that's a problem. We've spent eons creating systems to mitigate and correct human mistakes, which we don't have for the more subtle types of mistakes AI tends to make.

loeg | 3 months ago

AI gets things wrong ("hallucinates") much more often than actual subject matter experts. This is disingenuous.

rootlocus | 3 months ago

Presumably the "subject matter expert" will review the output of the LLM, just like a reviewer would. I think it's disingenuous to assume that just because someone used AI, they didn't look at or review the output.

ares623 | 3 months ago

Fortunately, we can't just get rid of humans (right?) so we have to use them _somehow_

DrNosferatu | 3 months ago

If AI is used in a “fire and forget” fashion, sure - there’s a good chance of slop.

But if you carefully review and iterate the contributions of your writers - human or otherwise - you get a quality outcome.

littlestymaar | 3 months ago

Absolutely.

But why would you trust the author to have done that when they are lying in a very obvious way about not using AI?

Using AI is fine; it's a tool, and it's not bad per se. But loudly claiming you didn't use that tool when it's obvious you did is very off-putting.

righthand | 3 months ago

That’s fine. Write it out yourself, then ask an AI how it could be improved and review the resulting diff. Now you’ve given it double human review (once in creation, again when reviewing the diff) and a single AI review.

maxbond | 3 months ago

That's one review with several steps and some AI assistance. Checking your work twice is not equivalent to having it reviewed by two people; part of reviewing your work (or the work of others) is checking multiple times and taking advantage of whatever tools are at your disposal.