perrygeo | 26 days ago

I feel the same way. LLM errors sound most plausible to those who know least.

On complex topics where I know what I'm talking about, the model's output contains so much garbage built on incorrect assumptions.

But on complex topics where I'm out of my element, the output always sounds strangely plausible.

This phenomenon writ large is terrifying.
