That’s helpful information, but it doesn’t mean the use of Gemini is unwelcome. A human could have rendered the initial analysis too, and then you could have just replied to that human, correcting him or her. Why is the source of the analysis such an issue?
nozzlegear | 16 days ago
As for why people dislike an LLM being wrong more than a human being wrong, I think there are two reasons:
1. LLMs have a nasty penchant for sounding overly confident and "bullshitting" their way to an answer in a way most humans don't. Where we'd say "I'm not sure," an LLM will say "It's obviously this."
2. This is speculation, but at least when a human is wrong you can say "hey, you're wrong because of [fact]," and they'll usually learn from it. We can't do that with an LLM because it doesn't learn (in the way humans do), and in this situation it's a degree removed from the conversation anyway.