That may be true, but the underlying problem is not whether LLMs can accurately report information published in a single person's blog article. The underlying problem is that a portion of the population believes they are infallible.
They believe so because we have spent decades using the term "AI" for a different category of systems: symbolic methods such as search-based chess engines, theorem provers, and planners. In the areas where they succeeded, these methods _were_ infallible (compared to humans, of course, and modulo programming bugs).
Meanwhile, neural techniques flew under the radar of public consciousness until relatively recently, when they exploded in popularity. But the term "AI" retained that old aura of superhuman precision and correctness.
jen729w|10 days ago
Whatever it says is hard fact as far as she's concerned. And she's no dummy -- she just has no clue how these things work. Oh, and Google told her so.