top | item 47072732

kulahan | 10 days ago

A probably unacceptably large portion of the population DOES think they’re infallible, or at least close to it.

jen729w | 10 days ago

Totally. I get screenshots from my 79yo mother now that are the Gemini response to her search query.

Whatever it says is hard fact as far as she's concerned. And she's no dummy -- she just has no clue how these things work. Oh, and Google told her so.

mcherm | 10 days ago

That may be true, but the underlying problem is not whether the LLMs are capable of accurately reporting information published in a single person's blog article. The underlying problem is that a portion of the population believes they are infallible.

lou1306 | 10 days ago

They believe so because we have spent decades using the term AI for a different category of methods: symbolic ones (search-based chess engines, theorem provers, planners). In the areas where they were successful, these methods _were_ infallible (compared to humans, of course, and modulo programming bugs).

Meanwhile, neural techniques flew under the radar of public consciousness until relatively recently, when they exploded in popularity. But the term "AI" retained that old aura of superhuman precision and correctness.