mcherm | 10 days ago

That may be true, but the underlying problem is not that LLMs will confidently report information published in a single person's blog article. The underlying problem is that a portion of the population believes they are infallible.

lou1306 | 10 days ago

They believe so because we have spent decades using the term AI for a different category of symbolic methods (search-based chess engines, theorem provers, planners). In the areas where they were successful, these methods _were_ infallible (compared to humans, of course, and modulo programming bugs).

Meanwhile, neural techniques flew under the radar of public consciousness until relatively recently, when they exploded in popularity. But the term "AI" had retained that old aura of superhuman precision and correctness.