Agreed, it's definitely a problem, but I'm just saying that it's the basic problem of "people sometimes say bullshit that other people take at face value". It's not a technical problem. The most relevant framework for analyzing this is probably https://en.wikipedia.org/wiki/Truth-default_theory
Are you suggesting that the AI chatbot have this built in? Because the chance that I, an amateur writing about a subject out of passion, have gotten something wrong would approach 1 in most circumstances, and the chance that the person receiving the now-recycled information will perform these checks every time they query an AI chatbot would approach 0.
falcor84|2 months ago
justinator|2 months ago