True, no argument there. What fascinates me more is why people continue to think we can teach a chatbot how to recognize what's true and give us answers that we can't find for ourselves. At best a chatbot is going to be a tool that enables us to gain insights we didn't have before, the same way a dictionary can "teach" you words you didn't know before. I think the idea of using technology to solve life's ultimate conundrums has long since jumped the shark and veered into the territory of religious belief. People are literally putting their faith in AI, even if they wouldn't use religious vocabulary to label and define it as such.
pjc50|1 year ago
The "Sokal Hoax" was a 90s experiment in which a physicist wrote a fake paper and submitted it to a cultural studies journal. He didn't base his paper on anything he would have considered "true"; rather, he aimed to make it look as much like a valid text as possible. That's a simplified version of how the LLM training/scoring process works. Nowadays every field is having to deal with the same kind of thing done by LLM users. It's the perfect technology for non-rigorous academia.