The test use case of asking ChatGPT to construct a bio for yourself, hoping it accurately summarizes the extremely low-sample-size data about you that happens to be in its web-crawled training data, seems like one of the worst possible use cases for ChatGPT. It says right there on the main page that it's not to be trusted with factual information like this. ChatGPT will hallucinate details. Actually, it's remarkable to me how often it refuses to hallucinate, given that's basically what its job is. I don't find it interesting to catalog all the edge cases where ChatGPT produces empirically false output. It doesn't even have the ability to look things up!

If I were the OP and wanted help writing my bio, I would first write the draft myself, then use ChatGPT to help with the editing, prose, grammar, style, etc. You are the expert on the factual details of your own life, and if you're surprised that a language model trained on web-crawled data ending in 2018 is not, then all I've learned is that you don't know much about what this thing is.

I also don't buy arguments of the form:

1. OpenAI's public ChatGPT app is often factually inaccurate.
2. ChatGPT is an example of an ML system bootstrapped on web-crawled text data.
3. Thus, the long-term future of our distributed, text-encoded knowledge base will be a cesspool of useless gobbledygook.
ChatGPT is a step forward in generative language modeling. It doesn't preclude the development of other future systems that help us verify the factual accuracy of claims, likely much better than humans can. We'll be ok, gang :)