mstolpm | 2 years ago
For a public figure, of course there is lots of information in the training data, all of it public. But when asked about me or my brother, ChatGPT either refuses to answer OR hallucinates the hell out of it. Then nearly everything is wrong, and the output resembles the answer to a prompt like: "Create a short bio for a fictional character named xx, living in yy and working as zz." (Okay, often yy and zz are wrong as well.)
Requests to delete these hallucinated facts seem to go nowhere; the process is stubborn and ineffective.