
Swannie | 11 months ago

I find more hallucination when the model reflects the question back, the way you're taught as a child to restate the question at the start of your answer.

If I'm not careful and ask the question in a way that assumes X, the LLM often takes X to be true. ChatGPT has gotten better at correcting this with its web searches.

I get better results with Claude when I ask for answers that include links to the relevant authoritative sources. But sometimes it still makes up stuff that isn't in the source material.
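
For what it's worth, here's a minimal sketch of what that "cite your sources" pattern might look like with the Anthropic Python SDK. The model alias, system-prompt wording, and example question are my own assumptions, not anything specific to my actual prompts:

    # Sketch of the "ask for authoritative sources" approach above,
    # using the Anthropic Python SDK. Model alias and prompt wording
    # are illustrative assumptions.
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # assumed model; substitute your own
        max_tokens=1024,
        system=(
            "Answer only from information you can attribute to an "
            "authoritative source, and include a link to that source for "
            "every claim. If you cannot cite a source, say so instead of "
            "guessing."
        ),
        messages=[
            {
                "role": "user",
                # Neutral phrasing: avoid presupposing X in the question,
                # or the model may simply assume X is true.
                "content": "Does Python's GIL affect multiprocessing? Cite sources.",
            }
        ],
    )

    print(response.content[0].text)

It doesn't eliminate the problem (the model can still invent claims that aren't in the linked pages), but it makes the made-up parts easier to spot-check.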
