In general, if you bring something up in the prompt, most LLMs will pay special attention to it, which helps the accuracy of whatever you're trying to do.

swalsh|9 months ago
You can prompt an LLM not to hallucinate, but typically you wouldn't say "don't hallucinate" - you'd ask it to give a null value or say "I don't know", which more closely aligns with the model's training.

Alifatisk|9 months ago
> if you bring something up in the prompt most LLMs will pay special attention to it

How? In which way? I am very curious about this. Is this part of the transformer model, or something that happens in fine-tuning? Or maybe during post-training?

bezier-curve|9 months ago
I'm thinking that if the org that trained the model is also doing interesting research into understanding how LLMs actually work on the inside [1], their caution might be warranted.

[1] https://www.anthropic.com/research/tracing-thoughts-language...
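The null / "I don't know" pattern discussed above can be sketched in code. This is a minimal illustration, not any particular API: the system prompt and `parse_answer` helper are hypothetical, and the point is only that asking for an explicit `null` gives you something machine-checkable, unlike "don't hallucinate".

```python
import json

# Hypothetical system prompt: instead of "don't hallucinate",
# ask the model for an explicit null when it is unsure.
SYSTEM_PROMPT = """\
Answer with a JSON object: {"answer": <string or null>}.
If you are not confident the answer is correct, set "answer"
to null instead of guessing."""

def parse_answer(raw: str):
    """Parse the model's JSON reply.

    A JSON null maps to Python None, signalling "I don't know"
    rather than a fabricated answer.
    """
    try:
        return json.loads(raw).get("answer")
    except json.JSONDecodeError:
        # Malformed output is also treated as "don't know".
        return None

# Example replies a model might send back:
print(parse_answer('{"answer": "Paris"}'))  # a confident answer
print(parse_answer('{"answer": null}'))     # an explicit "I don't know"
```

Downstream code can then branch on `None` (retry, escalate, or surface "unknown" to the user) instead of trying to detect a confidently worded hallucination in free text.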