item 43913250

Ardren | 9 months ago

> "...and in general be careful when working with headers"

I would love to know if there are benchmarks that show how much these prompts improve the responses.

I'd suggest trying: "Be careful not to hallucinate." :-)

swalsh | 9 months ago

In general, if you bring something up in the prompt, most LLMs will pay special attention to it. It does help the accuracy of the thing you're trying to do.

You can prompt an LLM not to hallucinate, but typically you wouldn't say "don't hallucinate"; you'd ask it to give a null value or say "I don't know", which more closely aligns with the model's training.
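A minimal sketch of that pattern (the function name and prompt wording are illustrative, not tied to any particular API): instead of telling the model "don't hallucinate", give it an explicit escape hatch, such as returning null when the requested value isn't there.

```python
# Hypothetical prompt builder illustrating the "give it an out" pattern:
# the instructions specify a concrete fallback token ("null") rather than
# a vague prohibition like "don't hallucinate".
def build_extraction_prompt(document: str, field: str) -> str:
    """Ask for a single field, with an explicit null fallback."""
    return (
        f"Extract the value of '{field}' from the document below.\n"
        "If the value is not present, respond with exactly: null\n"
        "Do not guess.\n\n"
        f"Document:\n{document}"
    )

prompt = build_extraction_prompt("Invoice #123, total: $40", "due_date")
print(prompt)
```

The idea is that "respond with null" describes a concrete output the model has seen in training data, whereas "don't hallucinate" describes an internal failure mode it can't directly act on.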

Alifatisk | 9 months ago

> if you bring something up in the prompt most LLM's will bring special attention to it

How? In what way? I am very curious about this. Is this part of the transformer architecture, or something that is done in fine-tuning? Or maybe during post-training?