DenisM | 1 month ago

> When a model exhibits hallucination, often providing more context and evidence will dispel it,

I usually have the opposite experience. Once a model goes off the rails it becomes harder and harder to steer, and after a few corrective prompts the corrections stop working and it's time for a new context.

foobiekr | 1 month ago

Once it's in the context window, the model invariably steers back toward it. LLMs cannot handle the "don't think of an elephant" requirement.

ACCount37 | 1 month ago

It depends.

It's a natural inclination for all LLMs, rooted in pre-training. But you can train them out of it to some degree. Or not.

Google doesn't know how to do it to save their lives. Other frontier labs are better at it, but none are perfect as of yet.