DenisM | 1 month ago
I usually have the opposite experience. Once a model goes off the rails it becomes harder and harder to steer, and after a few corrective prompts they stop working and it's time for a new context.
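A minimal sketch of the "fresh context" recovery DenisM describes, assuming the OpenAI Python SDK; the model name, task text, and prompts are hypothetical stand-ins, not anything from the thread. The point is that corrective turns append to an already-polluted transcript, while a reset restates the task once in a clean one:

```python
from openai import OpenAI

client = OpenAI()
TASK = "Refactor this function to remove the global state."  # hypothetical task

def ask(messages):
    resp = client.chat.completions.create(model="gpt-4o", messages=messages)
    return resp.choices[0].message.content

# Derailed path: each correction is appended to a transcript that already
# contains the bad trajectory, so the model keeps re-reading its own mistake.
messages = [{"role": "user", "content": TASK}]
reply = ask(messages)
messages += [
    {"role": "assistant", "content": reply},
    {"role": "user", "content": "No, that's wrong. Try again."},
]

# The reset: discard the transcript and restate the task once, folding in
# whatever the failed attempt taught you, instead of steering the old context.
fresh = [{"role": "user", "content": TASK + " Do not use a singleton."}]
reply = ask(fresh)
```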
foobiekr | 1 month ago
ACCount37 | 1 month ago
It's a natural inclination for all LLMs, rooted in pre-training. But you can train them out of it some. Or not.
Google doesn't know how to do it to save their lives. Other frontier labs are better at it, but none are perfect yet.