top | item 44838299

Lukas_Skywalker | 6 months ago

Isn't an apology a bad metric for evaluating models?

Without understanding much, it seems to be more an indication of the type of content the model was trained on than an indicator of how good or bad a model is, or how much it knows. It would probably be easy to create a bad model that constantly outputs wrong information but always apologizes when corrected.

Foobar8568 | 6 months ago

Well, if the model can't accept that it got a piece of information wrong, how can it help tweak anything, or give anything accurate?

krick | 6 months ago

A model changing its opinion on the first request may sound more flattering to you, but it is much less trustworthy to anybody sane. With a more stubborn model, at least I have to worry less that I'm giving away what I think about a subject via subtle phrasing. Other than that, it's hard to say anything about your scenario without more information. Maybe it gave you the right information and you failed to understand it; maybe it was wrong, and then that's no big news, because LLMs are not some magic thing that always gives you right answers, you know.