Lukas_Skywalker | 6 months ago
Without understanding much, it seems to be more an indication of the type of content the model was trained on, rather than an indicator of how good or bad a model is, or how much it knows. It would probably be easy to create a bad model that constantly outputs wrong information, but always apologizes when corrected.