poser-boy | 2 years ago

This is frankly the best benchmark you can use on an LLM.

Breza | 2 years ago

Especially since most devs have a specific use case in mind. Coming up with tests tailored to your needs will always be more informative than off-the-shelf metrics.
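
A quick sketch of what that can look like in practice: a handful of prompts pulled from your real workload, each paired with a cheap programmatic check. The prompts, the checks, and the `query_model` stub here are all hypothetical placeholders, not a standard harness.

```python
import json

def query_model(prompt: str) -> str:
    """Hypothetical call to the model under test; wire up your own client."""
    raise NotImplementedError

def is_valid_record(out: str) -> bool:
    """Check that the output is JSON with the keys we care about."""
    try:
        return {"name", "age"} <= set(json.loads(out))
    except (json.JSONDecodeError, TypeError):
        return False

# Each test pairs a prompt drawn from your real workload with a cheap
# programmatic check on the response.
TESTS = [
    ("Extract the invoice total from: 'Total due: $1,234.56'",
     lambda out: "1,234.56" in out or "1234.56" in out),
    ("Return JSON with keys 'name' and 'age' for: Ada, 36",
     is_valid_record),
]

def run_suite() -> float:
    """Return the fraction of tailored tests the model passes."""
    passed = sum(bool(check(query_model(p))) for p, check in TESTS)
    return passed / len(TESTS)
```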

avereveard | 2 years ago

You can ask GPT-4 or another high-end model to rate two chat logs for coherence and the like. It's not as accurate as human evaluation, but you don't have to read thousands of lines of text when comparing many models.
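
A minimal sketch of that pairwise-judging setup, using the OpenAI Python client. The prompt wording, the one-word answer format, and the choice of "gpt-4" as judge are all illustrative; it assumes OPENAI_API_KEY is set in the environment.

```python
from openai import OpenAI

client = OpenAI()

JUDGE_PROMPT = """You are comparing two chat logs for coherence.
Log A:
{log_a}

Log B:
{log_b}

Reply with exactly one word: "A", "B", or "tie"."""

def judge(log_a: str, log_b: str, model: str = "gpt-4") -> str:
    """Ask a strong model which of two chat logs is more coherent."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user",
                   "content": JUDGE_PROMPT.format(log_a=log_a, log_b=log_b)}],
        temperature=0,  # keep judgments as repeatable as possible
    )
    return resp.choices[0].message.content.strip()

def judge_both_orders(log_a: str, log_b: str) -> tuple[str, str]:
    """Judge twice with the logs swapped to help cancel position bias."""
    return judge(log_a, log_b), judge(log_b, log_a)
```

Running each comparison in both orders is cheap insurance: LLM judges are known to favor whichever answer appears first.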

brucethemoose2 | 2 years ago

This is problematic if the model you're evaluating comes from the same base family as the judge: the judge will probably favor its relative's output, because that output literally contains the sequences the judge would naturally emit itself.