
Evaluating 55 LLMs with GPT-4

36 points | vincelt | 2 years ago | benchmarks.llmonitor.com

8 comments


bradknowles | 2 years ago

How is this benchmark not inherently biased towards GPT?

If I did the same sort of thing but used Claude to grade the tests, would I get similar results? Or would that be inherently biased towards Claude scoring high?
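One way to probe that would be cross-grading: have both GPT-4 and Claude score the same answers, then compare whether each judge favors its own family. A rough sketch, assuming the OpenAI and Anthropic Python clients; the rubric and model names are placeholders, not the benchmark's actual setup:

    from openai import OpenAI
    from anthropic import Anthropic

    openai_client = OpenAI()        # reads OPENAI_API_KEY from the environment
    anthropic_client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    RUBRIC = "Grade the answer from 0 to 10. Reply with the number only."

    def gpt4_judge(question: str, answer: str) -> str:
        resp = openai_client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system", "content": RUBRIC},
                {"role": "user", "content": f"Q: {question}\nA: {answer}"},
            ],
        )
        return resp.choices[0].message.content

    def claude_judge(question: str, answer: str) -> str:
        resp = anthropic_client.messages.create(
            model="claude-2.1",  # placeholder; any Claude judge model
            max_tokens=10,
            system=RUBRIC,
            messages=[{"role": "user", "content": f"Q: {question}\nA: {answer}"}],
        )
        return resp.content[0].text

    # If GPT-4 systematically ranks GPT outputs higher than Claude does
    # (and vice versa), that gap is evidence of self-preference bias.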

crashocaster | 2 years ago

I always find evals of this flavor off-putting, given that 3.5 and 4 likely share preference models (or at least feedback data).

habitue | 2 years ago

Should be evaluating each prompt multiple times to see how much variance there is in the scores. Even GPT-4 grading GPT-4 should probably be done several times.
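Something like the sketch below would do it, assuming the OpenAI Python client; grade_once and its 0-10 rubric are hypothetical stand-ins for the benchmark's real grading prompt:

    import re
    import statistics
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def grade_once(question: str, answer: str) -> float:
        """Ask GPT-4 to grade one answer on a 0-10 scale (hypothetical prompt)."""
        resp = client.chat.completions.create(
            model="gpt-4",
            temperature=0,  # even at temperature 0, outputs are not fully deterministic
            messages=[
                {"role": "system", "content": "Grade the answer from 0 to 10. Reply with the number only."},
                {"role": "user", "content": f"Question: {question}\nAnswer: {answer}"},
            ],
        )
        match = re.search(r"\d+(\.\d+)?", resp.choices[0].message.content)
        return float(match.group()) if match else float("nan")

    def grade_with_variance(question: str, answer: str, n: int = 5):
        """Grade the same answer n times; report mean and spread across runs."""
        scores = [grade_once(question, answer) for _ in range(n)]
        return statistics.mean(scores), statistics.stdev(scores)

If the standard deviation rivals the gaps between models on the leaderboard, single-shot grades don't mean much.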

natsucks | 2 years ago

Why no multi-turn evaluation? A lot of these benchmarks fail to capture the strength of ghost attention used in Llama 2 chat models.
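For what it's worth, a multi-turn probe is cheap to sketch: give one instruction up front, add distractor turns, and check whether the constraint still holds at the end, which is exactly the failure mode ghost attention targets. A minimal sketch, assuming the OpenAI Python client purely for illustration:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Probe whether an instruction given at turn 1 still holds several turns later.
    messages = [
        {"role": "system", "content": "Always answer in exactly one sentence."},
    ]
    for turn in ["What is photosynthesis?", "Tell me more.", "Summarize everything so far."]:
        messages.append({"role": "user", "content": turn})
        resp = client.chat.completions.create(model="gpt-4", messages=messages)
        reply = resp.choices[0].message.content
        messages.append({"role": "assistant", "content": reply})
        # Crude one-sentence check; a real eval would use a grader here.
        print(turn, "->", "PASS" if reply.count(".") <= 1 else "FAIL")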

aiunboxed | 2 years ago

Any reason why PaLM or Cohere models are not here?

londons_explore | 2 years ago

GPT-4-0314 is top of the league table (i.e., not the latest version, but the version released in March).

Is this our Concorde moment?

ionwake | 2 years ago

Really cool, thanks.