top | item 38383453

simonhughes22 | 2 years ago

This is typical of so much work in the field. They pick and choose which models to compare against and on which benchmarks. If this model were truly great, they would compare it against Claude 2 and GPT-4 across a range of benchmarks. Instead they compare against PaLM 2, which in many tests is a weak model (https://venturebeat.com/ai/google-bard-fails-to-deliver-on-i....) and prone to hallucination (https://github.com/vectara/hallucination-leaderboard).
