item 38539223

dmakian | 2 years ago

> A question: when GPT-4 contradicted the explanation, how many of those cases was it in fact correct?

It was mostly cases where a card is good in a vacuum but not as good in a specific set. WOE (which this was trained on) skewed pretty aggressive, so GPT-4 tended to overvalue strong, expensive cards (compared to what good players thought, at least).
