lbrandy | 6 months ago
The premise that an AI needs to do Y "as we do" to be good at X, because humans use Y to be good at X, needs closer examination. This presumption seems omnipresent in these conversations, and I find it strange. AlphaZero doesn't model chess "the way we do".
shkkmo | 6 months ago
> The premise that an AI needs to do Y "as we do" to be good at X because humans use Y to be good at X needs closer examination.
I don't see it being used as a premise. I see it as speculation trying to understand why this type of AI underperforms at certain kinds of tasks. Y may not be necessary to do X well, but if a system is doing X poorly and the difference between that system and another seems to be Y, it's worth exploring whether adding Y would improve performance.