(no title)
zumachase | 5 years ago
However, I think what's missing here is that our benchmarks (à la Turing test) are about negation rather than affirmation. We tend to evaluate AI on whether we can tell that it's AI. We seek to negate it as human, as opposed to affirming it as human (or close to it). That's the wrong mindset when it comes to AGI, because the gap between "obviously not human" and "human-like" is enormous. These are all definitely steps in the right direction, and the applications even for robotic process automation will be huge. But we're not even close to having nets that can reason about even the most basic things.
abernard1 | 5 years ago
I would question the value of the Turing test, and suggest it's not a great example for AI.
There's always been this assumption that passing the Turing test would mean we had AI, but I think that was always predicated on the machine generating the outputs itself. With the GPT models, it's not clear that this isn't just a form of compression over an immense data set, with pre-existing _human_ responses being sent back to the user. It implies to me that we could pass the Turing test with a large enough data set and no (or very little) intelligence.
All of this makes me believe "These are all definitely steps in the right direction" is questionable.