There's zero understanding in any of this; it's still essentially just superficial text parsing. Show me progress on Winograd schemas and I'd be impressed. This has nothing to do with AGI; it's an application of ML to very traditional NLP problems.
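For readers unfamiliar with the benchmark: a Winograd schema is a pronoun-resolution pair where flipping a single word flips the correct referent, so surface statistics alone shouldn't resolve it. A minimal sketch using the classic trophy/suitcase example (the `referent` lookup here is a toy stand-in, not a real coreference model):

```python
# Classic Winograd schema pair: changing one word ("big" -> "small")
# changes which noun the ambiguous pronoun "it" refers to.
SCHEMA = {
    "The trophy doesn't fit in the suitcase because it is too big.": "trophy",
    "The trophy doesn't fit in the suitcase because it is too small.": "suitcase",
}

def referent(sentence: str) -> str:
    """Toy lookup standing in for a real coreference resolver."""
    return SCHEMA[sentence]

for sentence, answer in SCHEMA.items():
    print(f"{sentence!r} -> {answer}")
```

The point of the benchmark is that resolving "it" correctly in both variants seems to require world knowledge (what "fitting" implies about relative size), not just word co-occurrence.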
I'm skeptical. Amazing progress has been made in the last 5-10 years, but it still feels like we need more paradigm shifts in the ML/AI field. It feels like we're approaching the upper limits of what stuffing mountains of data into a model can do.
But with the speed of the field, maybe we can figure it out in three years. It just seems like we're still missing some key components, primarily reasoning and learning causality.
Zero-shot and few-shot learning in GPT-3, and the lack of significant diminishing returns when scaling text models. Zero-shot learning is equivalent to saying "I'm just going to ask the model to do something it was not trained to do."
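To make the zero-shot/few-shot distinction concrete, here is a hedged sketch of the two prompt formats in the style the GPT-3 paper describes: few-shot prepends worked examples inside the context window, while zero-shot gives only a natural-language task description. No model is called; the task description and translation pair are illustrative placeholders.

```python
# Sketch of zero-shot vs. few-shot prompting. The model's weights are
# never updated in either case; "learning" happens purely in-context.
TASK = "Translate English to French:"

def zero_shot_prompt(query: str) -> str:
    # Zero-shot: the model sees only the task description and the query.
    return f"{TASK}\n{query} =>"

def few_shot_prompt(query: str, examples: list[tuple[str, str]]) -> str:
    # Few-shot: the model additionally sees k worked examples.
    demos = "\n".join(f"{src} => {tgt}" for src, tgt in examples)
    return f"{TASK}\n{demos}\n{query} =>"

print(zero_shot_prompt("cheese"))
print(few_shot_prompt("cheese", [("sea otter", "loutre de mer")]))
```

The surprising result the comment refers to is that simply conditioning on such prompts, with no gradient updates, gets better as the model scales.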
Barrin92|5 years ago
gwern|5 years ago
The paper evaluated Winograds: https://arxiv.org/pdf/2005.14165.pdf#page=16
dmvaldman|5 years ago
chundicus|5 years ago
azinman2|5 years ago
dmvaldman|5 years ago