top | item 23492265

dmvaldman | 5 years ago

Zero-shot and few-shot learning in GPT-3, and the lack of significant diminishing returns when scaling text models. Zero-shot learning is equivalent to saying "I'm just going to ask the model to do something it was not trained to do."
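To make the jargon concrete, here is a minimal sketch of what the two prompt styles look like. This is illustrative string construction only, not GPT-3's actual API; the translation task and example pairs are made up.

```python
# Zero-shot: the task is described only in natural language, with no examples.
# Few-shot: a handful of input -> output demonstrations precede the query.

def zero_shot_prompt(task: str, query: str) -> str:
    """Task description plus the query, with no demonstrations."""
    return f"{task}\n{query} ->"

def few_shot_prompt(task: str, examples: list[tuple[str, str]], query: str) -> str:
    """Task description, k worked examples, then the query."""
    demos = "\n".join(f"{x} -> {y}" for x, y in examples)
    return f"{task}\n{demos}\n{query} ->"

task = "Translate English to French."
examples = [("cheese", "fromage"), ("house", "maison")]

print(zero_shot_prompt(task, "sea"))
print(few_shot_prompt(task, examples, "sea"))
```

The model is never fine-tuned on the task in either case; the only difference is whether demonstrations appear in the prompt.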


azinman2|5 years ago

And how do we get from zero shot to AGI? You're making gigantic leaps here.

simurg|5 years ago

For those wondering about the reasoning behind this being the path to full AGI, I recommend this Gwern post, which goes into detail: https://www.gwern.net/newsletter/2020/05

From what I understand, it's not just that GPT-3 has impressive performance, but more what it signifies: massive scaling didn't produce diminishing returns, and if this pattern persists, it can get them to the finish line.
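The "no diminishing returns" claim refers to the empirical power-law scaling of language-model loss with parameter count. A minimal sketch, using the fitted constants from Kaplan et al. (2020) purely for illustration; the exact numbers are an assumption here, not something from this thread:

```python
def loss(n_params: float, n_c: float = 8.8e13, alpha: float = 0.076) -> float:
    """L(N) = (N_c / N)^alpha, the parameter-count scaling law.

    A power law is a straight line on log-log axes: every 10x increase in
    parameters multiplies the loss by the same constant factor (10**-alpha),
    rather than flattening out -- that is what "no diminishing returns" means
    on this curve.
    """
    return (n_c / n_params) ** alpha

for n in (1e8, 1e9, 1e10, 1e11):
    print(f"{n:.0e} params -> loss {loss(n):.3f}")
```

Each decade of scale buys the same multiplicative improvement, so on a log-log plot the trend extrapolates as a straight line, which is the basis for the optimism about continued scaling.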

dmvaldman|5 years ago

What is the difference between zero-shot learning in text and AGI? Not saying there isn't one, but can you state what it is? You can express any intent in text (unlike other media). To solve zero-shot learning in text is equivalent to the model responding to all intents.

Many people have different definitions of AGI, though. For me it clicked when I realized that text has this universality property of capturing any intent.