item 17078514


ramanan | 7 years ago

I understand that you do mention the pre-training / transfer learning approach clearly, but isn't it disingenuous to claim that you provide better performance based on (only) 100 labeled examples, when the pre-training dataset (Wikitext-103) actually contains 103M words?

jph00 | 7 years ago

Of course not. The use of pre-training on a large unlabeled corpus and subsequent fine-tuning is what the paper is about. It is stated repeatedly in the paper and the post.

It is totally correct and in no way misleading to say we need only 100 labeled examples. Anyone can get similar results on their own datasets without even needing to train their own wikitext model, since we've made the pre-trained model available.
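The workflow described here (a large pre-trained encoder, frozen or lightly tuned, with a small classifier head trained on only ~100 labeled examples) can be sketched schematically. This is a toy stand-in, not ULMFiT itself: the "pre-trained encoder" below is just a fixed random projection, and the data is synthetic, but the division of labor is the same — the expensive part is reused, and only the head is fit to the small labeled set:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pre-trained encoder. In ULMFiT this would be a language
# model trained on Wikitext-103; here it is a fixed random projection.
W_pre = rng.normal(size=(20, 50))

def encode(x):
    # Frozen pre-trained features: no parameters here are updated below.
    return np.tanh(x @ W_pre)

# Only 100 labeled examples for the downstream task (synthetic labels,
# linearly separable in the encoder's feature space by construction).
x = rng.normal(size=(100, 20))
true_w = rng.normal(size=50)
feats = encode(x)
y = (feats @ true_w > 0).astype(float)

# Fine-tune just a small logistic-regression head on the frozen features.
head = np.zeros(50)
lr = 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-feats @ head))    # sigmoid probabilities
    head -= lr * feats.T @ (p - y) / len(y)    # logistic-loss gradient step

acc = float(((feats @ head > 0) == (y == 1)).mean())
```

The point of the sketch is the parameter count: the encoder's 20×50 weights come "for free" from pre-training, and fine-tuning touches only the 50-dimensional head, which is why so few labeled examples suffice.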

(BTW, I see you work at a company that sells something that claims to "categorize SKUs to a standard taxonomy using neural networks." This seems like something you maybe could have mentioned.)

ramanan | 7 years ago

Got it. I was looking for input on how generalizable the weights are (their ability to change/adapt) when the labeled training data is 100x smaller than the initial pre-training dataset.

Also, I don't understand the need to be so defensive, or the relevance of my employer to my post.