stevenwalton | 4 years ago
CCT ([1] from above) was focused on training from scratch.
There are two paradigms to be aware of: pre-training (e.g. on ImageNet) followed by fine-tuning, and training from scratch. Pre-training can often be beneficial, but it doesn't always help. It really depends on the problem you're trying to tackle and on whether the target dataset shares similar features with the pre-training dataset. If the similarity is low, you might as well train from scratch. You also might not want models as large as ViT and DeiT; ViT has more parameters than CIFAR-10 has features.
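To make the scale mismatch concrete, here is a quick back-of-the-envelope comparison; the parameter counts are the approximate published figures for the ViT variants, and "features" is read here as raw pixel values in CIFAR-10's training set:

```python
# CIFAR-10 training set: 50,000 RGB images at 32x32 resolution
cifar10_pixels = 50_000 * 32 * 32 * 3  # total raw input values
print(f"CIFAR-10 training pixels: {cifar10_pixels:,}")  # 153,600,000

# Approximate published parameter counts for ViT variants
vit_params = {
    "ViT-Base": 86_000_000,
    "ViT-Large": 307_000_000,
    "ViT-Huge": 632_000_000,
}
for name, p in vit_params.items():
    print(f"{name}: {p:,} params ({p / cifar10_pixels:.2f}x CIFAR-10 pixels)")
```

ViT-Large and ViT-Huge each have more parameters than the entire CIFAR-10 training set has pixel values, which is one intuition for why such models overfit badly when trained from scratch on small datasets.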
Disclosure: Author on CCT
version_five | 4 years ago
stevenwalton | 4 years ago
[0] https://github.com/lucidrains/vit-pytorch