Yann LeCun: There are now 3 papers that successfully use Self-Supervised Learning for visual feature learning:
And this, below.
All three use some form of Siamese net.
3 replies, 1093 likes
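The "Siamese net" these papers share is simply one encoder applied, with tied weights, to two augmented views of the same image, so that matching views land close together in embedding space. A toy sketch of that shared-weights structure (the linear `encoder` is hypothetical, not any of the papers' architectures):

```python
def encoder(x, weights):
    """Toy linear 'encoder'; the actual papers use deep ConvNets."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in weights]

def siamese_forward(view_a, view_b, weights):
    """Siamese structure: both branches run the SAME encoder with
    the SAME weights, producing one embedding per view."""
    return encoder(view_a, weights), encoder(view_b, weights)
```

A training objective then rewards agreement between the two embeddings while avoiding collapse (the three papers differ mainly in how they do that).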
Aäron van den Oord: Unsupervised pre-training now outperforms supervised learning on ImageNet in every data regime (see figure), and also on transfer learning to Pascal VOC object detection.
4 replies, 781 likes
Yann LeCun: SSL FTW!
Pretext-Invariant Representation Learning (PIRL): a self-supervised method from FAIR, based on Siamese nets, for visual feature learning.
Beats supervised pre-training & all previous SSL methods on ImageNet, VOC-07-12, etc. https://arxiv.org/abs/1912.01991
2 replies, 144 likes
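PIRL trains its Siamese branches with a noise-contrastive estimation (NCE) loss: the representation of an image should score high against its pretext-transformed view and low against other images. A minimal sketch of that objective, assuming cosine similarity and an in-memory list of negatives (PIRL's actual implementation uses a memory bank and a jigsaw pretext transform, both omitted here):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return dot / (nu * nv)

def nce_loss(anchor, positive, negatives, temperature=0.07):
    """Simplified PIRL-style NCE loss: -log of the probability that the
    transformed view (positive) is the match for the anchor, against a
    set of negative embeddings. Lower when anchor and positive align."""
    pos = math.exp(cosine(anchor, positive) / temperature)
    negs = sum(math.exp(cosine(anchor, n) / temperature) for n in negatives)
    return -math.log(pos / (pos + negs))
```

With an aligned positive the loss is near zero; with a misaligned positive and an aligned negative it is large, which is what drives the encoder toward pretext-invariant features.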
Facebook AI: #CVPR2020: Today at 12pm, we're discussing PIRL, which was one of the first techniques to outperform supervised learning on object detection. https://arxiv.org/pdf/1912.01991.pdf
3 replies, 142 likes
MONTREAL.AI: Self-Supervised Learning of Pretext-Invariant Representations
Ishan Misra, Laurens van der Maaten: https://arxiv.org/abs/1912.01991
#ArtificialIntelligence #DeepLearning #UnsupervisedLearning https://t.co/DIUc9yOMZY
0 replies, 8 likes
Alf @ 𝑣ʳIPS: The revolution, which is unsupervised, has started! Join us! 👊🏻
1 replies, 8 likes
Found on Dec 10 2019 at https://arxiv.org/pdf/1912.01991.pdf