
Self-Supervised Learning of Pretext-Invariant Representations

Comments

Yann LeCun: There are now 3 papers that successfully use Self-Supervised Learning for visual feature learning: MoCo (https://arxiv.org/abs/1911.05722), PIRL (https://arxiv.org/abs/1912.01991), and this, below. All three use some form of Siamese net.

4 replies, 1087 likes


Yann LeCun: SSL FTW! Pretext-Invariant Representation Learning: a self-supervised method based on Siamese nets for visual feature learning from FAIR. Beats supervised pre-training & all previous SSL methods on ImageNet, VOC-07-12, etc. https://arxiv.org/abs/1912.01991

2 replies, 144 likes
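The tweet above describes PIRL as a Siamese-net-style method: an image and a transformed (pretext-altered) copy of it are embedded by the same network, and a contrastive objective pulls the two embeddings together while pushing the image away from other images' embeddings. The paper uses a memory bank and a jigsaw pretext transform; the sketch below is only a minimal, hypothetical illustration of the underlying noise-contrastive loss on precomputed embedding vectors, with the function name, vector sizes, and temperature value chosen for the example rather than taken from the paper.

```python
import numpy as np

def contrastive_loss(v_img, v_transformed, v_negatives, temperature=0.07):
    """Toy noise-contrastive loss: make v_img similar to the embedding of
    its transformed copy (the positive) and dissimilar to the negatives."""
    def cos(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    pos = np.exp(cos(v_img, v_transformed) / temperature)
    neg = sum(np.exp(cos(v_img, n) / temperature) for n in v_negatives)
    # Cross-entropy of picking the positive among {positive} + negatives.
    return -np.log(pos / (pos + neg))

# Illustrative embeddings: the "transformed" view is a slightly perturbed
# copy of the anchor, so its loss should be low; a random vector's is high.
rng = np.random.default_rng(0)
anchor = rng.normal(size=128)
positive = anchor + 0.1 * rng.normal(size=128)
negatives = [rng.normal(size=128) for _ in range(8)]
loss = contrastive_loss(anchor, positive, negatives)
```

A loss like this makes the representation invariant to the pretext transform, which is the shift the tweet highlights: earlier pretext-task methods trained the network to *predict* the transform, which makes representations covariant with it instead.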


Facebook AI: #CVPR2020: Today at 12pm, we're discussing PIRL, which was one of the first techniques to outperform supervised learning on object detection. https://arxiv.org/pdf/1912.01991.pdf

3 replies, 142 likes


MONTREAL.AI: Self-Supervised Learning of Pretext-Invariant Representations Ishan Misra, Laurens van der Maaten: https://arxiv.org/abs/1912.01991 #ArtificialIntelligence #DeepLearning #UnsupervisedLearning https://t.co/DIUc9yOMZY

0 replies, 8 likes


Alf @ 𝑣ʳIPS: The revolution, which is unsupervised, has started! Join us! 👊🏻

1 reply, 8 likes


Content

Found on Dec 10 2019 at https://arxiv.org/pdf/1912.01991.pdf

PDF content of a computer science paper: Self-Supervised Learning of Pretext-Invariant Representations