Yann LeCun: There are now 3 papers that successfully use Self-Supervised Learning for visual feature learning:
And this one, below.
All three use some form of Siamese net.
3 replies, 1093 likes
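None of these tweets includes code, but the shared-weight ("Siamese") setup LeCun refers to is easy to sketch. Below is a minimal PyTorch illustration, with a toy linear encoder and additive noise standing in for real data augmentations; it is a generic contrastive setup, not any specific paper's recipe:

import torch
import torch.nn as nn
import torch.nn.functional as F

# One shared encoder sees two augmented views of each image
# (the "Siamese" part): both branches use identical weights.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))

x = torch.randn(8, 3, 32, 32)            # a batch of images
view_q = x + 0.1 * torch.randn_like(x)   # stand-ins for two random augmentations
view_k = x + 0.1 * torch.randn_like(x)

q = F.normalize(encoder(view_q), dim=1)  # L2-normalized embeddings
k = F.normalize(encoder(view_k), dim=1)

# Contrastive objective: each q should match its own k (the diagonal),
# with the other k's in the batch acting as negatives.
logits = q @ k.t() / 0.07                # cosine similarities / temperature
labels = torch.arange(q.size(0))         # positive pair sits on the diagonal
loss = F.cross_entropy(logits, labels)
print(loss.item())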
Aäron van den Oord: Unsupervised pre-training now outperforms supervised learning on ImageNet for any data regime (see figure) and also for transfer learning to Pascal VOC object detection
4 replies, 781 likes
Olivier Grisel: Very impressive results:
Momentum Contrast for Unsupervised Visual Representation Learning by Kaiming He et al. https://arxiv.org/abs/1911.05722
Better than supervised ImageNet pre-training for transfer learning on object detection (e.g. COCO). https://t.co/ymdh7f6pGJ
1 reply, 295 likes
mat kelcey: a new paper from kaiming he is always exciting & this one is particularly interesting! "Momentum Contrast for Unsupervised Visual Representation Learning" https://arxiv.org/abs/1911.05722 i definitely think contrastive learning is _the_ approach for unsupervised representation learning..
3 replies, 155 likes
Aravind Srinivas: Some exciting *new* results in self-supervised learning on ImageNet: 71.5 % top-1 with a linear classifier, 5x data-efficiency from pre-training (76% top-1 with 80% fewer samples per class on ImageNet), 76.6 mAP on PASCAL VOC-07 (> supervised's 74.7)
1 reply, 146 likes
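The "71.5% top-1 with a linear classifier" figure refers to the standard linear-evaluation protocol: freeze the pre-trained encoder and train only a linear layer on top of its features. A minimal PyTorch sketch of one training step under that protocol, with a toy linear stand-in for the pre-trained encoder and random tensors in place of real data:

import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 512))  # stand-in for a pre-trained encoder
for p in backbone.parameters():
    p.requires_grad = False          # freeze: only the linear head is trained
backbone.eval()

head = nn.Linear(512, 1000)          # linear classifier over ImageNet's 1000 classes
opt = torch.optim.SGD(head.parameters(), lr=0.1)

images = torch.randn(8, 3, 32, 32)
labels = torch.randint(0, 1000, (8,))

with torch.no_grad():                # features come from the frozen encoder
    feats = backbone(images)
loss = nn.functional.cross_entropy(head(feats), labels)
opt.zero_grad()
loss.backward()
opt.step()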
Jigar Doshi: Using contrastive learning (think siamese nets) with an interesting trick of momentum-based update of the (two) models leads to 'very good' representation for downstream supervised tasks. By Kaiming He et al.
0 replies, 33 likes
Keisuke Fujimoto: I implemented "Momentum Contrast for Unsupervised Visual Representation Learning".
0 replies, 19 likes
Daisuke Okanohara: MoCo improves unsupervised visual representation learning using contrastive loss, surpassing supervised pretraining. 1) sampling negative examples from a large queue of previous encoded examples 2) momentum update of the key encoder with query encoder. https://arxiv.org/abs/1911.05722
0 replies, 13 likes
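These two points are the heart of MoCo, and the paper's Algorithm 1 gives PyTorch-style pseudocode for them. The condensed, runnable sketch below follows that structure, with toy linear encoders and noise in place of real augmentations; m and T match the paper, while the queue is shrunk for the example:

import torch
import torch.nn as nn
import torch.nn.functional as F

dim, K, m, T = 128, 4096, 0.999, 0.07   # feature dim, queue size, momentum, temperature

f_q = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, dim))  # query encoder (trained by SGD)
f_k = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, dim))  # key encoder (momentum copy)
f_k.load_state_dict(f_q.state_dict())
for p in f_k.parameters():
    p.requires_grad = False

queue = F.normalize(torch.randn(K, dim), dim=1)  # dictionary of past keys

x = torch.randn(8, 3, 32, 32)
x_q = x + 0.1 * torch.randn_like(x)   # stand-ins for two random augmentations
x_k = x + 0.1 * torch.randn_like(x)

q = F.normalize(f_q(x_q), dim=1)
with torch.no_grad():                 # no gradient flows into the key encoder
    k = F.normalize(f_k(x_k), dim=1)

# 1) Negatives come from the queue of previously encoded keys.
l_pos = (q * k).sum(dim=1, keepdim=True)   # Nx1: similarity to own key
l_neg = q @ queue.t()                      # NxK: similarity to queued negatives
logits = torch.cat([l_pos, l_neg], dim=1) / T
loss = F.cross_entropy(logits, torch.zeros(q.size(0), dtype=torch.long))
loss.backward()                            # updates only the query encoder

# 2) Momentum update: the key encoder slowly tracks the query encoder.
with torch.no_grad():
    for p_k, p_q in zip(f_k.parameters(), f_q.parameters()):
        p_k.mul_(m).add_(p_q, alpha=1 - m)

# Enqueue the new keys, dequeue the oldest (FIFO).
queue = torch.cat([queue[k.size(0):], k.detach()], dim=0)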
Evan Racah: "You either die a positive pair or live long enough to see yourself sampled from the queue as a negative pair"
TLDR of "Momentum Contrast for Unsupervised Visual Representation Learning" by Kaiming He, et al. https://arxiv.org/abs/1911.05722
0 replies, 11 likes
Alf @ 𝑣ʳIPS: The revolution, which is unsupervised, has started! Join us! 👊🏻
1 reply, 8 likes
arXiv CS-CV: Momentum Contrast for Unsupervised Visual Representation Learning http://arxiv.org/abs/1911.05722
0 replies, 6 likes
Christian S. Perone: What a clever mechanism for decoupling the dict size from the batch size.
0 replies, 5 likes
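Perone's point is the FIFO queue shown above: the dictionary can hold tens of thousands of keys while each batch contributes only a few hundred, so the two sizes are set independently. A toy illustration of just that update (sizes are arbitrary):

import torch

K, dim = 65536, 128                  # dictionary size, independent of the batch
queue = torch.randn(K, dim)

batch_keys = torch.randn(256, dim)   # one batch's freshly encoded keys
# FIFO update: drop the 256 oldest entries, append the 256 newest.
queue = torch.cat([queue[batch_keys.size(0):], batch_keys], dim=0)
assert queue.size(0) == K            # dictionary size stays fixed at 65536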
andrea panizza: “MoCo can outperform its supervised pre-training counterpart [..]. This suggests that the gap between unsupervised and supervised representation learning has been largely closed in many vision tasks”. Amazing! How come #ComputerVision Twitter is not retweeting this more?
0 replies, 1 likes
MoCo does unsupervised representation learning that works like metric learning with a dictionary: the key network is gradually updated to stay close to the query network.
Used as pre-training, it improves object detection accuracy. https://t.co/dgmttcTieK
0 replies, 1 likes
Brundage Bot: Momentum Contrast for Unsupervised Visual Representation Learning. Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick http://arxiv.org/abs/1911.05722
1 reply, 0 likes
Found on Dec 10 2019 at https://arxiv.org/pdf/1911.05722.pdf