Papers of the day

Momentum Contrast for Unsupervised Visual Representation Learning


Dec 10 2019 Yann LeCun

There are now 3 papers that successfully use Self-Supervised Learning for visual feature learning: MoCo, PIRL, and this one, below. All three use some form of Siamese net.
3 replies, 1093 likes

Dec 09 2019 Aäron van den Oord

Unsupervised pre-training now outperforms supervised learning on ImageNet for any data regime (see figure) and also for transfer learning to Pascal VOC object detection
4 replies, 778 likes

Nov 16 2019 Olivier Grisel

Very impressive results: Momentum Contrast for Unsupervised Visual Representation Learning by Kaiming He et al. Better than supervised ImageNet pre-training for transfer learning on object detection (e.g. COCO).
1 replies, 295 likes

Nov 17 2019 mat kelcey

a new paper from kaiming he is always exciting & this one is particularly interesting! "Momentum Contrast for Unsupervised Visual Representation Learning" i definitely think contrastive learning is _the_ approach for unsupervised representation learning..
3 replies, 155 likes

Dec 09 2019 Aravind Srinivas

Some exciting *new* results in self-supervised learning on ImageNet: 71.5% top-1 with a linear classifier, 5x data-efficiency from pre-training (76% top-1 with 80% fewer samples per class on ImageNet), 76.6 mAP on PASCAL VOC-07 (> supervised's 74.7)
1 replies, 146 likes

Nov 19 2019 Jigar Doshi

Using contrastive learning (think Siamese nets) with an interesting trick of momentum-based updates between the two models leads to 'very good' representations for downstream supervised tasks. By Kaiming He et al.
0 replies, 33 likes

Nov 25 2019 Keisuke Fujimoto

I implemented "Momentum Contrast for Unsupervised Visual Representation Learning".
0 replies, 19 likes

Nov 18 2019 Daisuke Okanohara

MoCo improves unsupervised visual representation learning with a contrastive loss, surpassing supervised pretraining, via: 1) sampling negative examples from a large queue of previously encoded examples; 2) a momentum update of the key encoder from the query encoder.
0 replies, 13 likes
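The two mechanisms in the summary above can be sketched roughly as follows — a toy NumPy illustration, not the authors' code: the encoders are reduced to single weight matrices and all shapes are invented for the example, but the FIFO queue of negatives and the momentum update `W_k ← m·W_k + (1−m)·W_q` follow the paper's description.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, queue_size, batch, m = 8, 16, 4, 0.999   # illustrative sizes, not the paper's

W_query = rng.standard_normal((dim, dim))      # query encoder, trained by backprop
W_key = W_query.copy()                         # key encoder, a momentum copy
queue = rng.standard_normal((queue_size, dim)) # negatives encoded in past steps

def momentum_update(W_key, W_query, m):
    # Key encoder slowly tracks the query encoder instead of sharing gradients.
    return m * W_key + (1 - m) * W_query

def enqueue(queue, keys):
    # FIFO: the newest keys push out the oldest entries, so the dictionary
    # size is fixed by the queue length, not by the batch size.
    return np.concatenate([queue[len(keys):], keys], axis=0)

# One simulated "training step":
x = rng.standard_normal((batch, dim))          # a batch of (augmented) inputs
k = x @ W_key.T                                # encode the current keys
W_key = momentum_update(W_key, W_query, m)     # slow update of key encoder
queue = enqueue(queue, k)                      # current keys become future negatives
```

This also makes Perone's point below concrete: the queue length, not the batch size, sets how many negatives each query is contrasted against.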

Nov 14 2019 Evan Racah

"You either die a positive pair or live long enough to see yourself sampled from the queue as a negative pair" TLDR of "Momentum Contrast for Unsupervised Visual Representation Learning" by Kaiming He, et al.
0 replies, 8 likes

Dec 10 2019 Alf @ 𝑣ʳIPS

The revolution, which is unsupervised, has started! Join us! 👊🏻
1 replies, 8 likes

Nov 17 2019 arXiv CS-CV

Momentum Contrast for Unsupervised Visual Representation Learning
0 replies, 6 likes

Nov 16 2019 Christian S. Perone

What a clever mechanism for decoupling the dict size from the batch size.
0 replies, 5 likes

Nov 27 2019 akira

Unsupervised representation learning with MoCo, which works like metric learning with a dictionary: the key network is gradually updated to move closer to the query network. It can improve object detection accuracy when used as pre-training.
0 replies, 1 likes

Nov 15 2019 andrea panizza

“MoCo can outperform its supervised pre-training counterpart [..]. This suggests that the gap between unsupervised and supervised representation learning has been largely closed in many vision tasks”. Amazing! How come #ComputerVision Twitter is not retweeting this more?
0 replies, 1 likes

Nov 14 2019 Brundage Bot

Momentum Contrast for Unsupervised Visual Representation Learning. Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick
1 replies, 0 likes