
Momentum Contrast for Unsupervised Visual Representation Learning

Comments

Dec 10 2019 Yann LeCun

There are now 3 papers that successfully use Self-Supervised Learning for visual feature learning:
MoCo: https://arxiv.org/abs/1911.05722
PIRL: https://arxiv.org/abs/1912.01991
And this, below.
All three use some form of Siamese net.
3 replies, 1093 likes


Dec 09 2019 Aäron van den Oord

Unsupervised pre-training now outperforms supervised learning on ImageNet for any data regime (see figure) and also for transfer learning to Pascal VOC object detection https://arxiv.org/abs/1905.09272v2 https://t.co/cciL5Db73x
4 replies, 778 likes


Nov 16 2019 Olivier Grisel

Very impressive results: Momentum Contrast for Unsupervised Visual Representation Learning by Kaiming He et al. https://arxiv.org/abs/1911.05722 Better than supervised ImageNet pre-training for transfer learning on object detection (e.g. COCO). https://t.co/ymdh7f6pGJ
1 reply, 295 likes


Nov 17 2019 mat kelcey

a new paper from kaiming he is always exciting & this one is particularly interesting! "Momentum Contrast for Unsupervised Visual Representation Learning" https://arxiv.org/abs/1911.05722 i definitely think contrastive learning is _the_ approach for unsupervised representation learning..
3 replies, 155 likes


Dec 09 2019 Aravind Srinivas

Some exciting *new* results in self-supervised learning on ImageNet: 71.5% top-1 with a linear classifier, 5x data-efficiency from pre-training (76% top-1 with 80% fewer samples per class on ImageNet), 76.6 mAP on PASCAL VOC-07 (> supervised's 74.7) https://arxiv.org/abs/1905.09272 https://t.co/N79Ro4QuyO
1 reply, 146 likes


Nov 19 2019 Jigar Doshi

Using contrastive learning (think Siamese nets) with an interesting trick, a momentum-based update of one of the (two) models, leads to 'very good' representations for downstream supervised tasks. By Kaiming He et al. https://arxiv.org/pdf/1911.05722.pdf https://t.co/Xr8xCaFrgd
0 replies, 33 likes
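
For readers curious how that momentum trick looks in code, here is a minimal PyTorch-style sketch. The tiny encoder and the momentum value are illustrative assumptions; only the update rule itself follows the paper.

import copy
import torch
import torch.nn as nn

# The query encoder is trained by backprop; the key encoder is a momentum
# (exponential moving average) copy of it and receives no gradients.
query_encoder = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 8))
key_encoder = copy.deepcopy(query_encoder)
for p in key_encoder.parameters():
    p.requires_grad = False

@torch.no_grad()
def momentum_update(m=0.999):
    # theta_k <- m * theta_k + (1 - m) * theta_q
    for p_k, p_q in zip(key_encoder.parameters(), query_encoder.parameters()):
        p_k.mul_(m).add_(p_q, alpha=1 - m)

Calling momentum_update() after each optimizer step keeps the key encoder a slowly evolving copy of the query encoder, which is what keeps the large dictionary of keys consistent enough to use as negatives.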


Nov 25 2019 Keisuke Fujimoto

I implemented "Momentum Contrast for Unsupervised Visual Representation Learning". https://arxiv.org/abs/1911.05722 https://github.com/peisuke/MomentumContrast.pytorch
0 replies, 19 likes


Nov 18 2019 Daisuke Okanohara

MoCo improves unsupervised visual representation learning with a contrastive loss, surpassing supervised pretraining: 1) negative examples are sampled from a large queue of previously encoded examples; 2) the key encoder is updated as a momentum (moving-average) copy of the query encoder. https://arxiv.org/abs/1911.05722
0 replies, 13 likes
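
The two ingredients above correspond to Algorithm 1 in the paper (pseudocode in a PyTorch-like style). Here is a minimal sketch of the contrastive (InfoNCE) step with a queue of negatives; the sizes below are illustrative assumptions (the paper uses a much larger queue, e.g. 65536, with temperature 0.07).

import torch
import torch.nn.functional as F

batch, dim, queue_size, tau = 4, 128, 1024, 0.07

q = F.normalize(torch.randn(batch, dim), dim=1)           # query features
k = F.normalize(torch.randn(batch, dim), dim=1)           # key features (no grad in practice)
queue = F.normalize(torch.randn(queue_size, dim), dim=1)  # past keys act as negatives

l_pos = (q * k).sum(dim=1, keepdim=True)  # (batch, 1): each query vs. its own key
l_neg = q @ queue.t()                     # (batch, queue_size): query vs. queued keys
logits = torch.cat([l_pos, l_neg], dim=1) / tau
labels = torch.zeros(batch, dtype=torch.long)  # the positive sits at index 0
loss = F.cross_entropy(logits, labels)

# FIFO update: enqueue the newest keys, dequeue the oldest batch. The queue
# length is a free hyperparameter, so the dictionary size is decoupled from
# the batch size (the point several commenters below highlight).
queue = torch.cat([queue[batch:], k.detach()], dim=0)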


Nov 14 2019 Evan Racah

"You either die a positive pair or live long enough to see yourself sampled from the queue as a negative pair" TLDR of "Momentum Contrast for Unsupervised Visual Representation Learning" by Kaiming He, et al. https://arxiv.org/abs/1911.05722
0 replies, 8 likes


Dec 10 2019 Alf @ NeurIPS

The revolution, which is unsupervised, has started! Join us! 👊🏻
1 reply, 8 likes


Nov 17 2019 arXiv CS-CV

Momentum Contrast for Unsupervised Visual Representation Learning http://arxiv.org/abs/1911.05722
0 replies, 6 likes


Nov 16 2019 Christian S. Perone

What a clever mechanism for decoupling the dict size from the batch size.
0 replies, 5 likes


Nov 27 2019 akira

https://arxiv.org/abs/1911.05722 Unsupervised representation learning with MoCo, which works like metric learning with a dictionary: the key network is gradually updated to stay close to the query network. Used as pre-training, it improves object detection accuracy. https://t.co/dgmttcTieK
0 replies, 1 likes


Nov 15 2019 andrea panizza

“MoCo can outperform its supervised pre-training counterpart [..]. This suggests that the gap between unsupervised and supervised representation learning has been largely closed in many vision tasks”. Amazing! How come #ComputerVision Twitter is not retweeting this more?
0 replies, 1 likes


Nov 14 2019 Brundage Bot

Momentum Contrast for Unsupervised Visual Representation Learning. Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick http://arxiv.org/abs/1911.05722
1 reply, 0 likes

