
Momentum Contrast for Unsupervised Visual Representation Learning

Comments

Yann LeCun: There are now 3 papers that successfully use Self-Supervised Learning for visual feature learning: MoCo: https://arxiv.org/abs/1911.05722 PIRL: https://arxiv.org/abs/1912.01991 And this, below. All three use some form of Siamese net.

4 replies, 1087 likes


Olivier Grisel: Very impressive results: Momentum Contrast for Unsupervised Visual Representation Learning by Kaiming He et al. https://arxiv.org/abs/1911.05722 Better than supervised ImageNet pre-training for transfer learning on object detection (eg COCO). https://t.co/ymdh7f6pGJ

1 replies, 295 likes


mat kelcey: a new paper from kaiming he is always exciting & this one is particularly interesting! "Momentum Contrast for Unsupervised Visual Representation Learning" https://arxiv.org/abs/1911.05722 i definitely think contrastive learning is _the_ approach for unsupervised representation learning..

3 replies, 155 likes


Jigar Doshi: Using contrastive learning (think Siamese nets) with an interesting trick, a momentum-based update of the (two) models, leads to 'very good' representations for downstream supervised tasks. By Kaiming He et al. https://arxiv.org/pdf/1911.05722.pdf https://t.co/Xr8xCaFrgd

0 replies, 33 likes
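The contrastive loss these comments refer to is the InfoNCE form used in the paper: a softmax cross-entropy over one positive key and many negative keys. Below is a minimal NumPy sketch, not the authors' implementation; the names (`query`, `pos_key`, `neg_keys`) and the temperature default `tau=0.07` are illustrative.

```python
import numpy as np

def info_nce_loss(query, pos_key, neg_keys, tau=0.07):
    """Contrastive (InfoNCE) loss for a single query.

    query:    (d,) L2-normalized query embedding
    pos_key:  (d,) L2-normalized positive key
    neg_keys: (K, d) L2-normalized negative keys
    """
    # Similarity logits: one positive followed by K negatives.
    l_pos = query @ pos_key              # scalar
    l_neg = neg_keys @ query             # (K,)
    logits = np.concatenate(([l_pos], l_neg)) / tau
    # Cross-entropy with the positive at index 0.
    logits -= logits.max()               # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum())
    return -log_probs[0]
```

The loss is near zero when the query is close to its positive and far from all negatives, and large when a negative key is the better match.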


Theodore Galanos: @Thom_Wolf I'm not sure this pertains only to CV, it doesn't really, but the self-supervised 'revolution' has been fascinating to watch. Too many papers to have here but some highlights are: https://arxiv.org/abs/1911.05722, https://arxiv.org/abs/2002.05709, https://arxiv.org/abs/2004.11362, https://arxiv.org/abs/2006.10029

0 replies, 27 likes


Keisuke Fujimoto: I implemented "Momentum Contrast for Unsupervised Visual Representation Learning". https://arxiv.org/abs/1911.05722 https://github.com/peisuke/MomentumContrast.pytorch

0 replies, 19 likes


Daisuke Okanohara: MoCo improves unsupervised visual representation learning using a contrastive loss, surpassing supervised pretraining. 1) Negative examples are sampled from a large queue of previously encoded examples; 2) the key encoder is updated as a momentum-moving average of the query encoder. https://arxiv.org/abs/1911.05722

0 replies, 13 likes
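The two mechanisms in this summary can be sketched in a few lines of NumPy. This is an illustrative toy, not the authors' code: the "encoders" are stand-in weight matrices, and the class and parameter names are invented for the example. The momentum update is theta_k <- m * theta_k + (1 - m) * theta_q, and the queue is a simple FIFO buffer of encoded keys.

```python
import numpy as np

class MoCoDictionary:
    """Toy sketch of MoCo's two mechanisms: a FIFO queue of encoded
    keys (the negatives) and a momentum-updated key encoder."""

    def __init__(self, dim, queue_size, momentum=0.999, seed=0):
        rng = np.random.default_rng(seed)
        # Stand-in linear "encoders": one weight matrix each.
        self.w_query = rng.standard_normal((dim, dim))
        self.w_key = self.w_query.copy()     # key encoder starts as a copy
        self.momentum = momentum
        self.queue = np.zeros((queue_size, dim))  # negatives live here
        self.ptr = 0
        self.size = queue_size

    def momentum_update(self):
        # Key encoder drifts slowly toward the query encoder:
        # theta_k <- m * theta_k + (1 - m) * theta_q
        m = self.momentum
        self.w_key = m * self.w_key + (1 - m) * self.w_query

    def enqueue(self, keys):
        # Overwrite the oldest entries with the newest batch (FIFO).
        n = len(keys)
        idx = (self.ptr + np.arange(n)) % self.size
        self.queue[idx] = keys
        self.ptr = (self.ptr + n) % self.size
```

Because the queue is filled across iterations, its size is independent of the batch size; that is the decoupling other comments in this thread praise.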


Evan Racah: "You either die a positive pair or live long enough to see yourself sampled from the queue as a negative pair" TLDR of "Momentum Contrast for Unsupervised Visual Representation Learning" by Kaiming He, et al. https://arxiv.org/abs/1911.05722

0 replies, 11 likes


Alf @ 𝑣ʳIPS: The revolution, which is unsupervised, has started! Join us! 👊🏻

1 replies, 8 likes


akira: https://arxiv.org/abs/1911.05722 MoCo, unsupervised representation learning that works like metric learning with a dictionary. The key network is gradually updated to move closer to the query network. Used as pre-training, it improves object detection accuracy. https://t.co/bshEFTMalm

0 replies, 7 likes


arXiv CS-CV: Momentum Contrast for Unsupervised Visual Representation Learning http://arxiv.org/abs/1911.05722

0 replies, 6 likes


Christian S. Perone: What a clever mechanism for decoupling the dict size from the batch size.

0 replies, 5 likes


Facebook AI: Tomorrow, we'll present MoCo at 9:15am (https://arxiv.org/pdf/1911.05722.pdf) MoCo similarly outperforms the standard ImageNet supervised pretraining on challenging benchmark tasks such as object detection.

0 replies, 5 likes


andrea panizza: “MoCo can outperform its supervised pre-training counterpart [..]. This suggests that the gap between unsupervised and supervised representation learning has been largely closed in many vision tasks”. Amazing! How come #ComputerVision Twitter is not retweeting this more?

0 replies, 1 likes




Brundage Bot: Momentum Contrast for Unsupervised Visual Representation Learning. Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick http://arxiv.org/abs/1911.05722

1 replies, 0 likes


Content

Found on Dec 10 2019 at https://arxiv.org/pdf/1911.05722.pdf

PDF content of a computer science paper: Momentum Contrast for Unsupervised Visual Representation Learning