
Momentum Contrast for Unsupervised Visual Representation Learning

Comments

Nov 16 2019 Olivier Grisel

Very impressive results: Momentum Contrast for Unsupervised Visual Representation Learning by Kaiming He et al. https://arxiv.org/abs/1911.05722 Better than supervised ImageNet pre-training for transfer learning on object detection (e.g. COCO).
1 reply, 294 likes


Nov 17 2019 mat kelcey

A new paper from Kaiming He is always exciting & this one is particularly interesting! "Momentum Contrast for Unsupervised Visual Representation Learning" https://arxiv.org/abs/1911.05722 I definitely think contrastive learning is _the_ approach for unsupervised representation learning...
3 replies, 155 likes


Nov 19 2019 Jigar Doshi

Using contrastive learning (think Siamese nets) with an interesting trick, a momentum-based update of one of the (two) models, leads to 'very good' representations for downstream supervised tasks. By Kaiming He et al. https://arxiv.org/pdf/1911.05722.pdf
0 replies, 33 likes
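
For readers who want the mechanics behind this tweet: MoCo's contrastive objective is the InfoNCE loss from the paper, computed between a query and one positive key plus a queue of negative keys. A minimal PyTorch sketch, loosely following the paper's pseudocode (variable names are illustrative, not the authors' reference code):

    import torch
    import torch.nn.functional as F

    def info_nce_loss(q, k_pos, queue, temperature=0.07):
        # q:      (N, C) query features, assumed L2-normalized
        # k_pos:  (N, C) positive keys (encoded second views of the same images)
        # queue:  (C, K) dictionary of negative keys from earlier mini-batches
        l_pos = torch.einsum("nc,nc->n", q, k_pos).unsqueeze(-1)  # (N, 1) positive logits
        l_neg = torch.einsum("nc,ck->nk", q, queue)               # (N, K) negative logits
        logits = torch.cat([l_pos, l_neg], dim=1) / temperature   # (N, 1+K)
        # The positive key sits at index 0 of every row, so the target class is 0.
        labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
        return F.cross_entropy(logits, labels)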


Nov 25 2019 Keisuke Fujimoto

I implemented "Momentum Contrast for Unsupervised Visual Representation Learning". https://arxiv.org/abs/1911.05722 https://github.com/peisuke/MomentumContrast.pytorch
0 replies, 16 likes


Nov 18 2019 Daisuke Okanohara

MoCo improves unsupervised visual representation learning with a contrastive loss, surpassing supervised pretraining: 1) negative examples are sampled from a large queue of previously encoded examples; 2) the key encoder is updated as a momentum (moving) average of the query encoder. https://arxiv.org/abs/1911.05722
0 replies, 13 likes
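
Both mechanisms fit in a few lines. A sketch of the per-step bookkeeping, again loosely following the paper's pseudocode (assumes the queue length K is a multiple of the batch size, as in the official implementation):

    import torch

    @torch.no_grad()
    def momentum_update(encoder_q, encoder_k, m=0.999):
        # The key encoder's parameters track the query encoder's as an
        # exponential moving average; gradients flow only through the query encoder.
        for p_q, p_k in zip(encoder_q.parameters(), encoder_k.parameters()):
            p_k.data.mul_(m).add_(p_q.data, alpha=1.0 - m)

    @torch.no_grad()
    def dequeue_and_enqueue(queue, queue_ptr, keys):
        # queue:     (C, K) negatives; the newest N keys overwrite the oldest N.
        # queue_ptr: 1-element long tensor holding the current write position.
        n = keys.size(0)
        ptr = int(queue_ptr)
        queue[:, ptr:ptr + n] = keys.t()
        queue_ptr[0] = (ptr + n) % queue.size(1)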


Nov 14 2019 Evan Racah

"You either die a positive pair or live long enough to see yourself sampled from the queue as a negative pair" TLDR of "Momentum Contrast for Unsupervised Visual Representation Learning" by Kaiming He, et al. https://arxiv.org/abs/1911.05722
0 replies, 8 likes


Nov 17 2019 arXiv CS-CV

Momentum Contrast for Unsupervised Visual Representation Learning http://arxiv.org/abs/1911.05722
0 replies, 6 likes


Nov 16 2019 Christian S. Perone

What a clever mechanism for decoupling the dict size from the batch size.
0 replies, 5 likes
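
The decoupling is literal: the queue is allocated once, with a length K chosen independently of the mini-batch size N (the paper pairs K = 65536 with N = 256). A hypothetical initialization:

    import torch
    import torch.nn.functional as F

    K, N, C = 65536, 256, 128                        # dictionary size, batch size, feature dim (paper's values)
    queue = F.normalize(torch.randn(C, K), dim=0)    # persists across iterations; each column is one key
    queue_ptr = torch.zeros(1, dtype=torch.long)     # FIFO write pointer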


Nov 15 2019 andrea panizza

“MoCo can outperform its supervised pre-training counterpart [..]. This suggests that the gap between unsupervised and supervised representation learning has been largely closed in many vision tasks”. Amazing! How come #ComputerVision Twitter is not retweeting this more?
0 replies, 1 like


Nov 27 2019 akira

https://arxiv.org/abs/1911.05722 MoCo, an unsupervised representation learning method that behaves like metric learning against a dictionary: the key network is gradually updated to stay close to the query network. Used as pre-training, it improves object detection accuracy.
0 replies, 1 like
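
The "gradual update" here is the paper's momentum rule: with query-encoder parameters θ_q and key-encoder parameters θ_k, each step sets

    θ_k ← m·θ_k + (1 − m)·θ_q,   with m = 0.999

so the key encoder evolves slowly and the keys already sitting in the queue stay consistent with newly enqueued ones.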


Nov 14 2019 Brundage Bot

Momentum Contrast for Unsupervised Visual Representation Learning. Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick http://arxiv.org/abs/1911.05722
1 reply, 0 likes

