
A Simple Framework for Contrastive Learning of Visual Representations

Comments

Geoffrey Hinton: Unsupervised learning of representations is beginning to work quite well without requiring reconstruction.

14 replies, 2157 likes


Ting Chen: Introducing SimCLR: a Simple framework for Contrastive Learning of Representations. SimCLR advances previous SOTA in self-supervised and semi-supervised learning on ImageNet by 7-10% (see next). https://arxiv.org/abs/2002.05709 Joint work with @skornblith @mo_norouzi @geoffreyhinton.

17 replies, 1063 likes


Oriol Vinyals: Rapid unsupervised learning progress thanks to contrastive losses, approaching supervised learning!
- 40% Multitask SSL https://arxiv.org/abs/1708.07860 (2017)
- 50% CPC https://arxiv.org/abs/1807.03748 (2018)
- 70% AMDIM/MoCo/CPCv2/etc. (2019)
- 76.5% SimCLR https://arxiv.org/abs/2002.05709 (2020, so far)

5 replies, 683 likes


Jeff Dean: Some large advances in the state of the art in ImageNet accuracy for self- and semi-supervised learning from @GoogleAI researchers @tingchenai, @skornblith, @mo_norouzi and @geoffreyhinton. Nice!

5 replies, 307 likes


Alfredo Canziani: Wowow! 😍❤️ The unsupervised revolution is taking off! 🤩🥳
- He (Nov 2019): MoCo
- Hénaff (Dec 2019): CPC
- Misra (Dec 2019): PIRL
- Sohn (Jan 2020): FixMatch
- Chen (Feb 2020): SimCLR

3 replies, 298 likes


hardmaru: A Keras implementation of “A Simple Framework for Contrastive Learning of Visual Representations” (SimCLR) https://github.com/mwdhont/SimCLRv1-keras-tensorflow https://arxiv.org/abs/2002.05709

1 reply, 235 likes


Simon Kornblith: We outperform previous methods for self-supervised learning from images by a substantial margin, with very few tricks.

2 replies, 163 likes


Mohammad Norouzi: SimCLR is a simple and clear way to learn visual representations without class labels! When SimCLR representations are fine-tuned on 1% of the ImageNet labels, they achieve 85.8% Top-5 accuracy, outperforming AlexNet with 100x fewer labels. http://arxiv.org/abs/2002.05709

1 reply, 101 likes
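
As a rough illustration of the semi-supervised protocol behind that 85.8% figure: take the pretrained encoder, attach a classifier, and fine-tune the whole network on the small labeled subset. A minimal PyTorch sketch, where `pretrained_encoder` (2048-d output) and `one_percent_loader` are hypothetical stand-ins, not objects from the paper's code:

```python
# Hypothetical sketch: fine-tune a SimCLR-pretrained encoder on a 1% labeled
# subset of ImageNet. `pretrained_encoder` and `one_percent_loader` are
# stand-ins for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(pretrained_encoder, nn.Linear(2048, 1000))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

for images, labels in one_percent_loader:
    loss = F.cross_entropy(model(images), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```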


David Page: Simple setup + attention to detail -> SOTA self-supervised reps!
- LARS -> large batches -> no need for a memory bank of negative examples
- Random crops + color augmentation (to prevent histogram cheating) -> no need for a special architecture
- Projection head for the contrastive loss -> hidden reps preserve info

1 reply, 89 likes
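
A rough torchvision sketch of the random-crop + color-distortion recipe mentioned above; the jitter strengths and probabilities here are assumptions, not values verified against the paper's appendix:

```python
# Illustrative SimCLR-style augmentation pipeline (assumed parameters).
from torchvision import transforms

simclr_augment = transforms.Compose([
    transforms.RandomResizedCrop(224),       # random crop, resized to 224x224
    transforms.RandomHorizontalFlip(),
    transforms.RandomApply(
        [transforms.ColorJitter(0.8, 0.8, 0.8, 0.2)],  # brightness/contrast/saturation/hue
        p=0.8,
    ),
    transforms.RandomGrayscale(p=0.2),
    transforms.ToTensor(),
])

# Two independent draws produce the two "views" of the same image:
# view1, view2 = simclr_augment(img), simclr_augment(img)
```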


Aravind Srinivas: Self-supervised learning explosion 💥

1 reply, 60 likes


Shane Gu 顾世翔: The crux of self-supervised (unsupervised) learning lies in the art of defining auxiliary prediction tasks. BERT, SimCLR, etc. revolutionized SOTA results by solving diverse predictive tasks generated automatically. https://arxiv.org/abs/2002.05709

0 replies, 38 likes


Theodore Galanos: @Thom_Wolf I'm not sure this pertains only to CV (it doesn't, really), but the self-supervised 'revolution' has been fascinating to watch. Too many papers to list here, but some highlights are: https://arxiv.org/abs/1911.05722, https://arxiv.org/abs/2002.05709, https://arxiv.org/abs/2004.11362, https://arxiv.org/abs/2006.10029

0 replies, 27 likes


Curt Langlotz: An ingenious self-supervised learning method from @geoffreyhinton's lab matches SUPERVISED learning performance on ImageNet. It learns representations by maximizing agreement between differently augmented views of the same image. https://arxiv.org/abs/2002.05709 HT @yuhaozhangx

0 replies, 13 likes
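
Concretely, "maximizing agreement" between the two augmented views is implemented as a contrastive (NT-Xent) loss over the batch. A minimal PyTorch sketch for illustration only; the authors' reference implementation is in TensorFlow:

```python
# Minimal NT-Xent ("normalized temperature-scaled cross entropy") sketch.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """z1, z2: (N, d) projections of two augmented views of the same N images."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, d), unit vectors
    sim = z @ z.t() / temperature                       # scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))                   # a view is not its own positive
    # Row i's positive is the other view of the same image: i+N (or i-N).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)
```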


Shane Gu 顾世翔: A nice summary on self-supervised learning https://amitness.com/2020/02/illustrated-self-supervised-learning/ In RL, a major example is relabeling: HER, TDM, LfP, Learning to act by prediction, etc. Relabeling adds substantially more learning signals (i.e. cherries @ylecun) to the classic RL setting https://arxiv.org/abs/1802.09081

0 replies, 11 likes


Daisuke Okanohara: For better contrastive learning of visual representations, we should use:
1) composition of data augmentations (esp. crop and color distortion),
2) a nonlinear projection head before the contrastive loss,
3) a cross-entropy loss with adjustable temperature. https://arxiv.org/abs/2002.05709

0 replies, 9 likes
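
The "nonlinear projection head" in point 2 is just a small MLP between the encoder output and the loss. A minimal PyTorch sketch; the 2048 -> 2048 -> 128 sizes assume a ResNet-50 encoder and should be treated as illustrative defaults:

```python
# Illustrative 2-layer projection head g(.), applied to encoder output h
# before the contrastive loss (assumed dimensions for a ResNet-50 setup).
import torch.nn as nn

projection_head = nn.Sequential(
    nn.Linear(2048, 2048),
    nn.ReLU(),
    nn.Linear(2048, 128),  # z = g(h); the loss is computed on z, not on h
)
```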


Pierre Richemond: It’s confirmed: the revolution will be self-supervised.

1 reply, 8 likes


Claudio M: Unsupervised representations are here to stay! Love the transfer learning analysis in this work!

0 replies, 7 likes


reza mahmoudi: New paper by Geoffrey Hinton: A Simple Framework for Contrastive Learning of Visual Representations. Ting Chen, @mo_norouzi, @geoffreyhinton. https://arxiv.org/abs/2002.05709 #DeepLearning #MachineLearning #ArtificialIntelligence #AI

0 replies, 4 likes


🏁Euge🏴‍☠️: Unsupervised pre-training with some nifty augmentation tricks that make it work wonders: https://arxiv.org/abs/2002.05709

0 replies, 2 likes


hardmaru: Original tweet thread summary of the SimCLR paper: https://twitter.com/tingchenai/status/1228337240708874241

0 replies, 2 likes


arXiv CS-CV: A Simple Framework for Contrastive Learning of Visual Representations http://arxiv.org/abs/2002.05709

0 replies, 1 likes


@reiver ⊼ (Charles Iliya Krempeaux): 《A Simple Framework for Contrastive Learning of Visual Representations》 by Ting Chen, Simon Kornblith, Mohammad Norouzi, Geoffrey Hinton https://arxiv.org/abs/2002.05709 (machine learning)

0 replies, 1 likes


Vivek Natarajan: This is really exciting! Vision's BERT moment is closer than ever.

0 replies, 1 likes


akira: https://arxiv.org/pdf/2002.05709.pdf Work from G. Hinton's group on unsupervised representation learning. They achieve scores competitive with supervised learning via three strategies: strong data augmentation, a larger network structure, and taking the loss after an additional nonlinear transformation.

0 replies, 1 likes


Jim Bohnslav: A simple framework for contrastive learning of visual representations: https://arxiv.org/abs/2002.05709 ~70% top-1 on ImageNet with a ResNet-50, only using linear classification on top of learned self-supervised representations!

1 reply, 0 likes
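
The "linear classification on top" protocol amounts to freezing the pretrained encoder and training only a linear layer on its features. A hedged PyTorch sketch, where `encoder` is a hypothetical pretrained ResNet-50 (minus its fc layer) returning 2048-d features:

```python
# Illustrative linear-evaluation loop; `encoder` is a hypothetical
# pretrained feature extractor, not an object from the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

for p in encoder.parameters():
    p.requires_grad = False                 # keep self-supervised features fixed

linear_probe = nn.Linear(2048, 1000)        # 1000 ImageNet classes
optimizer = torch.optim.SGD(linear_probe.parameters(), lr=0.1, momentum=0.9)

def eval_step(images, labels):
    with torch.no_grad():
        h = encoder(images)                 # frozen representation
    loss = F.cross_entropy(linear_probe(h), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```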


Brundage Bot: A Simple Framework for Contrastive Learning of Visual Representations. Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton http://arxiv.org/abs/2002.05709

1 reply, 0 likes

