
A Simple Framework for Contrastive Learning of Visual Representations


Geoffrey Hinton: Unsupervised learning of representations is beginning to work quite well without requiring reconstruction.

14 replies, 2157 likes

Ting Chen: Introducing SimCLR: a Simple framework for Contrastive Learning of Representations. SimCLR advances previous SOTA in self-supervised and semi-supervised learning on ImageNet by 7-10% (see next). Joint work with @skornblith @mo_norouzi @geoffreyhinton.

17 replies, 1063 likes
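
In code, the framework Ting Chen describes reduces to an encoder f(·) producing a representation h, plus a small MLP projection head g(·) producing the vector z that the contrastive loss actually sees. A minimal PyTorch sketch (module and variable names are our own, not the authors'):

```python
import torch.nn as nn
import torchvision

class SimCLRNet(nn.Module):
    """Encoder f(.) -> representation h; projection head g(.) -> z for the loss."""
    def __init__(self, proj_dim=128):
        super().__init__()
        backbone = torchvision.models.resnet50()  # randomly initialized ResNet-50
        feat_dim = backbone.fc.in_features        # 2048 for ResNet-50
        backbone.fc = nn.Identity()               # strip the classifier; backbone outputs h
        self.encoder = backbone
        self.head = nn.Sequential(                # g: one-hidden-layer MLP
            nn.Linear(feat_dim, feat_dim),
            nn.ReLU(),
            nn.Linear(feat_dim, proj_dim),
        )

    def forward(self, x):
        h = self.encoder(x)  # kept for downstream tasks after pre-training
        z = self.head(h)     # used only by the contrastive objective
        return h, z
```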

Oriol Vinyals: Rapid unsupervised learning progress thanks to contrastive losses, approaching supervised learning!
- 40% Multitask SSL (2017)
- 50% CPC (2018)
- 70% AMDIM/MoCo/CPCv2/etc. (2019)
- 76.5% SimCLR (2020, so far)

5 replies, 683 likes

Jeff Dean: Some large advances in state of the art in ImageNet accuracy for self- and semi-supervised learning from @GoogleAI researchers @tingchenai, @skornblith, @mo_norouzi and @geoffreyhinton. Nice!

5 replies, 307 likes

Alfredo Canziani: Wowow! 😍❤️ The unsupervised revolution is taking off! 🤩🥳
- He (Nov 2019): MoCo
- Hénaff (Dec 2019): CPC
- Misra (Dec 2019): PIRL
- Sohn (Jan 2020): FixMatch
- Chen (Feb 2020): SimCLR

3 replies, 298 likes

hardmaru: A Keras implementation of “A Simple Framework for Contrastive Learning of Visual Representations” (SimCLR)

1 replies, 235 likes

Simon Kornblith: We outperform previous methods for self-supervised learning from images by a substantial margin, with very few tricks.

2 replies, 163 likes

Mohammad Norouzi: SimCLR is a simple and clear way to learn visual representations without class labels! When SimCLR representations are fine-tuned on 1% of the ImageNet labels, they achieve 85.8% Top-5 accuracy, outperforming AlexNet with 100x fewer labels.

1 replies, 101 likes

David Page: Simple setup + attention to detail -> SOTA self-supervised reps!
- LARS -> large batches -> no need for a memory bank of negative examples
- Random crops + color augmentation (to prevent histogram cheating) -> no need for a special architecture
- Projection head for the contrastive loss -> hidden representations preserve info

1 replies, 89 likes
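
The "random crops + color aug" ingredient above is a stochastic pipeline applied twice to each image to produce a positive pair. A torchvision sketch in that spirit; the strengths and probabilities below are assumptions that roughly follow the paper's defaults:

```python
import torchvision.transforms as T

# Two independent draws of this pipeline per image give the two views.
simclr_augment = T.Compose([
    T.RandomResizedCrop(224),                                   # random crop + resize
    T.RandomHorizontalFlip(),
    T.RandomApply([T.ColorJitter(0.8, 0.8, 0.8, 0.2)], p=0.8),  # color distortion
    T.RandomGrayscale(p=0.2),
    T.GaussianBlur(kernel_size=23, sigma=(0.1, 2.0)),           # ~10% of image size
    T.ToTensor(),
])
```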

Aravind Srinivas: Self-supervised learning explosion 💥

1 replies, 60 likes

Shane Gu 顾世翔: The crux of self-supervised (unsupervised) learning lies in the art of defining auxiliary prediction tasks. BERT, SimCLR, etc. revolutionized SOTA results by solving diverse predictive tasks generated automatically.

0 replies, 38 likes

Theodore Galanos: @Thom_Wolf I'm not sure this pertains only to CV (it doesn't, really), but the self-supervised 'revolution' has been fascinating to watch. Too many papers to list here, but some highlights are:

0 replies, 27 likes

Curt Langlotz: An ingenious self-supervised learning method from @geoffreyhinton lab matches SUPERVISED learning performance on ImageNet. It learns representations by maximizing agreement between differently augmented views of the same image. HT @yuhaozhangx

0 replies, 13 likes
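
"Maximizing agreement between differently augmented views" is, concretely, a softmax cross-entropy over cosine similarities: each view must identify its partner among the other 2N-1 examples in a batch of N images. A minimal sketch of that NT-Xent loss (the function name and batch layout are our own):

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """z1, z2: [N, d] projections of two augmented views of the same N images."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # [2N, d], unit-norm rows
    sim = z @ z.t() / temperature                       # [2N, 2N] scaled cosine sims
    sim.fill_diagonal_(float('-inf'))                   # never match an example to itself
    # The positive for row i sits at i + n (first half) or i - n (second half).
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```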

Shane Gu 顾世翔: A nice summary of self-supervised learning. In RL, a major example is relabeling: HER, TDM, LfP, learning to act by prediction, etc. Relabeling adds substantially more learning signal (i.e., the cherry, @ylecun) to the classic RL setting.

0 replies, 11 likes

Daisuke Okanohara: For better contrastive learning of visual representations, we should use:
1) composition of data augmentations (esp. crop and color distortion)
2) a nonlinear projection head before the contrastive loss
3) cross-entropy loss with an adjustable temperature

0 replies, 9 likes
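
Those three ingredients compose into a single training step: augment each image twice, encode, project, and apply the temperature-scaled loss to the projections. A sketch wiring together the pieces above; the optimizer here is a placeholder (the paper uses LARS with very large batches):

```python
import torch

model = SimCLRNet()
opt = torch.optim.SGD(model.parameters(), lr=0.3, momentum=0.9)  # placeholder, not LARS

def train_step(images):  # images: a list of PIL images
    x1 = torch.stack([simclr_augment(im) for im in images])  # view 1
    x2 = torch.stack([simclr_augment(im) for im in images])  # view 2
    _, z1 = model(x1)
    _, z2 = model(x2)
    loss = nt_xent_loss(z1, z2, temperature=0.5)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```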

Pierre Richemond: It's confirmed: the revolution will be self-supervised.

1 replies, 8 likes

Claudio M: Unsupervised representations are here to stay! Love the transfer learning analysis in this work!

0 replies, 7 likes

reza mahmoudi: New paper from Geoffrey Hinton's group: A Simple Framework for Contrastive Learning of Visual Representations. Ting Chen, @mo_norouzi, @geoffreyhinton. #DeepLearning #MachineLearning #ArtificialIntelligence #AI

0 replies, 4 likes

🏁Euge🏴‍☠️: Unsupervised pre-training with some nifty augmentation tricks that make it work wonders:

0 replies, 2 likes

hardmaru: Original tweet thread summary of the SimCLR paper:

0 replies, 2 likes

arXiv CS-CV: A Simple Framework for Contrastive Learning of Visual Representations

0 replies, 1 likes

@reiver ⊼ (Charles Iliya Krempeaux): 《A Simple Framework for Contrastive Learning of Visual Representations》 by Ting Chen, Simon Kornblith, Mohammad Norouzi, Geoffrey Hinton (machine learning)

0 replies, 1 likes

Vivek Natarajan: This is really exciting! Vision's BERT moment is closer than ever.

0 replies, 1 likes

akira: G. Hinton's work on unsupervised representation learning. They achieve scores competitive with supervised learning via three strategies: strong data augmentation, a larger network, and taking the loss after an additional nonlinear transformation.

0 replies, 1 likes

Jim Bohnslav: A simple framework for contrastive learning of visual representations: ~70% top-1 on ImageNet with a ResNet-50, using only linear classification on top of learned self-supervised representations!

1 replies, 0 likes

Brundage Bot: A Simple Framework for Contrastive Learning of Visual Representations. Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton

1 replies, 0 likes


Found on Feb 14 2020 at
