Papers of the day

Learning Representations by Maximizing Mutual Information Across Views

Comments

Jul 09 2019 Devon Hjelm

Latest version of #AMDIM, a self-supervised method that gets 68% unsupervised ImageNet + linear probe classification, far outstripping prior and recent results by 7%+ with a fraction of the compute https://arxiv.org/abs/1906.00910 Code: https://github.com/Philip-Bachman/amdim-public by @philip_bachman and @wbuchw
2 replies, 242 likes
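
[Editor's note: for readers unfamiliar with the objective being discussed, here is a minimal sketch of an InfoNCE-style contrastive loss that maximizes a lower bound on the mutual information between features of two views of the same image. This is a simplification, not the exact AMDIM objective (which contrasts local features across multiple scales); the function and parameter names are placeholders.]

```python
# Minimal sketch (not the exact AMDIM loss): maximize MI between features
# of two augmented views via an InfoNCE-style contrastive objective.
import torch
import torch.nn.functional as F

def infonce_loss(feats_a, feats_b, temperature=0.1):
    """feats_a, feats_b: (batch, dim) features from two views of the same images."""
    a = F.normalize(feats_a, dim=1)
    b = F.normalize(feats_b, dim=1)
    logits = a @ b.t() / temperature                      # (batch, batch) similarities
    targets = torch.arange(a.size(0), device=a.device)    # positives on the diagonal
    return F.cross_entropy(logits, targets)
```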


Jul 15 2019 David Krueger

If you're in ML, you've probably heard about BigBiGAN because of the DeepMind PR machine. But you may not have heard about this paper by @philip_bachman et al. that came out 4 days later and crushes their results.
4 replies, 183 likes


Jun 04 2019 Devon Hjelm

Our work extending Deep InfoMax by maximizing MI across views is out! We achieve SOTA on several unsupervised + linear probe benchmarks, including impressive results on ImageNet with a fraction of the computation of competitors https://arxiv.org/abs/1906.00910
0 replies, 68 likes


Sep 03 2019 Devon Hjelm

Our paper on augmented multiscale DIM (AMDIM), the self-supervised model that beats prior SOTA on unsupervised ImageNet + supervised linear probe by 7%, was accepted as a poster at NeurIPS https://arxiv.org/abs/1906.00910 Led by @philip_bachman with Will Buchwalter
1 replies, 27 likes


Jul 17 2019 Daisuke Okanohara

They significantly improve self-supervised representation learning over local Deep InfoMax by using 1) independently augmented versions of each input, 2) multiple scales simultaneously, and 3) a powerful encoder with a controlled receptive field https://arxiv.org/abs/1906.00910
0 replies, 14 likes
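
[Editor's note: a rough illustration of point (1) above, where each input is independently augmented twice to form a positive pair. The specific transforms below are placeholder assumptions and may not match the paper's exact augmentation set.]

```python
# Hypothetical two-view augmentation pipeline: each image is augmented
# twice, independently, so the two results form a positive pair.
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomResizedCrop(128),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(0.4, 0.4, 0.4),
    transforms.ToTensor(),
])

def two_views(pil_image):
    """Return two independently augmented views of the same image."""
    return augment(pil_image), augment(pil_image)
```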


Jul 16 2019 Christian Szegedy

Amazing new self-supervised training method for visual models.
0 replies, 12 likes


Jul 26 2019 William Buchwalter

Pre-trained models for AMDIM (https://arxiv.org/abs/1906.00910) are now available: https://github.com/Philip-Bachman/amdim-public/blob/master/README.md#pre-trained-models
0 replies, 9 likes


Jul 28 2019 Adam Trischler

If you'd like to play with some powerful, pre-trained image representations (learned via self-supervised methods and computationally reasonable), check these out!
0 replies, 8 likes
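
[Editor's note: the "linear probe" evaluation these tweets refer to freezes the pre-trained encoder and trains only a linear classifier on its features. A minimal sketch follows; `encoder`, `feat_dim`, and `loader` are placeholder assumptions, not names from the AMDIM codebase.]

```python
# Sketch of the standard linear-probe protocol: the encoder is frozen
# and only a linear classifier on top of its features is trained.
import torch
import torch.nn as nn

def linear_probe(encoder, feat_dim, num_classes, loader, epochs=10, lr=1e-3):
    encoder.eval()                       # frozen: no gradient updates to the encoder
    probe = nn.Linear(feat_dim, num_classes)
    opt = torch.optim.Adam(probe.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in loader:
            with torch.no_grad():
                feats = encoder(images)  # features only; encoder stays fixed
            loss = loss_fn(probe(feats), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return probe
```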


Jul 10 2019 Ankesh Anand

Exciting progress in unsupervised representation learning! Seems like the pendulum has swung towards contrastive methods again.
1 replies, 7 likes


Jul 16 2019 IntuitionMachine

Self-supervised learning by maximizing mutual information between arbitrary features extracted from multiple views of a shared context. https://arxiv.org/abs/1906.00910 #deeplearning #selfsupervised
0 replies, 3 likes

