Data-Efficient Image Recognition with Contrastive Predictive Coding

Comments

May 23 2019 DeepMind

Deep learning has so far relied on massive amounts of supervision. We show that unsupervised representation learning with Contrastive Predictive Coding greatly improves data-efficiency: http://arxiv.org/abs/1905.09272 By @olivierhenaff @catamorphist @CarlDoersch @arkitus and @avdnoord https://t.co/YN0fJ0gZn1
6 replies, 803 likes


May 23 2019 Ali Eslami

Getting closer to the dream! A network that uses unlabelled images to boost performance when labels are scarce (new SOTA), and it's no worse than ResNet when labels are plentiful. Also: an unsupervised net + just a linear classifier on top outperforms the original AlexNet! https://arxiv.org/abs/1905.09272 https://t.co/G8NqoB98xp
5 replies, 402 likes


May 23 2019 Aäron van den Oord

Excited to share our latest results on Contrastive Predictive Coding!
- A linear classifier on CPC features yields 61% accuracy, outperforming the original AlexNet result with unsupervised learning.
- New state of the art in semi-supervised learning with 1% labels.
https://arxiv.org/abs/1905.09272 https://t.co/01pWjkTxsW
3 replies, 369 likes
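
The contrastive objective behind these numbers is InfoNCE: each "context" representation must pick out its matching patch encoding among negatives, here drawn from the rest of the batch. A minimal PyTorch sketch (the tensor names, shapes, and use of in-batch negatives are illustrative assumptions, not the paper's code):

```python
import torch
import torch.nn.functional as F

def info_nce_loss(context, targets, temperature=1.0):
    """Contrastive (InfoNCE) loss: each context vector must identify
    its matching target among all targets in the batch.

    context: (N, D) predictions from the context network
    targets: (N, D) encodings of the true patches; row i matches row i
    """
    logits = context @ targets.t() / temperature  # (N, N) similarity scores
    labels = torch.arange(context.size(0), device=context.device)  # positives lie on the diagonal
    return F.cross_entropy(logits, labels)
```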


Dec 09 2019 Aravind Srinivas

Some exciting *new* results in self-supervised learning on ImageNet: 71.5% top-1 with a linear classifier, 5x data-efficiency from pre-training (76% top-1 with 80% fewer samples per class on ImageNet), 76.6 mAP on PASCAL VOC-07 (> supervised's 74.7) https://arxiv.org/abs/1905.09272 https://t.co/N79Ro4QuyO
2 replies, 258 likes


Dec 09 2019 olivierhenaff

Very happy to share our latest unsupervised representation learning work! In addition to SOTA linear classification, we beat supervised networks on ImageNet with 2-5x fewer labels and transfer to PASCAL detection better than supervised pre-training. http://arxiv.org/abs/1905.09272 https://t.co/c0wNEcKoKZ
1 reply, 108 likes


Dec 09 2019 Ali Eslami

Exciting updated results for self-supervised representation learning on ImageNet:
- 71.5% top-1 with a *linear* classifier
- 77.9% top-5 with only *1%* of the labels
- 76.6 mAP when transferred to PASCAL VOC-07 (better than *fully-supervised's* 74.7 mAP)
https://arxiv.org/abs/1905.09272 https://t.co/uq514NiI9B
1 reply, 74 likes
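
The "linear classifier on frozen features" protocol quoted in several of these tweets trains only a linear layer on top of an encoder whose weights never change. A minimal sketch, assuming a generic PyTorch encoder and dataloader (every name here is a placeholder, not the paper's pipeline):

```python
import torch
import torch.nn as nn

def linear_probe(encoder, train_loader, feature_dim, num_classes, epochs=10):
    """Fit a linear classifier on frozen encoder features."""
    encoder.eval()  # encoder stays fixed; only the probe is trained
    probe = nn.Linear(feature_dim, num_classes)
    opt = torch.optim.SGD(probe.parameters(), lr=0.1, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in train_loader:
            with torch.no_grad():
                feats = encoder(images)  # no gradients flow into the encoder
            loss = loss_fn(probe(feats), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return probe
```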


May 24 2019 Kriegeskorte Lab

unsupervised learning is the missing cake under the icing of supervision. zhuang, zhai & @dyamins describe a method for deep learning of locally clustered embeddings (https://arxiv.org/pdf/1903.12355.pdf), and hénaff et al. (with @avdnoord) use spatial contrastive predictive coding...
1 reply, 15 likes


Dec 09 2019 Jeffrey De Fauw

Beating the previous state of the art in self-supervised learning for ImageNet by almost 3% absolute with fewer parameters (71.5% vs 68.6% top-1). Extensive results for data-efficient learning on both ImageNet and Pascal VOC in the updated https://arxiv.org/abs/1905.09272 https://t.co/YMUxofftG1
0 replies, 8 likes


May 24 2019 Daisuke Okanohara

They improve contrastive predictive coding by using 1) a larger network, 2) bidirectional prediction, and 3) data augmentation (color dropping, random flips, jitter), achieving a new SOTA in semi-supervised learning; even frozen features are competitive with fine-tuning. https://arxiv.org/abs/1905.09272
0 replies, 8 likes
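
The augmentations named above are straightforward to approximate with torchvision transforms; the magnitudes and probabilities below are placeholder guesses, not the paper's settings:

```python
from torchvision import transforms

# Illustrative pipeline: random flip, color jitter, and "color dropping"
# (random grayscale). Parameter values are assumptions for the sketch.
augment = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4, hue=0.1),
    transforms.RandomGrayscale(p=0.25),
    transforms.ToTensor(),
])
```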


May 27 2019 Heuritech Research

"Data-Efficient Image Recognition with Contrastive Predictive Coding": learns to distinguish different patches of the current image among negatives patches. Amazing performance that scale well with the number of labeled examples! #DeepLearning http://arxiv.org/abs/1905.09272 https://t.co/OX0EAWf0zn
0 replies, 6 likes
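
The patch-level setup described here starts by cutting each image into a grid of overlapping patches, which are then encoded and contrasted against negatives. A minimal sketch of that extraction step, assuming a 50%-overlap grid (the patch size and stride are illustrative, not the paper's exact values):

```python
import torch

def extract_patches(images, patch_size=64, stride=32):
    """Cut (N, C, H, W) images into a grid of overlapping patches.

    With stride = patch_size // 2, adjacent patches overlap by 50%.
    Returns (N, rows * cols, C, patch_size, patch_size).
    """
    n, c, _, _ = images.shape
    patches = images.unfold(2, patch_size, stride).unfold(3, patch_size, stride)
    rows, cols = patches.shape[2], patches.shape[3]  # grid dimensions
    return patches.permute(0, 2, 3, 1, 4, 5).reshape(n, rows * cols, c, patch_size, patch_size)
```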


Dec 09 2019 Carl Doersch

The self-supervised dream is slowly coming true
0 replies, 5 likes


May 31 2019 l̴o̴o̴p̴u̴l̴e̴a̴s̴a̴

Wow, @DeepMindAI breaks records in image classification, learning classes from only 13 examples each! Very data efficient. Getting very close to human babies. https://arxiv.org/pdf/1905.09272.pdf
1 reply, 2 likes


May 31 2019 Kirill Dubovikov

DeepMind's self-supervised CNN achieves AlexNet's accuracy with only 13 images per class https://arxiv.org/pdf/1905.09272.pdf
0 replies, 2 likes


May 23 2019 Amr Farahat

This looks great!
0 replies, 1 like


May 23 2019 TAMART

The unsupervised revolution has begun?
0 replies, 1 like


Jan 01 2020 Evgenii Zheltonozhskii

@zacharylipton https://arxiv.org/abs/1905.09272 by @avdnoord -- I'm really fascinated by the progress of self-supervised learning https://arxiv.org/abs/1910.13038 by @hardmaru https://arxiv.org/abs/1906.00207 even though I have not finished reading it yet
0 replies, 1 like


Jun 03 2019 Andrew Cutler

Pretty cool paper. You need less training data if you encourage representations of one part of an image to contain information about (unseen) other parts of the image. https://arxiv.org/pdf/1905.09272.pdf
1 reply, 0 likes

