
EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks

Comments

May 29 2019 Quoc Le

EfficientNets: a family of more efficient & accurate image classification models. Found by architecture search and scaled up by one weird trick. Link: https://arxiv.org/abs/1905.11946 Github: https://bit.ly/30UojnC Blog: https://bit.ly/2JKY3qt https://t.co/RIwvhCBA8x
24 replies, 2123 likes


Jun 02 2019 hardmaru

The networks found in the EfficientNet paper and their pretrained weights have been implemented in @PyTorch three days after the paper was posted on http://arxiv.org https://github.com/lukemelas/EfficientNet-PyTorch https://twitter.com/quocleix/status/1133833673134862337
3 replies, 381 likes


May 30 2019 Jeff Dean

New work by Mingxing Tan and @quocleix of @GoogleAI on automatically designing much more efficient-and-highly-accurate computer vision models. This will enable more sophisticated uses of computer vision on mobile devices, et al. Graph below highlights cost v. accuracy tradeoff.
5 replies, 267 likes


Jul 30 2019 Quoc Le

For more context about EfficientNet, check out my earlier tweet: https://twitter.com/quocleix/status/1133833673134862337
0 replies, 48 likes


Jul 01 2019 Rachael Tatman

Time to pick the next @kaggle reading group paper! Your options: - XLNet: Generalized Autoregressive Pretraining for NLU https://arxiv.org/pdf/1906.08237.pdf - Defending Against Neural Fake News (Grover) https://arxiv.org/abs/1905.12616 - EfficientNet: Model Scaling for CNNs https://arxiv.org/abs/1905.11946
5 replies, 44 likes


Aug 11 2019 Carlo Lepelaars

@lavanyaai To be frank: 1. Use small batch sizes (https://arxiv.org/pdf/1804.07612.pdf) 2. ReLUs are ancient. Use ELU or GELU as activations, or Leaky ReLUs if inference time has to be fast. (https://arxiv.org/pdf/1511.07289.pdf) (https://arxiv.org/pdf/1606.08415.pdf) 3. EfficientNet is awesome! (https://arxiv.org/pdf/1905.11946.pdf)
2 replies, 31 likes
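The activations recommended above can be sketched in plain Python (the function names and the tanh-based GELU approximation are illustrative, not from the tweet):

```python
import math

def elu(x, alpha=1.0):
    # ELU (Clevert et al.): identity for x > 0, smooth exponential
    # saturation toward -alpha for x < 0
    return x if x > 0 else alpha * (math.exp(x) - 1.0)

def gelu(x):
    # tanh approximation of GELU (Hendrycks & Gimpel)
    return 0.5 * x * (1.0 + math.tanh(
        math.sqrt(2.0 / math.pi) * (x + 0.044715 * x ** 3)))

def leaky_relu(x, slope=0.01):
    # Leaky ReLU: a small linear slope for x < 0; no exp/tanh,
    # hence the cheaper inference the tweet alludes to
    return x if x > 0 else slope * x
```

All three behave like the identity for large positive inputs; they differ only in how they treat negative inputs, which is where the smoother ELU/GELU variants tend to help training.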


May 29 2019 Mingxing Tan

@karpathy @quocleix Hi Andrej, here you go (Figure 8 in arxiv paper: https://arxiv.org/pdf/1905.11946.pdf). https://t.co/YKzTnG3f5Z
1 replies, 31 likes


Jul 31 2019 Martin Görner

Here is a bit of context to understand this architecture: inverted residual blocks from MobileNetV2 are discussed here: https://towardsdatascience.com/mobilenetv2-inverted-residuals-and-linear-bottlenecks-8a4362f4ffd5 and...
2 replies, 30 likes


May 30 2019 DataScienceNigeria

WoW EfficientNets by @quocleix et al. of @GoogleAI! Awesome way to scale up CNNs in a more structured manner to achieve much better accuracy & efficiency. Uses a new compound model scaling method & leverages advances in #AutoML to improve NN scaling. Paper: https://arxiv.org/abs/1905.11946 https://t.co/FXUFQwUc6q
0 replies, 28 likes


May 29 2019 Andrew Davison

Impressive performance scaling of CNNs.
0 replies, 25 likes


Jun 11 2019 Joseph Paul Cohen

EfficientNet: 97.1% top-5 accuracy on ImageNet, while being 8.4x smaller and 6.1x faster on inference than the best existing ConvNet https://arxiv.org/abs/1905.11946
0 replies, 25 likes


May 29 2019 François Fleuret

Wow. TL;DR: Do architecture search to make a "small" base network, and generate larger versions by scaling depth, width (number of channels), and input resolution in proportion.
0 replies, 17 likes


May 30 2019 Aleksei Statkevich

Impressive results for automated #NeuralNetwork architecture search. Are we close to a point where custom human-designed networks become a thing of the past? #AI #ArtificialIntelligence #AutoML #NN #DeepLearning #DL #MachineLearning #ML
0 replies, 13 likes


May 30 2019 Hacker News

EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks https://arxiv.org/abs/1905.11946
1 replies, 12 likes


May 29 2019 Christian Szegedy

Amazing stuff!
1 replies, 9 likes


Jun 04 2019 BioDecoded

EfficientNet: Improving Accuracy and Efficiency through AutoML and Model Scaling | Google AI Blog https://ai.googleblog.com/2019/05/efficientnet-improving-accuracy-and.html https://arxiv.org/abs/1905.11946 #DeepLearning https://t.co/ifUPnNbfuA
0 replies, 6 likes


Jun 04 2019 Daisuke Okanohara

When scaling up CNN, depth, width, and resolution should be jointly adjusted, but its search space is too large. They propose to use one coefficient to uniformly scale all these parameters and also propose a new network (EfficientNet) on this scaling. https://arxiv.org/abs/1905.11946
0 replies, 5 likes
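The single-coefficient scaling described above can be sketched in a few lines. α, β, γ below are the values reported in the paper (chosen so that α·β²·γ² ≈ 2, roughly doubling FLOPs per unit of φ); the helper name and base dimensions are illustrative:

```python
# EfficientNet-style compound scaling: one coefficient phi jointly
# scales depth, width, and resolution instead of searching all three.
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15  # depth, width, resolution factors

def compound_scale(phi, base_depth, base_width, base_resolution):
    """Scale all three network dimensions with a single coefficient phi."""
    depth = round(base_depth * ALPHA ** phi)
    width = round(base_width * BETA ** phi)
    resolution = round(base_resolution * GAMMA ** phi)
    return depth, width, resolution
```

For example, `compound_scale(1, 18, 64, 224)` grows a hypothetical 18-layer, 64-channel, 224-pixel baseline to roughly 22 layers, 70 channels, and 258-pixel inputs, at about twice the FLOPs.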


May 30 2019 Pierre Ouannes

Get way better top-1 Imagenet accuracy with this one weird trick.
0 replies, 4 likes


May 29 2019 Aran Komatsuzaki

EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks https://arxiv.org/abs/1905.11946 Behold! The most pleasing curve a man would ever witness! https://t.co/MssfL0wsUP
0 replies, 4 likes


May 30 2019 Raym Geis

Algos just keep getting trickier and better. The more I learn, the behinder I get.
0 replies, 3 likes


Oct 18 2019 Yassine Alouini

Kaggle Reading Group: EfficientNet | Kaggle https://youtu.be/4U2WO8ObGGU The paper is here: https://arxiv.org/abs/1905.11946 @kaggle
0 replies, 3 likes


May 30 2019 bbabenko

really impressive result
0 replies, 2 likes


May 29 2019 Matthew Teschke

"EfficientNets achieved state-of-the-art accuracy in 5 out of the 8 datasets, such as CIFAR-100 (91.7%) and Flowers (98.8%), with an order of magnitude fewer parameters (up to 21x parameter reduction), suggesting that our EfficientNets also transfer well."
0 replies, 2 likes


Jun 01 2019 Fabio Galasso

Another score for neural architecture search, using pieces from MobileNets. It's quite nice to see an EfficientNet performing with similar FLOPs as ResNet-50, but with +6.3% ImageNet top-1 accuracy.
0 replies, 2 likes


May 30 2019 cs.LG Papers

EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. Mingxing Tan and Quoc V. Le http://arxiv.org/abs/1905.11946
1 replies, 1 likes


May 30 2019 Artificial Now

EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks - https://arxiv.org/abs/1905.11946
0 replies, 1 likes


May 31 2019 l̴o̴o̴p̴u̴l̴e̴a̴s̴a̴

Black magic from Google, again. They made neural nets, called EfficientNets, that are state of the art and up to 10x faster. Paper: https://arxiv.org/abs/1905.11946 Blog: https://ai.googleblog.com/2019/05/efficientnet-improving-accuracy-and.html Article: https://venturebeat.com/2019/05/29/googles-efficientnets-is-faster-at-analyzing-images-than-other-ai-models/ https://t.co/OB1co4DPJT
1 replies, 1 likes


May 29 2019 Maxim Bonnaerens

EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks (https://arxiv.org/abs/1905.11946) Recent work on balancing depth, width and resolution. It uses compound scaling to uniformly scale the network. Baseline similar to MnasNet but optimizes FLOPS instead of latency. https://t.co/jQ7EQ9ULqd
0 replies, 1 likes


Jun 03 2019 Underfox

Google researchers have proposed a simple and highly effective compound scaling method, which makes it easy to scale up a baseline ConvNet to any target resource constraint in a more principled way, while maintaining model efficiency. #MachineLearning https://arxiv.org/pdf/1905.11946.pdf https://t.co/fze5emSBk5
0 replies, 1 likes

