Papers of the day

Fast Sparse ConvNets


DeepMind: “Fast Sparse ConvNets”, a collaboration w/ @GoogleAI, implements fast sparse matrix–matrix multiplication to replace dense 1x1 convolutions in MobileNet architectures. The sparse networks are 66% the size of and 1.5-2x faster than their dense equivalents.

2 replies, 485 likes
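The key observation behind the speedup above is that a 1x1 convolution is exactly a matrix multiply over the channel dimension, so a sparse weight matrix turns it into a sparse-times-dense product that skips zero weights entirely. Below is a minimal sketch of that equivalence, using a simple CSR (compressed sparse row) layout; all function and variable names are illustrative, not taken from the paper's implementation.

```python
import numpy as np

def conv1x1_as_matmul(weights, activations):
    """Dense 1x1 convolution expressed as a matrix multiply.

    weights: (C_out, C_in) dense weight matrix.
    activations: (C_in, H, W) input feature map.
    Reshaping the activations to (C_in, H*W) reduces the 1x1
    convolution to a single dense matmul.
    """
    c_in, h, w = activations.shape
    out = weights @ activations.reshape(c_in, h * w)  # (C_out, H*W)
    return out.reshape(-1, h, w)

def sparse_conv1x1(values, col_idx, row_ptr, c_out, activations):
    """Same 1x1 convolution with CSR-encoded sparse weights.

    values:  nonzero weight entries, row by row.
    col_idx: input-channel index of each nonzero entry.
    row_ptr: row_ptr[r]..row_ptr[r+1] spans row r's nonzeros.
    Each output channel only reads the input channels it actually
    uses, which is where the inference speedup comes from.
    """
    c_in, h, w = activations.shape
    acts = activations.reshape(c_in, h * w)
    out = np.zeros((c_out, h * w))
    for row in range(c_out):
        for k in range(row_ptr[row], row_ptr[row + 1]):
            out[row] += values[k] * acts[col_idx[k]]
    return out.reshape(c_out, h, w)
```

A production kernel (as in XNNPACK) would vectorize the inner loop across spatial positions rather than iterating in Python, but the dense and sparse paths compute identical results whenever the CSR arrays encode the same weight matrix.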

Marat Dukhan: Today at @CVPR we are presenting joint work with colleagues from @DeepMind and @GoogleAI that proves sparsity in neural network weights is practically useful for accelerating ConvNet inference on general-purpose processors.

1 replies, 47 likes

arxiv: Fast Sparse ConvNets.

0 replies, 37 likes

Utku: 6) A common concern is: “But sparse networks are hard to accelerate!” Check out Fast Sparse ConvNets (📃), which achieves large speedups on mobile CPUs for inference with sparse MobileNets, and stay tuned for more!

1 replies, 18 likes

Utku: Sparse networks can be practical and this is probably just a start

0 replies, 17 likes

Erich Elsen: We've been working hard to make sparse neural networks easy to take advantage of and the XNNPACK sparse delegate is finally ready! @Tgale96 @MaratDukhan

0 replies, 5 likes


Found on Nov 26 2019 at
