
Fast Sparse ConvNets

Comments

DeepMind: “Fast Sparse ConvNets”, a collaboration w/ @GoogleAI [https://arxiv.org/abs/1911.09723], implements fast Sparse Matrix-Matrix Multiplication to replace dense 1x1 convolutions in MobileNet architectures. The sparse networks are 66% the size and 1.5-2x faster than their dense equivalents. https://t.co/poDKMzfA4u

2 replies, 485 likes
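
For readers unfamiliar with the trick: a 1x1 convolution over an NHWC activation tensor is exactly a matrix multiplication between the flattened (H*W, C_in) activations and the (C_in, C_out) weights, so pruning the weights turns the layer into a sparse-dense matmul. Below is a minimal NumPy/SciPy sketch of that equivalence, with toy shapes and names of our own choosing; the paper's actual kernels are hand-tuned CPU routines, not SciPy calls.

```python
import numpy as np
from scipy import sparse

# Toy shapes: 16x16 spatial grid, 64 input channels, 128 output channels.
H, W, C_in, C_out = 16, 16, 64, 128
x = np.random.randn(H * W, C_in).astype(np.float32)  # activations, flattened over H*W

# Dense 1x1-conv weights; zero out ~80% of entries to mimic magnitude pruning.
w = np.random.randn(C_in, C_out).astype(np.float32)
w[np.random.rand(C_in, C_out) < 0.8] = 0.0

y_dense = x @ w  # the 1x1 convolution as a plain dense matmul

# Same computation with the weights stored sparsely (CSR keeps only nonzeros),
# arranged as sparse-weights-times-dense-activations, as in the paper's setting.
w_sp = sparse.csr_matrix(w.T)        # (C_out, C_in)
y_sparse = (w_sp @ x.T).T            # (H*W, C_out)

assert np.allclose(y_dense, y_sparse, atol=1e-4)
```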


Marat Dukhan: Today at @CVPR we are presenting joint work (https://arxiv.org/abs/1911.09723) with colleagues from @DeepMind and @GoogleAI showing that sparsity in neural network weights is practically useful for accelerating ConvNet inference on general-purpose processors. https://t.co/OKdzNIiV0E

1 reply, 47 likes


arxiv: Fast Sparse ConvNets. http://arxiv.org/abs/1911.09723 https://t.co/WJ7fybJZ9a

0 replies, 37 likes


Utku: 6) A common concern is: “But sparse networks are hard to accelerate!” Check out Fast Sparse ConvNets (📃 https://arxiv.org/pdf/1911.09723), which achieves large speedups for sparse-MobileNet inference on mobile CPUs, and stay tuned for more! https://t.co/OIyHhD9D7b

1 reply, 18 likes


Utku: Sparse networks can be practical, and this is probably just the start.

0 replies, 17 likes


Erich Elsen: We've been working hard to make sparse neural networks easy to take advantage of, and the XNNPACK sparse delegate is finally ready! @Tgale96 @MaratDukhan

0 replies, 5 likes
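
For the curious, a hedged sketch of how one might run such a model from Python with the TensorFlow Lite interpreter (the .tflite filename is hypothetical; XNNPACK is applied automatically only in builds where the delegate is enabled, and its sparse kernels engage only when the model's pruned weights meet the delegate's sparsity requirements):

```python
import numpy as np
import tensorflow as tf

# Hypothetical pruned MobileNet model file. In TF builds with the XNNPACK
# delegate enabled, it is applied automatically; sparse kernels are picked
# only when the pruned weights qualify.
interpreter = tf.lite.Interpreter(model_path="mobilenet_v1_pruned.tflite",
                                  num_threads=4)
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

image = np.random.rand(*inp["shape"]).astype(np.float32)  # stand-in input
interpreter.set_tensor(inp["index"], image)
interpreter.invoke()
print(interpreter.get_tensor(out["index"]).shape)
```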


Content

Found on Nov 26 2019 at https://arxiv.org/pdf/1911.09723.pdf

PDF content of a computer science paper: Fast Sparse ConvNets