
Sparse Networks from Scratch: Faster Training without Losing Performance


Jul 11 2019 Tim Dettmers

My new work with @LukeZettlemoyer on accelerated training of sparse networks from random weights to dense performance levels — no retraining required! Paper: Blog post: Code:
3 replies, 293 likes
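The method behind this announcement is sparse momentum: periodically prune the smallest-magnitude active weights and regrow the freed connections where the momentum magnitude is largest, so the sparsity pattern adapts during training. Below is a minimal, hypothetical sketch of that prune-and-regrow step for a single weight tensor, assuming PyTorch; the function and parameter names are illustrative, and the paper's full algorithm also redistributes pruned weights across layers by mean momentum.

```python
import torch

def prune_and_regrow(weight, momentum, mask, prune_rate=0.5):
    """One illustrative prune-and-regrow step for a single weight tensor."""
    w = weight.view(-1)
    m = momentum.view(-1)
    active = mask.view(-1).clone().bool()    # working copy of the sparsity pattern

    n_prune = int(prune_rate * active.sum().item())
    if n_prune == 0:
        return mask

    # Prune: among active weights, drop those with the smallest magnitude.
    scores = w.abs().clone()
    scores[~active] = float('inf')           # inactive slots cannot be pruned
    drop = torch.topk(scores, n_prune, largest=False).indices
    active[drop] = False

    # Regrow: among inactive slots, enable those with the largest momentum.
    growth = m.abs().clone()
    growth[active] = -float('inf')           # active slots cannot be regrown
    grow = torch.topk(growth, n_prune, largest=True).indices
    active[grow] = True

    w[~active] = 0.0                         # keep masked weights exactly zero
    mask.copy_(active.view_as(mask).to(mask.dtype))
    return mask
```

In a training loop this step would run at a fixed interval (e.g. once per epoch) while plain SGD with momentum updates only the unmasked weights in between.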

Sep 13 2019 Tim Dettmers

The v1.0 release of my sparse learning library includes many bug fixes and more rigorous ImageNet baselines: Our paper on accelerating training through sparsity was also updated with dense-equivalent results and new analyses:
0 replies, 78 likes

Aug 09 2019 ML Review

Sparse Networks from Scratch: Faster Training without Losing Performance By @Tim_Dettmers. Finds "winning lottery tickets": sparse configurations with 20% of the weights and similar performance. SoTA on MNIST, CIFAR-10, and ImageNet-2012 among sparse methods.
0 replies, 67 likes
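For context, the "20% of the weights" figure refers to weight density: the fraction of parameters that are nonzero. A quick generic check in PyTorch (a hypothetical helper, not part of the paper's codebase):

```python
import torch

def density(model: torch.nn.Module) -> float:
    """Fraction of nonzero parameters across the whole model."""
    nonzero = sum(int(p.count_nonzero()) for p in model.parameters())
    total = sum(p.numel() for p in model.parameters())
    return nonzero / total

# A network with 20% of its weights active would report roughly 0.2 here.
```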

Aug 08 2019 Tim Dettmers

Soon, I will also update our Sparse Networks from Scratch paper with an analysis of the momentum and pruning-rate parameters, showing that sparse momentum is highly stable and easy to use. Also included: a study on dense performance equivalents (see below).
2 replies, 56 likes

Jul 12 2019 Brian

Jeff Hawkins and his group have also been working on sparse networks, inspired by neuroscience. Have you read his Thousand Brains work?
1 replies, 8 likes

Jul 14 2019 Thaddeus Preston

While we complain about #Bond25 and a Government-created #BorderCrisis, there are others working on slowly building the future. #AI #MachineLearning
0 replies, 1 likes