Papers of the day

Pruning neural networks without any data by iteratively conserving synaptic flow

Comments

Hidenori Tanaka: Q. Can we find winning lottery tickets, or sparse trainable deep networks at initialization without ever looking at data? A. Yes, by conserving "Synaptic Flow" via our new SynFlow algorithm. co-led with Daniel Kunin & @dyamins, @SuryaGanguli paper: http://arxiv.org/abs/2006.05467 1/ https://t.co/Uvzy5jSiB7

6 replies, 586 likes
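
For readers curious how the "Synaptic Flow" score mentioned above is computed, here is a minimal PyTorch sketch of the per-parameter SynFlow score as described in the paper: linearize the network by taking the absolute value of every weight, feed an all-ones input (no data), sum the output into a scalar R, and score each parameter as |dR/dw * w|. The helper names and the toy MLP are illustrative assumptions, not the authors' reference implementation.

    import torch
    import torch.nn as nn

    @torch.no_grad()
    def linearize(model):
        # Replace every parameter by its absolute value in place, keeping the signs.
        signs = {}
        for name, param in model.state_dict().items():
            signs[name] = torch.sign(param)
            param.abs_()
        return signs

    @torch.no_grad()
    def restore(model, signs):
        # Undo linearize() by multiplying the stored signs back in.
        for name, param in model.state_dict().items():
            param.mul_(signs[name])

    def synflow_scores(model, input_shape):
        # Data-free SynFlow scores: all-ones input through the linearized network,
        # R = sum of outputs, score = |dR/dw * w| for every parameter.
        signs = linearize(model)
        x = torch.ones(1, *input_shape)      # all-ones "input", no dataset needed
        R = model(x).sum()                   # the synaptic-flow objective
        R.backward()
        scores = {name: (param.grad * param).detach().abs()
                  for name, param in model.named_parameters()
                  if param.grad is not None}
        model.zero_grad()
        restore(model, signs)
        return scores

    # Example: score a small MLP without ever touching data.
    mlp = nn.Sequential(nn.Flatten(), nn.Linear(784, 300), nn.ReLU(),
                        nn.Linear(300, 10))
    scores = synflow_scores(mlp, input_shape=(1, 28, 28))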


Surya Ganguli: A new algorithm, SynFlow, for finding winning lottery tickets in deep neural networks without even looking at the data! Yields a highly sparse trainable init. https://arxiv.org/abs/2006.05467 w/ great collaborators @Hidenori8Tanaka Daniel Kunin, @dyamins thread->

7 replies, 298 likes


Surya Ganguli: Wow! A great video of our recent work, appearing within just a few days, on a theory of network pruning and our SynFlow algorithm for finding winning lottery tickets in deep neural networks without even looking at data. https://arxiv.org/abs/2006.05467 @Hidenori8Tanaka Daniel Kunin @dyamins

2 replies, 73 likes


Thang Luong: A good sparse network is often obtained by first training a dense one & then pruning by magnitude (+reusing the original initialization: the lottery ticket). This paper: no need for dense training to find such sparse structures :) Q: can it generalize to more architectures & tasks?

0 replies, 56 likes


Stanford NLP Group: Wow! You can produce a very successful sparsified trainable network initialization by pruning WITHOUT examining the problem/data, simply by examining the synaptic flow of the network. https://arxiv.org/abs/2006.05467

0 replies, 55 likes


Hidenori Tanaka: Overall, our data-agnostic pruning algorithm challenges the existing paradigm that data must be used to quantify which synapses are important. Please check out the paper for more details https://arxiv.org/abs/2006.05467 7/

2 replies, 23 likes
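
The "iteratively" in the title is the other half of the method: the paper argues that scores should be recomputed and pruning applied gradually (with an exponential sparsity schedule over many rounds) to avoid pruning entire layers away. Below is a hedged sketch of such a loop, reusing the synflow_scores helper sketched earlier; the round count, schedule, and masking-by-zeroing are illustrative assumptions rather than the authors' exact implementation.

    import torch

    def synflow_prune(model, input_shape, target_sparsity, rounds=100):
        # Iteratively prune toward `target_sparsity` (fraction of weights removed),
        # rescoring with SynFlow every round and following an exponential schedule.
        masks = {name: torch.ones_like(p) for name, p in model.named_parameters()}
        for k in range(1, rounds + 1):
            keep = (1.0 - target_sparsity) ** (k / rounds)   # keep-ratio this round
            scores = synflow_scores(model, input_shape)      # rescore every round
            flat = torch.cat([(scores[n] * masks[n]).flatten() for n in scores])
            num_keep = max(1, int(keep * flat.numel()))
            threshold = torch.topk(flat, num_keep).values.min()
            with torch.no_grad():
                for name, p in model.named_parameters():
                    if name not in scores:
                        continue
                    masks[name] = (scores[name] * masks[name] >= threshold).float()
                    p.mul_(masks[name])                      # zero out pruned weights
        return masks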


Simone Scardapane: *Pruning neural networks without any data by iteratively conserving synaptic flow* Strong paper on network pruning by @Hidenori8Tanaka @dyamins @SuryaGanguli et al. A fact-driven paper that demonstrates a powerful pruning mechanism with no training. https://arxiv.org/abs/2006.05467 https://t.co/bI82PqYeA8

0 replies, 8 likes


HotComputerScience: Most popular computer science paper of the day: "Pruning neural networks without any data by iteratively conserving synaptic flow" https://hotcomputerscience.com/paper/pruning-neural-networks-without-any-data-by-iteratively-conserving-synaptic-flow https://twitter.com/Hidenori8Tanaka/status/1271709221206151168

0 replies, 1 likes


Unbox Research: 1/ Deep learning models are big and expensive to train and run. Recent work introduces a new approach that reduces model complexity without impacting the model's accuracy and without depending on training data, improving efficiency and reducing cost. https://arxiv.org/abs/2006.05467

1 replies, 0 likes


Content

Found on Jun 13 2020 at https://arxiv.org/pdf/2006.05467.pdf

PDF content of a computer science paper: Pruning neural networks without any data by iteratively conserving synaptic flow