What’s Hidden in a Randomly Weighted Neural Network?

Comments

Mitchell Wortsman: What's hidden in an overparameterized neural network with random weights? If the distribution is properly scaled (e.g. Kaiming Normal), then it contains a subnetwork which achieves high accuracy without ever modifying the values of the weights... https://arxiv.org/abs/1911.13299 https://t.co/RcTcgYGY9J

23 replies, 844 likes
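The "properly scaled" condition in the tweet above refers to how the fixed random weights are drawn. As a minimal sketch of that scaling (my own illustration in PyTorch, not code from the paper or its repo; the layer sizes are arbitrary placeholders), Kaiming-normal weights can be sampled once and then frozen:

```python
# Minimal sketch (illustration only, not the paper's code): draw fixed random
# weights from a Kaiming-normal distribution and never update them.
import torch
import torch.nn as nn

def frozen_kaiming_weights(out_features: int, in_features: int) -> torch.Tensor:
    w = torch.empty(out_features, in_features)
    nn.init.kaiming_normal_(w, mode="fan_in", nonlinearity="relu")  # "properly scaled"
    w.requires_grad_(False)  # the weight values themselves are never trained
    return w

w = frozen_kaiming_weights(256, 784)  # arbitrary placeholder sizes
```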


hardmaru: What's Hidden in a Randomly Weighted Neural Network? “Hidden in a randomly weighted Wide ResNet-50 we show that there is a subnetwork (with random weights) that is smaller than, but matches the performance of a ResNet-34 trained on ImageNet.” 😮 https://arxiv.org/abs/1911.13299

14 replies, 700 likes


Mitchell Wortsman: @RamanujanVivek @anikembhavi @morastegari Alternate title: Randomly weighted neural networks. What do they contain? Do they contain things? Let's find out. https://arxiv.org/abs/1911.13299

7 replies, 62 likes


Kyunghyun Cho: very interesting, but also not so interesting bc (1) isn't finding a subset of a net (almost) equiv. to training the net? (2) the more you sample, the more you increase your chance. https://arxiv.org/abs/1911.13299

3 replies, 32 likes


Taco Cohen: Learning is forgetting

3 replies, 32 likes


Frank Dellaert: Woah...

1 reply, 21 likes


Namhoon Lee: (1/2) Interesting! I hope the authors also check our work [https://arxiv.org/abs/1906.06307] (and others), where we found something similar ("neural architecture sculpting"): compressing a larger network (to match the same # of params) can discover a subnetwork that is comparable or even better.

3 replies, 21 likes


Vivek Ramanujan: Code release for "What's hidden in a randomly weighted neural network?" Code: https://github.com/allenai/hidden-networks Arxiv: https://arxiv.org/abs/1911.13299 Discussion thread below

0 replies, 17 likes


Ankur Handa: This is quite an interesting and surprising find to me. "In the Lottery Ticket Hypothesis: NNs contain sparse subnetworks that can be effectively trained from scratch when reset to their initialization." while... 1/2

1 reply, 7 likes


Roozbeh Mottaghi: Recent work by PRIOR at @allen_ai and UW. It shows a subnetwork of a random network can achieve high performance.

0 replies, 7 likes


Mohammad Rastegari: Thanks to @labs_henry for making a video describing our new paper on What's hidden in a randomly weighted neural network? https://youtu.be/C6Tj8anJO-Q https://arxiv.org/abs/1911.13299 @RamanujanVivek @Mitchnw

0 replies, 6 likes


Sayak Paul: Things that are amazing me recently.

1 reply, 6 likes


Mert R. Sabuncu: And the plot thickens 😮

0 replies, 4 likes


Florian Aspart: Are you still optimizing the weights of your deep networks with back prop? Why not use networks with random weights instead? Interesting article from @RamanujanVivek, @Mitchnw @morastegari : https://arxiv.org/abs/1911.13299

1 reply, 4 likes


Loïc A. Royer 💻🔬⚗️: 🤯

0 replies, 3 likes


Nasim: One intuition (that emerged in a conversation with @anirudhg9119) is that if your model is *that* over-parameterised, it doesn't matter if your weights can move freely or in discrete steps. Also quite reminiscent of Binary Networks of Courbariaux et al. (https://arxiv.org/abs/1602.02830)

0 replies, 3 likes


akira: https://arxiv.org/abs/1911.13299 A study that achieves accuracy comparable to a normally trained model by learning which connections to use rather than the weights themselves. The connection scores are learned by backprop, and inference is performed using only the top few percent of connections. https://t.co/JMcnttLSkf

0 replies, 3 likes
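akira's summary above sketches the training procedure: per-connection scores are learned by backprop while the randomly drawn weights stay fixed, and only the top-scoring few percent of connections are used at inference. A rough, hypothetical re-implementation of that idea in PyTorch (my own sketch of the described mechanism, not the authors' released code at https://github.com/allenai/hidden-networks; names such as TopKMask and SupermaskLinear are made up) could look like:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMask(torch.autograd.Function):
    """Keep the top-k fraction of connections by score; straight-through backward."""

    @staticmethod
    def forward(ctx, scores, k):
        num_keep = max(1, int(k * scores.numel()))
        threshold = torch.topk(scores.flatten(), num_keep).values.min()
        return (scores >= threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through estimator: the gradient flows to the scores unchanged.
        return grad_output, None

class SupermaskLinear(nn.Module):
    """Linear layer with frozen random weights; only per-connection scores are trained."""

    def __init__(self, in_features, out_features, k=0.5):
        super().__init__()
        self.k = k
        # Random weights are drawn once (Kaiming-normal here) and never updated.
        self.weight = nn.Parameter(torch.empty(out_features, in_features),
                                   requires_grad=False)
        nn.init.kaiming_normal_(self.weight, mode="fan_in", nonlinearity="relu")
        # One learnable score per connection, updated by backprop.
        self.scores = nn.Parameter(torch.randn(out_features, in_features) * 0.01)

    def forward(self, x):
        mask = TopKMask.apply(self.scores, self.k)
        return F.linear(x, self.weight * mask)
```

Training would then optimize only the score parameters (e.g. torch.optim.SGD([layer.scores], lr=0.1)); the weights stay at their random draw, and the learned top-k mask picks out the subnetwork.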


J. Miguel Valverde 🔻: "What's Hidden in a Randomly Weighted Neural Network?" Good read, especially after reading about WANNs (Weight Agnostic NNs) https://arxiv.org/pdf/1911.13299.pdf https://t.co/PmUm5Eee6a

0 replies, 2 likes


Arseny Khakhalin: @colejhudson @bayesianbrain Here's another paper from the Allen Institute that I found (haven't read it, but put it on the list) that is in between lottery-ticket and weight-agnostic networks. They are looking for good subnetworks (as in the lottery ticket work) but also want them to work without training (I think?) https://arxiv.org/pdf/1911.13299.pdf

1 reply, 2 likes


Jason Taylor: Really interesting work. Hopefully we'll soon see breakthroughs in network initialization that leverage findings like these to allow us to use smaller networks and/or train them significantly faster.

1 reply, 1 like


William Chamberlain: Can you efficiently find a compass needle in a haystack?

0 replies, 1 like


arXiv CS-CV: What's Hidden in a Randomly Weighted Neural Network? http://arxiv.org/abs/1911.13299

0 replies, 1 like


Content

Found on Dec 02 2019 at https://arxiv.org/pdf/1911.13299.pdf

PDF content of a computer science paper: What’s Hidden in a Randomly Weighted Neural Network?