
What’s Hidden in a Randomly Weighted Neural Network?


Mitchell Wortsman: What's hidden in an overparameterized neural network with random weights? If the distribution is properly scaled (e.g. Kaiming Normal), then it contains a subnetwork which achieves high accuracy without ever modifying the values of the weights... (1/n)

23 replies, 827 likes
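The "properly scaled" caveat in Wortsman's tweet is doing real work. A quick way to see why: Kaiming-Normal weights (std = sqrt(2 / fan_in)) keep activation variance roughly constant through a stack of random, untrained ReLU layers, so the outputs of a random network stay informative enough for a good subnetwork to be worth finding. A minimal sketch in PyTorch (shapes and depth are arbitrary choices, not from the paper):

```python
# Toy check: Kaiming-Normal scaling keeps activations stable through
# random, untrained ReLU layers.
import torch

torch.manual_seed(0)
x = torch.randn(1024, 512)
for _ in range(20):
    W = torch.randn(512, 512) * (2.0 / 512) ** 0.5  # std = sqrt(2 / fan_in)
    x = torch.relu(x @ W.t())

# std stays near 1; with unscaled torch.randn weights it would blow up
# within a few layers.
print(x.std())
```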

hardmaru: What's Hidden in a Randomly Weighted Neural Network? “Hidden in a randomly weighted Wide ResNet-50 we show that there is a subnetwork (with random weights) that is smaller than, but matches the performance of a ResNet-34 trained on ImageNet.” 😮

13 replies, 684 likes

Mitchell Wortsman: @RamanujanVivek @anikembhavi @morastegari Alternate title: Randomly weighted neural networks. What do they contain? Do they contain things? Let's find out.

7 replies, 62 likes

Kyunghyun Cho: very interesting, but also not so interesting bc (1) isn't finding a subset of a net equiv. (almost) to training the net? (2) the more you sample, the more you increase your chance.

3 replies, 32 likes

Taco Cohen: Learning is forgetting

3 replies, 32 likes

Namhoon Lee: (1/2) Interesting! I hope the authors also check our work [] (and others), where we found something similar ("neural architecture sculpting"): compressing a larger network (to match the same # of params) can discover a subnetwork that is comparable or even better.

3 replies, 21 likes

Frank Dellaert: Woah...

1 replies, 21 likes

Vivek Ramanujan: Code release for "What's hidden in a randomly weighted neural network?" Code: arXiv: Discussion thread below

0 replies, 17 likes

Dimitris Papailiopoulos: We study a "pruning is all you need" variant of the lottery ticket hypothesis by Frankle and @mcarbin. In particular, we try to understand the (what I think are) super cool findings of Ramanujan et al. []

2 replies, 16 likes

Roozbeh Mottaghi: Recent work by PRIOR at @allen_ai and UW. It shows a subnetwork of a random network can achieve high performance.

0 replies, 7 likes

Ankur Handa: This is quite an interesting and surprising finding to me. "In the Lottery Ticket Hypothesis: NNs contain sparse subnetworks that can be effectively trained from scratch when reset to their initialization." while... 1/2

1 replies, 7 likes
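For contrast, the Lottery Ticket procedure Handa quotes trains the full network, prunes small-magnitude weights, rewinds the survivors to their initial values, and repeats; the paper discussed here skips the weight training entirely. A hedged sketch of that iterative magnitude pruning loop (`train_fn` is a hypothetical stand-in for an ordinary masked training loop):

```python
# Sketch of iterative magnitude pruning with weight rewinding, the
# procedure behind the Lottery Ticket Hypothesis.
import copy
import torch

def find_ticket(model, train_fn, prune_frac=0.2, rounds=5):
    init_state = copy.deepcopy(model.state_dict())       # remember init
    masks = {n: torch.ones_like(p) for n, p in model.named_parameters()}
    for _ in range(rounds):
        train_fn(model, masks)                           # train the masked net
        for name, p in model.named_parameters():
            survivors = p[masks[name].bool()].abs()
            cutoff = survivors.quantile(prune_frac)      # drop smallest 20%
            masks[name] *= (p.abs() >= cutoff).float()
        model.load_state_dict(init_state)                # rewind to init
    return masks                                         # the "winning ticket"
```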

Sayak Paul: Things that are amazing me recently.

1 replies, 6 likes

Mohammad Rastegari: Thanks to @labs_henry for making a video describing our new paper on What's hidden in a randomly weighted neural network? @RamanujanVivek @Mitchnw

0 replies, 6 likes

Sebastian: Turns out that sufficiently big, randomly initialized(!) neural networks contain a subnetwork that achieves competitive accuracy --> "unreasonable effectiveness of randomly weighted neural networks for image recognition". #deeplearning #ai Read more

0 replies, 4 likes

Florian Aspart: Are you still optimizing the weights of your deep networks with backprop? Why not use networks with random weights instead? Interesting article from @RamanujanVivek, @Mitchnw, @morastegari:

1 replies, 4 likes

Mert R. Sabuncu: And the plot thickens 😮

0 replies, 4 likes

Loïc A. Royer 💻🔬⚗️: 🤯

0 replies, 3 likes

Nasim: One intuition (that emerged in a conversation with @anirudhg9119) is that if your model is *that* over-parameterised, it doesn't matter if your weights can move freely or in discrete steps. Also quite reminiscent of the Binary Networks of Courbariaux et al.

0 replies, 3 likes

akira: A study that reaches accuracy comparable to a normally trained model by learning which connections to use, not the weights. Connection scores are learned by backprop, and inference uses only the top few percent of connections.

0 replies, 3 likes
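Concretely, the procedure akira describes is the paper's "edge-popup" algorithm: every weight gets a learnable score, the forward pass uses only the top-k% of weights by score, and a straight-through estimator lets gradients reach the scores even though the hard top-k selection itself has no gradient. A minimal PyTorch sketch (class and variable names are illustrative, not the authors' code):

```python
# Edge-popup sketch: frozen Kaiming-Normal weights, trainable per-weight
# scores, forward pass through the top-k subnetwork only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMask(torch.autograd.Function):
    @staticmethod
    def forward(ctx, scores, k):
        n_keep = max(1, int(k * scores.numel()))
        # Threshold at the n_keep-th largest |score|.
        thresh = scores.abs().flatten().kthvalue(scores.numel() - n_keep + 1).values
        return (scores.abs() >= thresh).float()

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out, None  # straight-through: pass gradient to the scores

class SubnetLinear(nn.Module):
    def __init__(self, in_f, out_f, k=0.3):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_f, in_f), requires_grad=False)
        nn.init.kaiming_normal_(self.weight)  # random weights, never updated
        self.scores = nn.Parameter(torch.randn(out_f, in_f) * 0.01)
        self.k = k                            # fraction of weights to keep

    def forward(self, x):
        mask = TopKMask.apply(self.scores, self.k)
        return F.linear(x, self.weight * mask)
```

Training then optimizes only the scores (e.g. `torch.optim.SGD([layer.scores], lr=0.1)`); the weights themselves are never touched, which is exactly the claim in the thread.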

J. Miguel Valverde 🔻: "What's Hidden in a Randomly Weighted Neural Network?" Good read, especially after reading about WANNs (Weight Agnostic NNs)

0 replies, 2 likes

Arseny Khakhalin: @colejhudson @bayesianbrain Here's another paper from the Allen Institute I found (haven't read it, but put it on the list) that sits in between lottery-ticket and weight-agnostic networks. They are looking for good subnetworks (as in the lottery ticket work) but also want them to work without training (I think?)

1 replies, 2 likes

Jason Taylor: Really interesting work. Hopefully we'll soon see breakthroughs in network initialization that leverage findings like these to allow us to use smaller networks and/or train them significantly faster.

1 replies, 1 likes

William Chamberlain: Can you efficiently find a compass needle in a haystack?

0 replies, 1 likes

arXiv CS-CV: What's Hidden in a Randomly Weighted Neural Network?

0 replies, 1 likes


Found on Dec 02 2019
