
Efficient and Scalable Bayesian Neural Nets with Rank-1 Factors

Comments

Dustin Tran: Excited to release rank-1 Bayesian neural nets, achieving new SOTA on uncertainty & robustness across ImageNet, CIFAR-10/100, and MIMIC. We do extensive ablations to disentangle BNN choices.@dusenberrymw @Ghassen_ML @JasperSnoek @kat_heller @balajiln et al https://arxiv.org/abs/2005.07186 https://t.co/T5sQDkO0xR

3 replies, 436 likes


Jasper: Excited about this new BNN work from Mike Dusenberry, @Ghassen_ML, @dustinvtran, et al. Each layer's weights are modulated by rank-1 factors, learned through variational inference. A mixture of these gets you a very effective and cheap Bayesian ensemble. https://arxiv.org/abs/2005.07186

3 replies, 137 likes


Mike Dusenberry: Excited to present Rank-1 Bayesian Neural Nets, an efficient and scalable approach to variational BNNs via distributions on a rank-1 subspace of the weights. http://arxiv.org/abs/2005.07186 w/ @Ghassen_ML, Yeming Wen, Yi-an Ma, @latentjasper, @kat_heller, @balajiln, @dustinvtran. (1/15) https://t.co/npjJbZyqF7

1 reply, 79 likes


Daisuke Okanohara: A Rank-1 Bayesian NN multiplies the input and pre-activation neurons by vectors sampled from learned distributions, which corresponds to multiplying the weight matrix elementwise by a stochastic rank-1 matrix. Very efficient and effective as an ensemble of NNs. https://arxiv.org/abs/2005.07186
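The mechanism described in the tweet above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' code: the shapes are arbitrary, and the rank-1 factors `r` and `s` are drawn from a standard normal here, whereas in the paper they are sampled from learned variational distributions. The point is the equivalence between scaling the inputs and pre-activations by vectors and elementwise-multiplying the weight matrix by the rank-1 matrix `r sᵀ`:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer shapes for illustration.
d_in, d_out = 4, 3
W = rng.normal(size=(d_out, d_in))   # shared (deterministic) weight matrix
x = rng.normal(size=d_in)            # layer input

# Rank-1 factors; the paper samples these from learned variational
# posteriors, here we just draw from a standard normal.
s = rng.normal(size=d_in)            # scales the inputs
r = rng.normal(size=d_out)           # scales the pre-activations

# Efficient form: scale inputs, apply shared W, scale pre-activations.
pre_act_fast = r * (W @ (s * x))

# Equivalent form: elementwise-multiply W by the rank-1 matrix r s^T.
pre_act_slow = (W * np.outer(r, s)) @ x

assert np.allclose(pre_act_fast, pre_act_slow)
```

Because only the vectors `r` and `s` are stochastic, sampling an ensemble member costs `d_in + d_out` extra parameters per layer instead of `d_in × d_out`, which is what makes the Bayesian ensemble cheap.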

0 replies, 12 likes


Mike Dusenberry: Interested in learning more about Rank-1 BNNs? Check out our virtual poster at ICML! https://icml.cc/virtual/2020/poster/6680

0 replies, 4 likes


Mike Dusenberry: To find out more about Rank-1 BNNs, check out our paper http://arxiv.org/abs/2005.07186 and our code http://github.com/google/edward2/tree/master/experimental/rank1_bnns! (15/15)

0 replies, 2 likes


Content

Found on May 18 2020 at https://arxiv.org/pdf/2005.07186.pdf

PDF content of a computer science paper: Efficient and Scalable Bayesian Neural Nets with Rank-1 Factors