
Efficient and Scalable Bayesian Neural Nets with Rank-1 Factors


Dustin Tran: Excited to release rank-1 Bayesian neural nets, achieving new SOTA on uncertainty & robustness across ImageNet, CIFAR-10/100, and MIMIC. We do extensive ablations to disentangle BNN choices. @dusenberrymw @Ghassen_ML @JasperSnoek @kat_heller @balajiln et al

3 replies, 436 likes

Jasper: Excited about this new BNN work from Mike Dusenberry, @Ghassen_ML, @dustinvtran, et al. Each layer's weights are modulated by rank-1 factors, learned through variational inference. A mixture of these gets you a very effective and cheap Bayesian ensemble.
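The "cheap Bayesian ensemble" idea in the tweet above can be sketched in a few lines: each ensemble member is only a pair of rank-1 vectors while the weight matrix is shared, so K members cost K extra vectors rather than K weight copies. This is a minimal numpy illustration, not the authors' code; the dimensions and the Gaussian parameters of the factors are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, K = 4, 3, 5  # K mixture components (illustrative sizes)

W = rng.normal(size=(d_in, d_out))  # shared deterministic weights
x = rng.normal(size=(1, d_in))      # one input example

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Each ensemble member is just a sampled pair of rank-1 vectors (s, r);
# W is shared, so the ensemble adds only K * (d_in + d_out) parameters.
member_probs = []
for _ in range(K):
    s = rng.normal(loc=1.0, scale=0.1, size=d_in)   # input-side factor
    r = rng.normal(loc=1.0, scale=0.1, size=d_out)  # output-side factor
    member_probs.append(softmax((x * s) @ W * r))

# The Bayesian ensemble prediction is the mixture (average) over members.
ensemble_pred = np.mean(member_probs, axis=0)
```

The averaging over sampled rank-1 factors is what makes the mixture behave like an ensemble at a fraction of the memory cost of independent networks.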

3 replies, 137 likes

Mike Dusenberry: Excited to present Rank-1 Bayesian Neural Nets, an efficient and scalable approach to variational BNNs via distributions on a rank-1 subspace of the weights. w/ @Ghassen_ML, Yeming Wen, Yi-an Ma, @latentjasper, @kat_heller, @balajiln, @dustinvtran. (1/15)

1 reply, 79 likes

Daisuke Okanohara: Rank-1 Bayesian NN multiplies the input and pre-activation neurons by vectors sampled from the prior, which corresponds to multiplying the weight matrix elementwise by a stochastic rank-1 matrix. Very efficient and effective as an ensemble of NNs.
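The equivalence described in this tweet (scaling inputs and pre-activations vs. an elementwise rank-1 perturbation of the weights) can be checked numerically. A minimal numpy sketch, with illustrative sizes and factor distributions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out = 4, 3
W = rng.normal(size=(d_in, d_out))  # shared weight matrix

# Rank-1 factors: s scales the inputs, r scales the pre-activations.
# Their distributions here (Gaussians around 1) are purely illustrative.
s = rng.normal(loc=1.0, scale=0.1, size=d_in)
r = rng.normal(loc=1.0, scale=0.1, size=d_out)

x = rng.normal(size=(2, d_in))  # a batch of 2 inputs

# Explicit view: perturb W elementwise by the rank-1 matrix s r^T.
out_explicit = x @ (W * np.outer(s, r))

# Efficient view: scale inputs by s and pre-activations by r,
# never materializing a perturbed copy of W.
out_cheap = ((x * s) @ W) * r

assert np.allclose(out_explicit, out_cheap)
```

Because only the vectors s and r are stochastic, sampling a new weight perturbation costs two vector multiplies instead of a full weight-matrix sample.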

0 replies, 12 likes

Mike Dusenberry: Interested in learning more about Rank-1 BNNs? Check out our virtual poster at ICML!

0 replies, 4 likes

Mike Dusenberry: To find out more about Rank-1 BNNs, check out our paper and our code! (15/15)

0 replies, 2 likes


Found on May 18 2020 at
