Papers of the day

How Good is the Bayes Posterior in Deep Neural Networks Really?

Comments

Guodong Zhang: A comprehensive study on Bayesian inference in DNNs. I guess only within Google can you conduct such careful experiments; interesting read! Take-away: the Bayes posterior is rather poor, and the prior seems to be a big problem (it doesn't scale to large nets). https://arxiv.org/abs/2002.02405

6 replies, 490 likes
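
One common reading of the "prior doesn't scale" complaint above (my illustration, not something stated in the thread): an unscaled N(0, 1) prior over weights makes a unit's pre-activation scale grow like the square root of layer width, so the same prior induces wildly different function-space behavior in small versus large nets. A minimal NumPy sketch:

```python
import numpy as np

# Assumption for illustration: weights drawn from an unscaled N(0, 1) prior.
# The pre-activation std grows like sqrt(width), so the induced prior over
# functions degenerates as the network gets wider.
rng = np.random.default_rng(0)
for width in (10, 100, 10_000):
    x = np.ones(width)                             # fixed unit input
    pre_acts = rng.normal(size=(1000, width)) @ x  # 1000 prior draws of w.x
    print(f"width={width:>6}  pre-activation std = {pre_acts.std():.1f}")
# Prints roughly sqrt(10), sqrt(100), sqrt(10000): about 3.2, 10, 100.
```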


Ilya Sutskever: https://arxiv.org/abs/2002.02405: careful and expensive MCMC Bayesian inference over NN parameters is *worse* than point estimates or low-temperature posteriors. Supports @carlesgelada and @jacobmbuckman's view that Bayesian NNs are not meaningful, probably because the prior is wrong.

11 replies, 417 likes
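
For context, a sketch of the "temperature" these comments keep referring to (notation follows the paper's tempered-posterior setup as I understand it): T = 1 is the exact Bayes posterior, while T < 1, a "cold" posterior, sharpens it around its modes, and it is the cold setting that the paper finds predicts better.

```latex
% Tempered posterior over network weights \theta, data D = {(x_i, y_i)}:
%   T = 1 recovers the Bayes posterior; T < 1 is a "cold" posterior.
\[
  p_T(\theta \mid \mathcal{D}) \;\propto\; \exp\!\big(-U(\theta)/T\big),
  \qquad
  U(\theta) \;=\; -\sum_{i=1}^{n} \log p(y_i \mid x_i, \theta) \;-\; \log p(\theta).
\]
```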


hardmaru: How Good is the Bayes Posterior in Deep Neural Networks Really? “Despite its promise of improved uncertainty quantification and sample efficiency there are—as of early 2020—no publicized deployments of Bayesian neural networks in industrial practice.” https://arxiv.org/abs/2002.02405

9 replies, 320 likes


Sebastian Raschka: "In this work we cast doubt on the current understanding of Bayes posteriors in popular deep neural networks: [...] the Bayes posterior yields systematically worse predictions compared to simpler methods including point estimates obtained from SGD." https://arxiv.org/abs/2002.02405

2 replies, 93 likes


Statistics Papers: How Good is the Bayes Posterior in Deep Neural Networks Really? http://arxiv.org/abs/2002.02405

0 replies, 23 likes


Rémi Louf 👾🛸✨: This is how debates around BNNs should happen: not by hand-waving but by presenting hard facts. Papers like this open a discussion instead of fueling a war. Kudos to the authors! https://arxiv.org/abs/2002.02405

0 replies, 10 likes


Zhaoran Wang: No wonder SGLD hardly works for exploration in RL.

0 replies, 8 likes
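
For readers who haven't met it: SGLD (stochastic gradient Langevin dynamics) is the MCMC sampler behind much of this discussion. Below is a minimal, self-contained sketch of the update on a toy target, not the paper's minibatch neural-network setup: a gradient step on the posterior energy U(θ) plus Gaussian noise scaled so the chain targets exp(-U(θ)).

```python
import numpy as np

def sgld_step(theta, grad_energy, step_size, rng):
    """One SGLD update: theta <- theta - (eps/2) * grad U(theta) + N(0, eps*I)."""
    noise = np.sqrt(step_size) * rng.normal(size=theta.shape)
    return theta - 0.5 * step_size * grad_energy(theta) + noise

# Toy usage: sample a 1-D standard normal, where U(theta) = theta^2 / 2
# and hence grad U(theta) = theta.
rng = np.random.default_rng(0)
theta = np.zeros(1)
samples = []
for _ in range(10_000):
    theta = sgld_step(theta, lambda t: t, step_size=0.01, rng=rng)
    samples.append(theta.copy())
print(np.mean(samples), np.std(samples))  # approximately 0 and 1
```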


reza mahmoudi: Another fantastic paper: How Good is the Bayes Posterior in Deep Neural Networks Really? http://arxiv.org/abs/2002.02405 #MachineLearning #artificalintelligence #DeepLearning

0 replies, 7 likes


Pranav Shyam: "Our work questions the goal of accurate posterior approximations in Bayesian deep learning: If the true Bayes posterior is poor, what is the use of more accurate approximations?" 🔥🔥🔥 https://arxiv.org/abs/2002.02405

0 replies, 6 likes


Hugh Harvey: I have tried to read this twice, but my mathematical understanding is not good enough. That said, I think you can just sum this paper up as "no one really understands Bayes".

1 reply, 4 likes


Machine Learning Tweet Feed: How Good is the Bayes Posterior in Deep Neural Networks Really? https://arxiv.org/abs/2002.02405 An excellent, in-depth empirical paper on why "cold posteriors" perform better than the Bayes posterior.

0 replies, 2 likes


Sebastian Bodenstein: This is an important, if disappointing, conclusion.

0 replies, 2 likes


Piotr Sokol: How Good is the Bayes Posterior in Deep Neural Networks Really? (arXiv:2002.02405v1 [stat.ML]) http://arxiv.org/abs/2002.02405

0 replies, 2 likes


Raphael cohen: "Posteriors are best served cold"

0 replies, 1 like


Jason M Pittman: [R] How Good is the Bayes Posterior in Deep Neural Networks Really? https://arxiv.org/abs/2002.02405 #MachineLearning

0 replies, 1 like


Content

Found on Feb 07 2020 at https://arxiv.org/pdf/2002.02405.pdf

PDF content of a computer science paper: How Good is the Bayes Posterior in Deep Neural Networks Really?