
NVAE: A Deep Hierarchical Variational Autoencoder


Arash Vahdat: 📢📢📢 Introducing NVAE 📢📢📢 We show that deep hierarchical VAEs w/ carefully designed network architectures generate high-quality images & achieve SOTA likelihood, even when trained w/ the original VAE loss. paper: with @jankautz at @NVIDIAAI (1/n)

17 replies, 1286 likes

Arash Vahdat: 📢 NVAE source code is released! NVAE is a deep hierarchical VAE w/ a specially designed network architecture that can generate high-quality images & achieve SOTA log-likelihood. Happy coding! paper: code: w/ @jankautz at @NVIDIAAI

4 replies, 442 likes

Arash Vahdat: NVAE's accepted to #NeurIPS2020 as a #spotlight paper! Many thanks to the anonymous reviewers & AC who recognized the potential impact of NVAE. The code is already released here: You'll soon hear about exciting stuff we've been doing. w/@jankautz @NVIDIAAI

5 replies, 247 likes

David Pfau: Another nail in the coffin for GANs? I'm looking forward to seeing this on ImageNet!

5 replies, 113 likes

Ben Poole: awesome work! been wanting to see bigvae ever since biggan :)

1 reply, 49 likes

Danilo J. Rezende: Nice work!

0 replies, 37 likes

hardmaru: h/t @poolio and @AravSrinivas Scaling up VAEs works surprisingly well for image modelling, compared to more involved generative models that require autoregressive sampling. The recent NVAE by @ArashVahdat et al. also has findings that resonate with this:

0 replies, 36 likes

Arash Vahdat: @tdietterich For the variational version maybe NVAE: paper: code:

0 replies, 27 likes

no love deep learning: it seems the deterministic AE, also known as VQ-VAE2, is no longer the best generative model whose acronym contains 'VAE'. And this time, it seems the new winner is an actual VAE!

1 reply, 21 likes

Gal Chechik: Unlike GANs, VAEs are trained to maximize the likelihood of the training data. This avoids mode collapse but tends to produce more out-of-distribution samples, which is why VAEs often generate blurry images. Not anymore! This NVAE paper describes a new architecture and how to stabilize training of very deep VAEs

0 replies, 15 likes
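For context on the "original VAE loss" the thread keeps referring to: a VAE is trained by maximizing the ELBO, i.e. minimizing a reconstruction term plus a KL term against the prior. Below is a minimal NumPy sketch of those two terms, assuming a diagonal-Gaussian posterior, a standard-normal prior, and a Bernoulli likelihood for binary data — the function name and shapes are illustrative, not from the paper's code.

```python
import numpy as np

def negative_elbo_terms(x, x_recon, mu, logvar):
    """Return (reconstruction NLL, KL) for one example of a vanilla VAE.

    Reconstruction: Bernoulli negative log-likelihood of x under the
    decoder probabilities x_recon (binary data assumed).
    KL: closed form for KL(N(mu, sigma^2) || N(0, I)) =
        0.5 * sum(mu^2 + sigma^2 - logvar - 1).
    """
    eps = 1e-7  # guard against log(0)
    recon_nll = -np.sum(
        x * np.log(x_recon + eps) + (1.0 - x) * np.log(1.0 - x_recon + eps)
    )
    kl = 0.5 * np.sum(mu**2 + np.exp(logvar) - logvar - 1.0)
    return recon_nll, kl
```

When the posterior exactly matches the prior (mu = 0, logvar = 0), the KL term is zero; training minimizes the sum of both terms. NVAE keeps this objective and instead redesigns the architecture and stabilizes optimization.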

Michael Albergo (defund/disarm NYPD): It’s cool and all but can we...stop being obsessed with generating faces??

2 replies, 14 likes

Prashnna K Gyawali: Glad to see VAE taking stride in generating high-quality images. Good work 👏

0 replies, 12 likes

Orazio Gallo: So VAEs *can* produce high-quality, very realistic images! Great work @ArashVahdat and @jankautz!!

0 replies, 4 likes

Kevin Yang 楊凱筌: Cool hierarchical VAE architecture (and some training tricks too!) to generate perceptually realistic images. @ArashVahdat @jankautz

0 replies, 3 likes

Alison B. Lowndes ✿: Listening to Prof @fdellaert @GaTech talk about blending factor graphs with VAEs on #drones @rssconf @RoboticsSciSys - try it with @NVIDIAAI's too. I'm working on putting this on Mars too 🚀#robotics

0 replies, 2 likes

Raghav: Seems like this will be the week VAEs become great all over again (not that they ever stopped being awesome)! SurVAE [1] bridged VAEs and flow-based models. NVAE [2] comes with impressive results and a couple of neat tricks. [1] [2]

0 replies, 2 likes

Mírian Silva: This is really nice!

0 replies, 2 likes

Kamil Sindi: Wow VAEs are making a comeback. I thought GANs won the debate a while ago.

0 replies, 1 like

🔥囧Robert Osazuwa Ness囧🔥: Really nice to see this level of quality with a log-likelihood.

0 replies, 1 like

Arash Vahdat: @zacharylipton I was writing the first paragraphs of this paper when you asked the question:

0 replies, 1 like


Found on Jul 09 2020 at
