Papers of the day

Classification Accuracy Score for Conditional Generative Models


May 29 2019 Jeff Dean

A nice, simple way to evaluate generative model effectiveness: does a classifier trained on just synthetic training data produced by the generative model give good performance on real test data?
6 replies, 279 likes
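The train-on-synthetic, test-on-real recipe described above can be sketched in a few lines. This is an illustrative toy, not the paper's setup (which trains deep ImageNet classifiers on BigGAN samples): the `sample_fn(label, n)` interface and the nearest-centroid stand-in classifier are assumptions made here for brevity.

```python
import numpy as np

def classification_accuracy_score(sample_fn, X_real_test, y_real_test, n_per_class=500):
    """Classification Accuracy Score (CAS), sketched.

    Train a classifier only on data drawn from a conditional generative
    model (sample_fn(label, n) -> (n, d) array of samples), then report
    its accuracy on real held-out test data.
    """
    classes = np.unique(y_real_test)
    # Build a purely synthetic training set, n_per_class samples per class.
    X_syn = np.vstack([sample_fn(c, n_per_class) for c in classes])
    y_syn = np.repeat(classes, n_per_class)
    # Stand-in classifier: nearest class centroid in feature space.
    centroids = np.stack([X_syn[y_syn == c].mean(axis=0) for c in classes])
    dists = ((X_real_test[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    preds = classes[dists.argmin(axis=1)]
    return float((preds == y_real_test).mean())
```

A faithful generator (one whose conditional samples match the real class distributions) should yield a CAS close to the accuracy of training on real data; any gap reflects what the generative model failed to capture.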

May 28 2019 Oriol Vinyals

Evaluating generative models is hard! We propose the Classification Accuracy Score (CAS): the accuracy of classifiers trained on generated data.
- Accuracy of 43% when trained purely on BigGAN samples (vs. 73%)
- Naive data augmentation doesn't work (yet!)
Paper: cc @SumanRavuri
4 replies, 257 likes

May 28 2019 Suman Ravuri

.@OriolVinyalsML + I have been thinking about evaluation metrics for generative models. We propose something simple: classification accuracy from classifiers trained on generated data (CAS). CAS can identify which modes have dropped. Other insights here:
1 replies, 176 likes
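The claim above that CAS can identify dropped modes can be illustrated with a toy comparison: a generator that collapses every class onto a single mode scores markedly lower than a faithful one. Everything here (the Gaussian "classes", the nearest-centroid stand-in classifier, the `collapsed` sampler) is an illustrative assumption, not the paper's experiment:

```python
import numpy as np

rng = np.random.default_rng(1)
means = {0: np.array([0.0, 0.0]), 1: np.array([6.0, 6.0])}

def cas(sample_fn, X_test, y_test, n_per_class=300):
    # Train on synthetic data only, test on real data,
    # with a nearest-centroid stand-in for the downstream classifier.
    X_syn = np.vstack([sample_fn(c, n_per_class) for c in (0, 1)])
    y_syn = np.repeat(np.array([0, 1]), n_per_class)
    cents = np.stack([X_syn[y_syn == c].mean(axis=0) for c in (0, 1)])
    preds = ((X_test[:, None, :] - cents[None, :, :]) ** 2).sum(axis=-1).argmin(axis=1)
    return float((preds == y_test).mean())

# Real held-out data: two well-separated Gaussian classes.
X_test = np.vstack([rng.normal(means[c], 1.0, size=(200, 2)) for c in (0, 1)])
y_test = np.repeat(np.array([0, 1]), 200)

# Faithful generator: samples match the real class-conditional distributions.
faithful = lambda c, n: rng.normal(means[c], 1.0, size=(n, 2))
# Hypothetical mode-dropping generator: ignores the requested label and
# only ever produces class-0-like samples.
collapsed = lambda c, n: rng.normal(means[0], 1.0, size=(n, 2))

cas_faithful = cas(faithful, X_test, y_test)
cas_collapsed = cas(collapsed, X_test, y_test)
```

Breaking the score down per class on the real test set would further localize *which* modes were dropped, which is the diagnostic use of CAS mentioned above.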

May 28 2019 Sander Dieleman

Nice results on evaluating generative models, and simple enough for wide adoption! Existing popular metrics (IS, FID) may not tell the whole story, especially when used with non-adversarial models: likelihood-based models get better classification accuracy scores (CAS).
0 replies, 42 likes

May 29 2019 Daniel Roy

Poking around, discovered related work that I don't see cited, unless I missed it. "A Classification-Based Study of Covariate Shift in GAN Distributions" by Santurkar, Schmidt, Madry. In fact, it's hard at first to see differences. @OriolVinyalsML ?
2 replies, 25 likes

May 29 2019 Zhiting Hu

Awesome! We once evaluated text generative models by training classifiers on real & model-synthetic data, and observed some improvement (vs. classifiers trained on real data only). The real datasets were small, though. Great to see systematic studies here on images
0 replies, 13 likes

May 29 2019 Animesh Garg

A pretty interesting result on the evaluation of generative models. As suspected, GANs have rather low intra-class diversity
0 replies, 6 likes