Papers of the day

Achieving Robustness in the Wild via Adversarial Mixing with Disentangled Representations

Comments

Dec 12 2019 DeepMind

Training data is often collected through a biased process. Models trained on such data are inherently biased. We demonstrate how adversarial training through disentangled representations can reduce the effect of spurious correlations present in datasets: http://arxiv.org/abs/1912.03192 https://t.co/YjJpnY1E8k
2 replies, 427 likes


Dec 09 2019 roadrunner01

Achieving Robustness in the Wild via Adversarial Mixing with Disentangled Representations pdf: https://arxiv.org/pdf/1912.03192.pdf abs: https://arxiv.org/abs/1912.03192 https://t.co/Mk7vKyNSgF
0 replies, 37 likes


Dec 12 2019 pushmeet

Excited to share recent work from our team @DeepMindAI that tries to make sure that ML systems generalize properly to specific variations encountered in the real world.
0 replies, 35 likes


Dec 24 2019 HubReports | HubBucket HealthIT Algorithm Auditing

#MachineLearning and #DeepLearning #Algorithm Training #Data is often collected through a #Biased process. 🥇@DeepMindAI shows how Adversarial Training through Disentangled Representations can REDUCE the #Race and #Gender #Bias in #Datasets 🖥️http://arxiv.org/abs/1912.03192 https://t.co/uVC3qajfOm
0 replies, 9 likes


Dec 12 2019 Sven Gowal

Our latest work demonstrates that it is possible to reduce the effect of bias using adversarial mixing on top of disentangled representations (as provided by StyleGAN models for example).
0 replies, 3 likes
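To illustrate the idea in the tweet above: with a disentangled representation, one can adversarially perturb only the nuisance (style) factors of a latent code while freezing the label-relevant factors, so the search explores realistic variations rather than arbitrary pixel noise. The following is a minimal toy sketch of that restriction, not the paper's implementation — the linear classifier, the semantic/nuisance split, and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
w = rng.normal(size=d)        # toy linear classifier: score = w @ z
z = rng.normal(size=d)        # disentangled latent code of one example
y = 1.0                       # true label in {-1, +1}

semantic = slice(0, d // 2)   # label-relevant factors: kept fixed
nuisance = slice(d // 2, d)   # style/spurious factors: adversarially mixed

def margin(code):
    return y * (w @ code)     # larger margin = more confidently correct

# FGSM-style sign steps on the nuisance dims only, projected back into
# an L_inf ball of radius eps around the original latent code.
eps, step = 0.5, 0.1
z_adv = z.copy()
for _ in range(10):
    grad = -y * w                                  # ascent direction on the loss
    z_adv[nuisance] += step * np.sign(grad[nuisance])
    z_adv[nuisance] = np.clip(z_adv[nuisance],
                              z[nuisance] - eps, z[nuisance] + eps)

print(margin(z), margin(z_adv))   # the margin drops; semantic dims are unchanged
```

In the actual setting, `z` would come from a generative model such as StyleGAN, and the adversarially mixed code would be decoded back into an image and fed to the classifier during training.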
