DeepMind: Training data is often collected through a biased process. Models trained on such data are inherently biased. We demonstrate how adversarial training through disentangled representations can reduce the effect of spurious correlations present in datasets: http://arxiv.org/abs/1912.03192 https://t.co/YjJpnY1E8k
2 replies, 427 likes
roadrunner01: Achieving Robustness in the Wild via Adversarial Mixing with Disentangled Representations
abs: https://arxiv.org/abs/1912.03192 https://t.co/Mk7vKyNSgF
0 replies, 37 likes
pushmeet: Excited to share recent work from our team @DeepMindAI that tries to make sure that ML systems generalize properly to specific variations encountered in the real world.
0 replies, 35 likes
HubReports | HubBucket HealthIT Algorithm Auditing: #MachineLearning and #DeepLearning #Algorithm Training #Data is often collected through a #Biased process.
🥇@DeepMindAI shows how Adversarial Training through Disentangled Representations can REDUCE the effect of #Race and #Gender #Bias present in #Datasets
0 replies, 9 likes
Sven Gowal: Our latest work demonstrates that it is possible to reduce the effect of bias using adversarial mixing on top of disentangled representations (as provided, for example, by StyleGAN models).
0 replies, 3 likes
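The idea sketched in the tweet above (adversarial mixing over disentangled latents) can be illustrated with a toy example. This is a minimal sketch under loose assumptions, not the paper's implementation: the `decode` function stands in for a StyleGAN-like generator with separate content and nuisance (style) codes, the classifier is a toy linear softmax, and the adversarial search over mixing coefficients is done by simple random search rather than gradient ascent. All names here (`decode`, `adversarial_mix`, etc.) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy classifier weights: 8-dim input, 2 classes. In the paper's setting this
# would be the model being robustified; here it is a random linear map.
W = rng.normal(size=(8, 2))

def decode(z_content, z_style):
    """Stand-in 'generator': the input depends on a content code plus a
    nuisance/style code, mimicking a disentangled latent space."""
    return np.tanh(z_content + 0.5 * z_style)

def loss(x, label):
    """Cross-entropy loss of a linear softmax classifier on input x."""
    logits = x @ W
    logits = logits - logits.max()            # numerical stability
    p = np.exp(logits) / np.exp(logits).sum()
    return -np.log(p[label] + 1e-12)

def adversarial_mix(z_content, z_style_own, z_style_other, label, n_trials=64):
    """Search over interpolation coefficients between two style codes,
    keeping the mix that most increases the classifier's loss while the
    content code (and hence the label) is held fixed."""
    best_alpha = 0.0
    best_loss = loss(decode(z_content, z_style_own), label)
    for _ in range(n_trials):
        alpha = rng.uniform(0.0, 1.0)
        z_mix = (1.0 - alpha) * z_style_own + alpha * z_style_other
        l = loss(decode(z_content, z_mix), label)
        if l > best_loss:
            best_alpha, best_loss = alpha, l
    return best_alpha, best_loss

# Two examples share no content; we mix the style of one into the other.
z_c = rng.normal(size=8)
z_s_a = rng.normal(size=8)
z_s_b = rng.normal(size=8)
alpha, worst = adversarial_mix(z_c, z_s_a, z_s_b, label=0)
print(alpha, worst)
```

Training the classifier on such worst-case mixed examples (instead of only the originals) is what would, in spirit, discourage it from relying on the nuisance/style factor; the real method uses a learned generator and gradient-based attacks rather than random search.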
Found on Dec 12 2019 at https://arxiv.org/pdf/1912.03192.pdf