
CHARACTERISING BIAS IN COMPRESSED MODELS

Comments

Sara Hooker: It is not just the data. Popular compression techniques amplify bias in deep neural networks. Work co-led with @nunuska, with @_whatcode, Samy Bengio and @cephaloponderer. "Characterizing the bias of compressed models" -- pre-print: https://arxiv.org/abs/2010.03058

10 replies, 418 likes
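The diagnostic at the heart of the paper is the set of examples on which the dense and compressed models systematically disagree (the authors call these Compression Identified Exemplars, or CIEs). A minimal sketch of flagging them from stored predictions, assuming integer prediction arrays from several independently trained dense and pruned runs (the function names, array shapes, and toy data are illustrative, not from the paper):

```python
import numpy as np

def modal_prediction(preds):
    """Column-wise modal (majority) prediction across a population of models.

    preds: int array of shape (n_models, n_examples).
    """
    n_classes = preds.max() + 1
    counts = np.apply_along_axis(np.bincount, 0, preds, minlength=n_classes)
    return counts.argmax(axis=0)  # shape (n_examples,)

def compression_identified_exemplars(dense_preds, compressed_preds):
    """Flag examples where the compressed population's modal prediction
    diverges from the dense population's -- the paper's CIE idea, sketched."""
    return modal_prediction(dense_preds) != modal_prediction(compressed_preds)

# Toy usage: 5 dense and 5 pruned models, 8 test examples, 3 classes.
rng = np.random.default_rng(0)
dense = rng.integers(0, 3, size=(5, 8))
pruned = rng.integers(0, 3, size=(5, 8))
print(compression_identified_exemplars(dense, pruned))  # boolean mask per example
```

Using the modal prediction over several runs, rather than a single model pair, separates systematic divergence caused by compression from ordinary run-to-run noise.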


Vitaly Feldman: Nice to see the experiments (https://arxiv.org/pdf/2010.03058.pdf) follow the predictions of theory (https://arxiv.org/abs/1906.05271) for once. Limiting memorization (such as in model compression) has a larger effect on the accuracy of lower-frequency subpopulations.

1 reply, 81 likes
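Feldman's prediction is straightforward to check once per-example correctness is logged for both models. A small sketch, assuming boolean correctness arrays and a subgroup label per test example (all names and the toy data here are illustrative):

```python
import numpy as np

def accuracy_delta_by_subgroup(correct_dense, correct_pruned, subgroup):
    """Per-subgroup accuracy change (pruned minus dense), sorted by subgroup
    frequency. The memorization theory predicts the most negative deltas
    concentrate in the rarest subgroups."""
    groups, counts = np.unique(subgroup, return_counts=True)
    deltas = np.array([
        correct_pruned[subgroup == g].mean() - correct_dense[subgroup == g].mean()
        for g in groups
    ])
    order = np.argsort(counts)  # rarest subgroup first
    return groups[order], counts[order], deltas[order]

# Toy usage: 1000 examples, 4 subgroups of very different sizes; in a real
# experiment these arrays would come from held-out evaluations of the dense
# and compressed models.
rng = np.random.default_rng(1)
subgroup = rng.choice(4, size=1000, p=[0.7, 0.2, 0.07, 0.03])
correct_dense = rng.random(1000) < 0.9
correct_pruned = rng.random(1000) < 0.85
print(accuracy_delta_by_subgroup(correct_dense, correct_pruned, subgroup))
```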


Kyunghyun Cho: Perhaps not surprising, but it's certainly not what I thought of: weight pruning is not so much compression as interpolation across different solutions, with different properties not captured by a usual test set.

2 replies, 52 likes


Raym Geis: Bias from data compression. “The network is remarkably tolerant of high levels of compression, but cannibalizes performance on underrepresented features in order to preserve top-line metrics.”

0 replies, 25 likes


Matthew Fenech: Deep NNs are often compressed to deal with resource constraints. Great paper here showing that the resulting errors disproportionately affect underrepresented parts of the dataset, introducing bias. It's not just the data - every choice made by development teams has consequences.

1 reply, 18 likes


Neil Thompson: Discouraging (but important) news: some of the techniques for decreasing the computational burden of deep learning can increase bias.

0 replies, 7 likes


Vikash Sehwag: Interesting! Pruning has a disparate impact on the accuracy of different classes. Great to see increasing focus on the impact of pruning on different performance metrics.

1 reply, 6 likes


Rohit Pgarg: Model compression/quantization/pruning disproportionately impacts unusual training samples.

0 replies, 6 likes
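For context on what these tweets mean by pruning: the compression techniques discussed here include magnitude pruning, which zeroes the smallest-magnitude weights while leaving top-line accuracy nearly intact. A minimal sketch using PyTorch's built-in pruning utility (the model and the 90% sparsity level are illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Illustrative model; the experiments in the paper use much larger networks.
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))

# Global unstructured magnitude pruning: zero out the 90% of weights with the
# smallest absolute value, pooled across all linear layers at once.
parameters_to_prune = [
    (m, "weight") for m in model.modules() if isinstance(m, nn.Linear)
]
prune.global_unstructured(
    parameters_to_prune,
    pruning_method=prune.L1Unstructured,
    amount=0.9,
)

# Sparsity check: pruned weights are masked rather than removed, so the
# parameter count is unchanged while 90% of the weights are exactly zero.
total = sum(m.weight.numel() for m, _ in parameters_to_prune)
zeros = sum((m.weight == 0).sum().item() for m, _ in parameters_to_prune)
print(f"sparsity: {zeros / total:.2%}")
```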


Cody Blakeney: This is why it's very important to preserve not just the accuracy but the decision boundary and internal representations. Pruning is not compression. https://bit.ly/3jTNsbm

0 replies, 2 likes
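Blakeney's point suggests compressing against an objective that tracks more than top-1 accuracy. One common way to act on it, sketched below with a distillation-style loss (this is not necessarily the method of the linked paper; the weighting, temperature, and toy tensors are illustrative), is to penalise a pruned "student" model's drift from the original dense "teacher" in both the output distribution and an intermediate representation:

```python
import torch
import torch.nn.functional as F

def representation_preserving_loss(student_logits, teacher_logits,
                                   student_feats, teacher_feats,
                                   alpha=0.5, temperature=2.0):
    """Combine a decision-boundary term (softened output KL) with an
    internal-representation term (feature MSE). alpha and temperature are
    illustrative hyperparameters."""
    # KL between softened output distributions: standard distillation term.
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    # MSE between intermediate features keeps internal representations close.
    feat = F.mse_loss(student_feats, teacher_feats)
    return alpha * kd + (1 - alpha) * feat

# Toy usage with random tensors standing in for real activations.
s_logits, t_logits = torch.randn(8, 10), torch.randn(8, 10)
s_feats, t_feats = torch.randn(8, 128), torch.randn(8, 128)
print(representation_preserving_loss(s_logits, t_logits, s_feats, t_feats))
```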


Simone Scardapane: Model compression amplifies model bias on the CelebA dataset. 👇 Makes a lot of sense upon reflection: if you have constraints and want to preserve an average metric, it's easiest to let go of the "end of the tail". Looking forward to "fairness-aware compression" in the future!

1 reply, 2 likes


Content

Found on Oct 13 2020 at https://arxiv.org/pdf/2010.03058.pdf

PDF content of a computer science paper: CHARACTERISING BIAS IN COMPRESSED MODELS