
Invariant Risk Minimization

Comments

hardmaru: Invariant Risk Minimization is a beautiful paper that I'll need to study deeply. Main idea: “To learn invariances across environments, find a data representation such that the optimal classifier on top of that representation matches for all environments.” https://arxiv.org/abs/1907.02893 https://t.co/S7zLivEDMr

5 replies, 251 likes
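
In symbols, the paper phrases this main idea as a constrained, bilevel objective over a data representation \Phi and a classifier w, where R^e denotes the risk under training environment e (a rough transcription of the paper's IRM objective; see the paper for the precise setup):

    \min_{\Phi, w} \sum_{e \in \mathcal{E}_{tr}} R^e(w \circ \Phi)
    \quad \text{subject to} \quad w \in \arg\min_{\bar{w}} R^e(\bar{w} \circ \Phi) \;\; \text{for all } e \in \mathcal{E}_{tr}

That is: among all representations, prefer one whose classifier on top is simultaneously optimal in every training environment.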


jörn jacobsen: Long-awaited and beautiful paper on "Invariant Risk Minimization" by Arjovsky et al. It studies the relationship between invariance, causality, and the many pitfalls of ERM when biasing models toward simple functions. Love the Socratic dialogue the paper ends with... https://arxiv.org/abs/1907.02893 https://t.co/kGPRwk2aWY

0 replies, 223 likes


Yann LeCun: @timnitGebru @soumithchintala Another very new way to identify and minimize bias is the recent work on "Invariant Risk Minimization" by Arjovsky et al. co-authored by my dear friend and colleague Léon Bottou https://arxiv.org/abs/1907.02893 14/N

3 replies, 122 likes


Thomas G. Dietterich: @zittrain Machine Learning is at the cusp of a "causality revolution" driven by intellectual debt and the framework of @yudapearl. An important recent paper: Invariant Risk Minimization https://arxiv.org/abs/1907.02893 Two more pioneers in this effort: @eliasbareinboim and @suchisaria

1 reply, 52 likes


Reiichiro Nakano: If anyone's interested in Invariant Risk Minimization (https://arxiv.org/abs/1907.02893) by Arjovsky et al., here's my attempt at reproducing the "Colored MNIST" experiments from the paper. Colab: https://colab.research.google.com/github/reiinakano/invariant-risk-minimization/blob/master/invariant_risk_minimization_colored_mnist.ipynb Repo: https://github.com/reiinakano/invariant-risk-minimization https://t.co/bm8pPV8z7y

1 reply, 50 likes
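
For orientation, the Colored MNIST construction being reproduced works roughly as follows, per the paper's description: 25% label noise in every environment, and a color that disagrees with the (noisy) label with probability 0.2 and 0.1 in the two training environments but 0.9 at test time. This is a NumPy sketch with illustrative names; the Colab above is the authoritative version:

    import numpy as np

    def make_environment(images, digits, color_flip_prob, rng):
        # Binary label from the digit: 0 for digits 0-4, 1 for digits 5-9.
        labels = (digits >= 5).astype(np.int64)
        # Flip the label with probability 0.25, identically in every
        # environment, so color can appear more predictive than shape.
        labels ^= (rng.random(len(labels)) < 0.25).astype(np.int64)
        # Color agrees with the noisy label except with probability
        # color_flip_prob, which is what varies across environments.
        colors = labels ^ (rng.random(len(labels)) < color_flip_prob).astype(np.int64)
        # Paint each grayscale digit into the red or green channel.
        colored = np.zeros((len(images), 2, 28, 28), dtype=images.dtype)
        colored[np.arange(len(images)), colors] = images
        return colored, labels

Here `rng` would be something like `np.random.default_rng(0)`; the two training environments use color_flip_prob 0.2 and 0.1, and the test environment 0.9.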


Ishaan Gulrajani: Very happy to share our work on invariance, causality, and out-of-distribution generalization! With Martín Arjovsky, Léon Bottou, David Lopez-Paz.

0 replies, 48 likes


Daisuke Okanohara: They propose a new training paradigm, "Invariant Risk Minimization" (IRM), to obtain predictors that are invariant to environmental changes. In addition to the ERM term, IRM adds a gradient-norm penalty on a "dummy" classifier for each environment. Very impressive work https://arxiv.org/abs/1907.02893

0 replies, 40 likes
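
A minimal PyTorch sketch of the penalty described above, modeled on the paper's practical variant (IRMv1): the "dummy" classifier is a fixed scalar multiplier w = 1.0 on the logits, and the penalty is the squared norm of the per-environment risk's gradient with respect to it. Names and the binary-classification setup here are illustrative, not the authors' exact code:

    import torch
    import torch.nn.functional as F

    def irm_penalty(logits, labels):
        # "Dummy" classifier: a fixed scalar multiplier w = 1.0 on the logits.
        scale = torch.ones(1, requires_grad=True)
        risk = F.binary_cross_entropy_with_logits(logits * scale, labels)
        # IRMv1 penalty: squared norm of the gradient of this environment's
        # risk with respect to the dummy classifier.
        grad = torch.autograd.grad(risk, [scale], create_graph=True)[0]
        return (grad ** 2).sum()

    def irm_loss(env_logits, env_labels, penalty_weight):
        # ERM term plus the gradient-norm penalty, summed over environments.
        erm = sum(F.binary_cross_entropy_with_logits(l, y)
                  for l, y in zip(env_logits, env_labels))
        penalty = sum(irm_penalty(l, y) for l, y in zip(env_logits, env_labels))
        return erm + penalty_weight * penalty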


Suchi Saria: Tom, I hope this is the case, but I still see reluctance in the community toward stating what's true in the world (this goes against the ML instinct of learning de novo) & toward clearly understanding whether what's inferred is correct (because beating SOTA is a common yardstick).

0 replies, 26 likes


Federico Pernici: "Learning Representations Using Causal Invariance", Léon Bottou's invited talk at #ICLR2019: https://videoken.com/embed/8UxS4ls6g1g?tocitem=2 It describes "Invariant Risk Minimization" 🔥 https://arxiv.org/pdf/1907.02893.pdf

0 replies, 16 likes


Ferenc Huszár🇪🇺: I have one question: who's Eric, and does he have a Twitter account? I have a few things to say...

1 reply, 14 likes


David Krueger: Our method, Risk Extrapolation (REx), is a simple and competitive alternative to Invariant Risk Minimization (IRM https://arxiv.org/abs/1907.02893) based on robust optimization. We use negative weights, or a variance penalty, to encourage 𝘀𝘁𝗿𝗶𝗰𝘁 equality of training risks: https://t.co/Yntad8AUhs

2 replies, 8 likes
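
For contrast with the IRMv1 penalty sketched earlier, the variance penalty this tweet mentions (V-REx) fits in a few lines, assuming one scalar risk per training environment. This is an illustrative sketch, not the authors' code:

    import torch

    def vrex_loss(env_risks, penalty_weight):
        # env_risks: a list of scalar risk tensors, one per training environment.
        risks = torch.stack(env_risks)
        # Mean risk (the ERM term) plus a penalty on the variance of risks
        # across environments, pushing the training risks toward equality.
        return risks.mean() + penalty_weight * risks.var()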


Ed Henry: Time well spent these last few days. Arjovsky et al.'s Invariant Risk Minimization (IRM) paper has been a great source of references and material worth spending time with. IRM Paper: https://arxiv.org/abs/1907.02893 Book: https://www.amazon.com/Counterfactuals-David-Lewis/dp/0631224955 https://t.co/HFNdynvqTC

1 reply, 7 likes


Victor Veitch: This paper extending ERM to account for causal structure is creative, well-written, and tackles a very interesting and important problem. Most importantly, it's beautiful. Highly recommend. https://arxiv.org/abs/1907.02893

0 replies, 7 likes


Alfredo Canziani: @reiinakano @ylecun @kchonyc @joanbruna @heinzedeml https://arxiv.org/abs/1907.02893

0 replies, 6 likes


Adarsh Subbaswamy: For high dimensional structured inputs (e.g., text/images), it's the same principle behind the recent Invariant Risk Minimization preprint (https://arxiv.org/pdf/1907.02893.pdf) and related ideas therein. 2/2

0 replies, 5 likes


Jason Mancuso: @tdietterich @zacharylipton I'm curious how this relates to Invariant Risk Minimization from Arjovsky et al.: https://arxiv.org/abs/1907.02893 Unfortunately, no citation in either direction

1 reply, 5 likes


Shane Gu: Very cool work from Martín called Invariant Risk Minimization. Summary: out-of-distribution generalization requires disentangling invariant correlations from spurious ones, using data from different interventions on the same causal structure. https://arxiv.org/abs/1907.02893

0 replies, 4 likes


Florian Mai: You may also want to have a look at Bottou's nice ICLR talk about this work. Starts at ~0:12:00. https://www.facebook.com/iclr.cc/videos/534780673594799/

1 reply, 2 likes


Jigar Doshi: Also, I like the concluding informal dialogue section. It says "[We also encourage the reader to grab a cup of coffee.]" :-)

0 replies, 1 like


Peter Steinbach: My heart jumps to see these IMHO essentially philosophical discussions in ML related papers. https://arxiv.org/abs/1907.02893 Galilei and other giants on whose shoulders we walk would be delighted. Thanks @zacharylipton @samcharrington @twimlai for sharing this gem. https://t.co/tKq1fExzC4

0 replies, 1 like


Stephen Borstelmann: This seems true - see second photo

0 replies, 1 like


Found on Jul 12 2019 at https://arxiv.org/pdf/1907.02893.pdf