
Invariant Risk Minimization

Comments

Jul 08 2019 jörn jacobsen

Long-awaited and beautiful paper on "Invariant Risk Minimization" by Arjovsky et al. studies the relationship between invariance, causality, and the many pitfalls of ERM when biasing models toward simple functions. Love the Socratic dialogue the paper ends with... https://arxiv.org/abs/1907.02893 https://t.co/kGPRwk2aWY
0 replies, 213 likes


Jul 12 2019 hardmaru

Invariant Risk Minimization is a beautiful paper that I'll need to study deeply. Main idea: “To learn invariances across environments, find a data representation such that the optimal classifier on top of that representation matches for all environments.” https://arxiv.org/abs/1907.02893 https://t.co/S7zLivEDMr
3 replies, 210 likes
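
For reference, that quoted idea is the paper's constrained program (a sketch in the paper's notation, where R^e is the risk under environment e, Phi the data representation, and w the classifier on top of it):

    \min_{\Phi,\; w} \sum_{e \in \mathcal{E}_{\mathrm{tr}}} R^e(w \circ \Phi)
    \quad \text{subject to} \quad
    w \in \operatorname*{arg\,min}_{\bar{w}} R^e(\bar{w} \circ \Phi)
    \quad \text{for all } e \in \mathcal{E}_{\mathrm{tr}}

The constraint is what "matches for all environments" means: the same classifier w must be simultaneously optimal in every training environment.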


Jul 23 2019 Thomas G. Dietterich

@zittrain Machine Learning is on the cusp of a "causality revolution" driven by intellectual debt and the framework of @yudapearl. An important recent paper: Invariant Risk Minimization https://arxiv.org/abs/1907.02893 Two more pioneers in this effort: @eliasbareinboim and @suchisaria
1 replies, 52 likes


Aug 28 2019 Reiichiro Nakano

If anyone's interested in Invariant Risk Minimization (https://arxiv.org/abs/1907.02893) by Arjovsky et al., here's my attempt at reproducing the "Colored MNIST" experiments from the paper. Colab: https://colab.research.google.com/github/reiinakano/invariant-risk-minimization/blob/master/invariant_risk_minimization_colored_mnist.ipynb Repo: https://github.com/reiinakano/invariant-risk-minimization https://t.co/bm8pPV8z7y
1 replies, 50 likes
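
For the gist of that experiment without opening the notebook: each environment colors the digits so that color correlates with a noisy binary label at an environment-specific rate, and the correlation reverses at test time. A rough PyTorch sketch following the recipe in the paper (details such as the 14x14 downsampling are omitted, and make_environment is an illustrative name, not the notebook's API):

    import torch
    from torchvision import datasets

    def make_environment(images, labels, color_flip_prob, label_noise=0.25):
        # Binary label: 1 if the digit is 0-4, else 0, then flipped with
        # 25% probability, so color can become *more* predictive of the
        # noisy label than the digit shape is
        y = (labels < 5).float()
        y = torch.abs(y - (torch.rand(len(y)) < label_noise).float())
        # Color agrees with the noisy label except with prob color_flip_prob
        color = torch.abs(y - (torch.rand(len(y)) < color_flip_prob).float()).long()
        # Two-channel (red, green) images: zero out the unselected channel
        imgs = torch.stack([images, images], dim=1).float() / 255.0
        imgs[torch.arange(len(y)), 1 - color, :, :] = 0
        return imgs, y

    mnist = datasets.MNIST('~/data', train=True, download=True)
    x, t = mnist.data, mnist.targets
    # Two training environments (color disagrees with the label 10% / 20%
    # of the time) and a test environment where it disagrees 90% of the time
    env1 = make_environment(x[::3], t[::3], 0.1)
    env2 = make_environment(x[1::3], t[1::3], 0.2)
    test = make_environment(x[2::3], t[2::3], 0.9)

A plain ERM model trained on env1/env2 latches onto color and collapses on the test environment; the IRM penalty is what prevents that.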


Jul 09 2019 Ishaan Gulrajani

Very happy to share our work on invariance, causality, and out-of-distribution generalization! With Martín Arjovsky, Léon Bottou, David Lopez-Paz.
0 replies, 48 likes


Jul 25 2019 Suchi Saria

Tom, I hope this is the case, but I still see reluctance in the community toward stating what's true about the world (this goes against the ML instinct of learning everything de novo) & toward clearly understanding whether what's inferred is correct (because beating SOTA is the common yardstick).
0 replies, 26 likes


Jul 10 2019 Daisuke Okanohara

They propose a new training paradigm, "Invariant Risk Minimization" (IRM), to obtain predictors that are invariant to environmental changes. On top of the ERM term, IRM adds a gradient-norm penalty on a "dummy" classifier for each environment. Very impressive work https://arxiv.org/abs/1907.02893
0 replies, 20 likes
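
A minimal PyTorch sketch of that penalty, assuming the binary-classification setup used for Colored MNIST (model, envs, and penalty_weight are illustrative placeholders, not names from the authors' code):

    import torch
    import torch.nn.functional as F

    def irm_penalty(logits, y):
        # Squared gradient of the environment risk w.r.t. a fixed scalar
        # "dummy" classifier w = 1.0 placed on top of the representation;
        # the penalty vanishes when w = 1.0 is already optimal for this
        # environment
        dummy_w = torch.tensor(1.0, requires_grad=True)
        loss = F.binary_cross_entropy_with_logits(logits * dummy_w, y)
        grad = torch.autograd.grad(loss, [dummy_w], create_graph=True)[0]
        return (grad ** 2).sum()

    def irm_loss(model, envs, penalty_weight=1e4):
        # ERM term plus the per-environment gradient-norm penalty
        erm, penalty = 0.0, 0.0
        for x, y in envs:
            logits = model(x)
            erm = erm + F.binary_cross_entropy_with_logits(logits, y)
            penalty = penalty + irm_penalty(logits, y)
        return erm + penalty_weight * penalty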


Jul 12 2019 Ferenc Huszár🇪🇺

I have one question: who's Eric, and does he have a Twitter account? I have a few things to say...
1 replies, 14 likes


Jul 08 2019 Victor Veitch

This paper extending ERM to account for causal structure is creative, well-written, and tackles a very interesting and important problem. Most importantly, it's beautiful. Highly recommend. https://arxiv.org/abs/1907.02893
0 replies, 7 likes


Jul 21 2019 Adarsh Subbaswamy

For high dimensional structured inputs (e.g., text/images), it's the same principle behind the recent Invariant Risk Minimization preprint (https://arxiv.org/pdf/1907.02893.pdf) and related ideas therein. 2/2
0 replies, 5 likes


Jul 12 2019 Florian Mai

You may also want to have a look at Bottou's nice ICLR talk about this work. Starts at ~0:12:00. https://www.facebook.com/iclr.cc/videos/534780673594799/
1 replies, 2 likes


Nov 06 2019 Shane Gu

Very cool work from Martin called Invariant Risk Minimization. Summary: out-of-distribution generalization requires disentangling invariant correlations from spurious ones, using data from different interventions on the same causal structure. https://arxiv.org/abs/1907.02893
0 replies, 2 likes
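
In the paper's terms, the target is the out-of-distribution risk, i.e. the worst-case risk over the full set of environments (a sketch in the paper's notation):

    R^{\mathrm{OOD}}(f) = \max_{e \in \mathcal{E}_{\mathrm{all}}} R^e(f)

Minimizing this from only a few training environments is possible only by relying on correlations that stay invariant under every intervention that generates an environment, hence the disentangling described above.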


Jul 12 2019 Stephen Borstelmann

This seems true - see second photo
0 replies, 1 likes


Jul 19 2019 Ed Henry

Time well spent these last few days. Arjovsky et al.'s Invariant Risk Minimization (IRM) paper has been a great source of references and material worth spending time with. IRM Paper : https://arxiv.org/abs/1907.02893 Book : https://www.amazon.com/Counterfactuals-David-Lewis/dp/0631224955 https://t.co/HFNdynvqTC
0 replies, 1 likes


Jul 10 2019 Jigar Doshi

Also, I like the concluding informal dialogue section. It says "[We also encourage the reader to grab a cup of coffee.]" :-)
0 replies, 1 likes

