
Meta-Learning with Implicit Gradients

Comments

Sep 11 2019 Chelsea Finn

It's hard to scale meta-learning to long inner optimizations. We introduce iMAML, which meta-learns *without* differentiating through the inner optimization path using implicit differentiation. https://arxiv.org/abs/1909.04630 to appear @NeurIPSConf w/ @aravindr93 @ShamKakade6 @svlevine https://t.co/fBznTaubgr
8 replies, 567 likes


Dec 10 2019 Chelsea Finn

We're presenting our work on meta-learning with implicit differentiation @NeurIPSConf Come find us at the Tuesday evening poster session #47, tomorrow 5:30-7:30 pm. https://t.co/T4FQbKSk2M
0 replies, 154 likes


Sep 11 2019 Sergey Levine

This paper presents a way to use MAML with any inner loop optimizer, differentiable or not, via the implicit function theorem. This makes MAML even more general, and allows for some interesting analysis.
0 replies, 144 likes
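
For readers who want a concrete picture of the implicit-function-theorem step described in the tweets above, here is a minimal JAX sketch, not the authors' released code. The names inner_loss(params, task), outer_loss(params, task), phi_star, and lam are hypothetical: inner_loss is the task training loss used in the regularized inner problem, outer_loss is the query loss, phi_star is the inner solution obtained by any optimizer, and lam is the regularization strength. The meta-gradient is obtained by solving (I + H/lam) x = grad_outer with conjugate gradient using only Hessian-vector products, so the inner optimization path is never differentiated through.

```python
# Minimal sketch of an iMAML-style implicit meta-gradient (hypothetical names,
# not the authors' released code). Assumes the task parameters are a flat vector.
import jax
from jax.scipy.sparse.linalg import cg


def implicit_meta_grad(inner_loss, outer_loss, phi_star, task, lam):
    """Meta-gradient at the inner solution phi_star via the implicit function theorem.

    Solves (I + H / lam) x = grad_outer with conjugate gradient, where H is the
    Hessian of the inner (training) loss at phi_star, so the inner optimization
    path never needs to be differentiated through.
    """
    # Gradient of the query (test) loss at the inner solution.
    grad_outer = jax.grad(outer_loss)(phi_star, task)

    def matvec(v):
        # (I + H / lam) v, using a Hessian-vector product so H is never materialized.
        hvp = jax.jvp(jax.grad(lambda p: inner_loss(p, task)), (phi_star,), (v,))[1]
        return v + hvp / lam

    meta_grad, _ = cg(matvec, grad_outer, maxiter=20)
    return meta_grad
```

Because only the solution phi_star and Hessian-vector products at that point are needed, the inner loop can use any optimizer, differentiable or not, which is the point made in the comment above.
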


Sep 12 2019 Sham Kakade

Excited to share this new work:
0 replies, 17 likes


Dec 10 2019 Sergey Levine

RAIL talks @ #NeurIPS2019 today: 4:10pm (Ballrm A) Causal Confusion https://arxiv.org/abs/1905.11979 4:45pm (Ballrm A) GMPS: https://arxiv.org/abs/1904.00956 5:30pm posters @michaeljanner MBPO #192 https://arxiv.org/abs/1906.08253 @aravindr93 iMAML #47 https://arxiv.org/abs/1909.04630 causal confusion #174 GMPS #42
0 replies, 13 likes


Sep 14 2019 Daisuke Okanohara

Meta-learning requires inner-loop optimization for each task, and implicit differentiation can compute the meta-gradient directly from the inner-loop solution (iMAML). https://arxiv.org/abs/1909.04630 A similar idea is also proposed in the hierarchical Bayesian meta-learning setting. https://arxiv.org/abs/1909.05557
0 replies, 8 likes

