
Meta-Learning with Implicit Gradients

Comments

Sep 11 2019 Chelsea Finn

It's hard to scale meta-learning to long inner optimizations. We introduce iMAML, which meta-learns *without* differentiating through the inner optimization path using implicit differentiation. https://arxiv.org/abs/1909.04630 to appear @NeurIPSConf w/ @aravindr93 @ShamKakade6 @svlevine https://t.co/fBznTaubgr
8 replies, 557 likes
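
For context, the implicit gradient the tweet describes can be sketched from the paper's proximally regularized inner objective (symbols \hat{L}, \lambda, \phi^*, \theta follow the arXiv paper; L_test is the outer test loss):

\[ \phi^*(\theta) = \arg\min_{\phi}\; \hat{L}(\phi) + \frac{\lambda}{2}\lVert \phi - \theta \rVert^2 \quad\Rightarrow\quad \nabla \hat{L}(\phi^*) + \lambda(\phi^* - \theta) = 0. \]

Differentiating this stationarity condition with respect to \theta (implicit function theorem) yields the meta-gradient with no dependence on the inner optimization path:

\[ \nabla_\theta L_{\text{test}}(\phi^*(\theta)) = \Big(I + \tfrac{1}{\lambda}\nabla^2 \hat{L}(\phi^*)\Big)^{-1} \nabla_\phi L_{\text{test}}(\phi^*). \]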


Sep 11 2019 Sergey Levine

This paper presents a way to use MAML with any inner loop optimizer, differentiable or not, via the implicit function theorem. This makes MAML even more general, and allows for some interesting analysis.
0 replies, 144 likes
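
A minimal runnable sketch of that meta-gradient in JAX, assuming a toy least-squares task, a plain gradient-descent inner loop, and conjugate gradient for the inverse-Hessian-vector product; the names inner_loss, adapt, and implicit_meta_grad are illustrative, not from the authors' code:

import jax
import jax.numpy as jnp

lam = 1.0  # strength of the proximal regularizer (lam/2)*||phi - theta||^2

def inner_loss(phi, x, y):
    # Task training loss: least squares for a linear model.
    return jnp.mean((x @ phi - y) ** 2)

def adapt(theta, x, y, steps=100, lr=0.1):
    # Inner loop: minimize inner_loss(phi) + (lam/2)*||phi - theta||^2.
    # Any optimizer works here; nothing below differentiates through it.
    reg_loss = lambda phi: inner_loss(phi, x, y) + 0.5 * lam * jnp.sum((phi - theta) ** 2)
    g = jax.grad(reg_loss)
    phi = theta
    for _ in range(steps):
        phi = phi - lr * g(phi)
    return phi  # approximate inner solution phi*(theta)

def implicit_meta_grad(theta, x_tr, y_tr, x_te, y_te):
    phi = adapt(theta, x_tr, y_tr)
    # Outer (test-loss) gradient at the adapted parameters.
    v = jax.grad(lambda p: inner_loss(p, x_te, y_te))(phi)
    # Hessian-vector product of the inner training loss at phi*.
    hvp = lambda u: jax.jvp(jax.grad(lambda p: inner_loss(p, x_tr, y_tr)), (phi,), (u,))[1]
    # Solve (I + (1/lam) H) g = v by conjugate gradient, using only HVPs,
    # instead of backpropagating through the inner optimization path.
    A = lambda u: u + hvp(u) / lam
    g, _ = jax.scipy.sparse.linalg.cg(A, v, maxiter=20)
    return g  # d L_test(phi*(theta)) / d theta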


Sep 12 2019 Sham Kakade

Excited to share this new work:
0 replies, 17 likes


Sep 14 2019 Daisuke Okanohara

Meta-learning requires an inner-loop optimization for each task, and implicit differentiation can compute the gradient directly from the solution (iMAML). https://arxiv.org/abs/1909.04630 A similar idea is also proposed in a hierarchical Bayesian meta-learning setting: https://arxiv.org/abs/1909.05557
0 replies, 8 likes
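
A toy usage of the sketch above (hypothetical shapes and step sizes), performing one outer meta-update:

k1, k2 = jax.random.split(jax.random.PRNGKey(0))
w_true = jnp.ones(5)                        # shared ground-truth weights for both splits
x_tr = jax.random.normal(k1, (32, 5)); y_tr = x_tr @ w_true
x_te = jax.random.normal(k2, (32, 5)); y_te = x_te @ w_true
theta = jnp.zeros(5)
meta_g = implicit_meta_grad(theta, x_tr, y_tr, x_te, y_te)
theta = theta - 0.01 * meta_g               # one step on the meta-parameters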

