
Harnessing the Power of Infinitely Wide Deep Nets on Small-data Tasks

Comments

Oct 07 2019 Sanjeev Arora

Conventional wisdom: "Not enough data? Use classic learners (random forests, RBF SVMs, ...), not deep nets." New paper: infinitely wide nets beat these, and also beat finite nets. Infinite nets train faster than finite nets here (hint: Neural Tangent Kernel)! https://arxiv.org/abs/1910.01663
8 replies, 867 likes
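
Background for the NTK hint above: in the infinite-width limit, training a net by gradient descent on squared loss reduces to kernel regression with the Neural Tangent Kernel, which has a closed form for simple architectures. Below is a minimal NumPy sketch, assuming a two-layer bias-free ReLU net with standard Gaussian initialization; the function names are illustrative, and the paper itself uses convolutional NTKs rather than this fully connected kernel.

    import numpy as np

    def ntk_two_layer_relu(X1, X2):
        # Closed-form NTK of an infinitely wide two-layer bias-free ReLU net:
        # Theta(x, x') = ||x|| ||x'|| / (2*pi) * (2*(pi - t)*cos(t) + sin(t)),
        # where t is the angle between x and x'.
        n1 = np.linalg.norm(X1, axis=1, keepdims=True)        # (n1, 1)
        n2 = np.linalg.norm(X2, axis=1, keepdims=True)        # (n2, 1)
        cos = np.clip((X1 @ X2.T) / (n1 * n2.T), -1.0, 1.0)   # assumes nonzero inputs
        theta = np.arccos(cos)
        return (n1 * n2.T) / (2 * np.pi) * (2 * (np.pi - theta) * cos + np.sin(theta))

    def ntk_regression(X_train, y_train, X_test, ridge=1e-6):
        # Kernel regression with the NTK: the infinitely wide net trained to
        # convergence by gradient descent on MSE makes exactly these predictions.
        K = ntk_two_layer_relu(X_train, X_train)
        K_test = ntk_two_layer_relu(X_test, X_train)
        alpha = np.linalg.solve(K + ridge * np.eye(len(X_train)), y_train)
        return K_test @ alpha

The small ridge term is only there to stabilize the linear solve on near-singular kernel matrices; exact NTK regression corresponds to ridge = 0.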


Oct 08 2019 Christian Szegedy

Interesting finding: neural tangent kernels do very well on little data.
2 replies, 13 likes


Oct 07 2019 Alain Rakotomamonjy

Kernels are not dead!
0 replies, 6 likes


Oct 08 2019 Daisuke Okanohara

The NTK, and kernel regression with the NTK, can be computed exactly (https://arxiv.org/abs/1904.11955). On large-data tasks the original NNs beat the NTK, but on small-data tasks the NTK beats NNs, as well as random forests and SVMs with other kernels. https://arxiv.org/abs/1910.01663
0 replies, 3 likes
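
In practice, none of these formulas need to be derived by hand. A sketch of the same pipeline, assuming the open-source neural-tangents JAX library (https://github.com/google/neural-tangents) and its stax/predict API as I recall it, so treat the exact calls as an assumption:

    import jax.numpy as jnp
    from jax import random
    import neural_tangents as nt
    from neural_tangents import stax

    # Infinitely wide fully connected ReLU net. The widths passed to Dense
    # matter only for finite-width sampling; kernel_fn is the exact infinite limit.
    _, _, kernel_fn = stax.serial(
        stax.Dense(512), stax.Relu(),
        stax.Dense(512), stax.Relu(),
        stax.Dense(1),
    )

    # Tiny synthetic regression task standing in for a small-data problem.
    key1, key2 = random.split(random.PRNGKey(0))
    x_train = random.normal(key1, (20, 10))
    y_train = jnp.sin(x_train.sum(axis=1, keepdims=True))
    x_test = random.normal(key2, (5, 10))

    # Mean prediction of the infinitely wide net trained to convergence with
    # gradient descent on MSE, i.e. NTK kernel regression in closed form.
    predict_fn = nt.predict.gradient_descent_mse_ensemble(
        kernel_fn, x_train, y_train, diag_reg=1e-4)
    y_pred = predict_fn(x_test=x_test, get='ntk')

The closed-form prediction never instantiates a finite net or runs an optimizer, which is why the infinite net can be faster to "train" than its finite counterparts on small datasets.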

