Papers of the day

Harnessing the Power of Infinitely Wide Deep Nets on Small-data Tasks


Oct 07 2019 Sanjeev Arora

Conventional wisdom: "Not enough data? Use classic learners (Random Forests, RBF SVM, ...), not deep nets." New paper: infinitely wide nets beat these and also beat finite nets. Infinite nets even train faster than finite nets here (hint: Neural Tangent Kernel)!
8 replies, 867 likes
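The thread's point is that an infinitely wide net reduces to kernel regression with the Neural Tangent Kernel, which can be computed exactly rather than by gradient descent. Below is a minimal sketch of this idea using the standard closed-form NTK of a two-layer infinitely wide ReLU network on a toy dataset (the paper itself computes NTKs for deep convolutional nets; the toy data, target function, and ridge term here are illustrative assumptions, not from the paper):

```python
import numpy as np

def ntk_2layer_relu(X, Z):
    """Closed-form NTK between rows of X and Z for an infinitely wide
    two-layer ReLU network (inputs L2-normalized). With u = <x, z>:
      kappa0(u) = (pi - arccos(u)) / pi
      kappa1(u) = (u * (pi - arccos(u)) + sqrt(1 - u^2)) / pi
      Theta(x, z) = kappa1(u) + u * kappa0(u)
    """
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    u = np.clip(Xn @ Zn.T, -1.0, 1.0)
    k0 = (np.pi - np.arccos(u)) / np.pi
    k1 = (u * (np.pi - np.arccos(u)) + np.sqrt(1.0 - u**2)) / np.pi
    return k1 + u * k0

rng = np.random.default_rng(0)
X_train = rng.normal(size=(40, 5))    # tiny "small-data" task (toy)
y_train = np.sign(X_train[:, 0])      # simple illustrative target

# Kernel ridge regression with the NTK: one linear solve, no SGD.
K = ntk_2layer_relu(X_train, X_train)
alpha = np.linalg.solve(K + 1e-6 * np.eye(len(K)), y_train)

X_test = rng.normal(size=(10, 5))
y_pred = ntk_2layer_relu(X_test, X_train) @ alpha
acc = np.mean(np.sign(y_pred) == np.sign(X_test[:, 0]))
print(f"test accuracy: {acc:.2f}")
```

The single `np.linalg.solve` replaces the entire training loop of a finite-width net, which is why the infinite-width model can be faster to "train" on small datasets.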

Oct 08 2019 Christian Szegedy

Interesting finding: neural tangent kernels do very well on little data.
2 replies, 13 likes

Oct 07 2019 Alain Rakotomamonjy

kernels are not dead!
0 replies, 6 likes

Oct 08 2019 Daisuke Okanohara

The NTK, and kernel regression with the NTK, can be computed exactly. While on large-data tasks original NNs are better than the NTK, on small-data tasks the NTK is better than NNs, and also better than random forests and SVMs with other kernels.
0 replies, 3 likes