
Harnessing the Power of Infinitely Wide Deep Nets on Small-data Tasks


Oct 07 2019 Sanjeev Arora

Conventional wisdom: "Not enough data? Use classic learners (Random Forests, RBF SVM, ..), not deep nets." New paper: infinitely wide nets beat these and also beat finite nets. Infinite nets train faster than finite nets here (hint: Neural Tangent Kernel)!
9 replies, 885 likes

Dec 20 2019 Simon Shaolei Du

Our paper is accepted at #ICLR2020 for a spotlight presentation!
0 replies, 51 likes

Dec 05 2019 Greg Yang

@remilouf Hi Remi, thanks for the compliment! In a way, yes: recent work has shown that kernels obtained through this limiting GP (or another related kernel, the Neural Tangent Kernel) outperform all other methods when there is not much data. However ...
1 replies, 14 likes

Oct 08 2019 Christian Szegedy

Interesting finding: neural tangent kernels do very well on little data.
2 replies, 13 likes

Oct 08 2019 Daisuke Okanohara

The NTK, and kernel regression with the NTK, can be computed exactly. While on large-data tasks the original NNs are better than the NTK, on small-data tasks the NTK beats NNs, as well as random forests and SVMs with other kernels.
0 replies, 7 likes
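The "computed exactly" point deserves unpacking: for a fully-connected ReLU network of infinite width, the NTK has a closed form via the arc-cosine kernel recursion, so kernel regression with it needs no gradient descent at all. Below is a minimal numpy sketch of this idea (the depth, ridge parameter, and function names are illustrative choices, not the paper's reference implementation):

```python
import numpy as np

def ntk_relu(X1, X2, depth=3):
    """Exact NTK of an infinitely wide fully-connected ReLU net.

    Standard recursion (with He-style normalization so diagonals
    are preserved):
      Sigma^0(x, x') = x . x'
      Sigma^h        = arc-cosine kernel of order 1 (post-activation cov.)
      Sigma_dot^h    = arc-cosine kernel of order 0 (derivative cov.)
      Theta^h        = Theta^{h-1} * Sigma_dot^h + Sigma^h
    """
    K12 = X1 @ X2.T                    # Sigma^0 cross-covariance
    K11 = np.sum(X1 * X1, axis=1)      # Sigma^0(x, x) for rows of X1
    K22 = np.sum(X2 * X2, axis=1)      # Sigma^0(x', x') for rows of X2
    theta = K12.copy()                 # Theta^0 = Sigma^0
    for _ in range(depth):
        norms = np.sqrt(np.outer(K11, K22))
        cos = np.clip(K12 / np.maximum(norms, 1e-12), -1.0, 1.0)
        ang = np.arccos(cos)
        # Normalized arc-cosine kernels for ReLU
        K12 = norms * (np.sin(ang) + (np.pi - ang) * cos) / np.pi
        dot = (np.pi - ang) / np.pi
        theta = theta * dot + K12
        # Diagonals are unchanged under this normalization
    return theta

def ntk_regression(X_train, y_train, X_test, depth=3, ridge=1e-6):
    """Kernel (ridge) regression with the exact NTK: no training loop."""
    K_tt = ntk_relu(X_train, X_train, depth)
    K_st = ntk_relu(X_test, X_train, depth)
    alpha = np.linalg.solve(K_tt + ridge * np.eye(len(X_train)), y_train)
    return K_st @ alpha
```

On a small synthetic task this behaves like any kernel method: build the Gram matrix once, solve one linear system, predict. That one-shot solve is the "infinite nets train faster than finite nets" point from the thread, since no SGD over network weights is needed.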

Oct 07 2019 Alain Rakotomamonjy

kernels are not dead!
0 replies, 6 likes