
Improved Sample Complexities for Deep Networks and Robust Classification via an All-Layer Margin

Comments

Oct 21 2019 Tengyu Ma

A new paper on improving the generalization of deep models (w.r.t. clean or robust accuracy) by theory-inspired explicit regularizers. https://arxiv.org/abs/1910.04284
0 replies, 452 likes


Oct 22 2019 Sanjeev Arora

Saw the talk on it at the recent DLTheory workshop @the_IAS. Cool stuff!
0 replies, 51 likes


Oct 21 2019 Hossein Mobahi

For quite some time (NeurIPS18, ICLR19), we have empirically observed that the margin at intermediate layers carries significant information about the generalization of a deep model. Delighted to see @tengyu has now proved this phenomenon and provided a cleaner definition of the all-layer margin.
0 replies, 30 likes
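For readers who have not seen the paper, here is a hedged sketch of what the all-layer margin measures, in our own notation (the exact definition and normalization are in https://arxiv.org/abs/1910.04284). Write the network as a composition F(x) = f_k(f_{k-1}(... f_1(x))) and allow a perturbation delta_i to be added to the output of every layer i; one natural formalization is

    h_1(x, \delta) = f_1(x) + \delta_1 \, \lVert x \rVert
    h_i(x, \delta) = f_i\big(h_{i-1}(x, \delta)\big) + \delta_i \, \lVert h_{i-1}(x, \delta) \rVert, \quad i = 2, \dots, k
    m_F(x, y) = \min_{\delta_1, \dots, \delta_k} \sqrt{\textstyle\sum_{i=1}^{k} \lVert \delta_i \rVert_2^2} \quad \text{s.t.} \quad \arg\max_j \, h_k(x, \delta)_j \neq y

i.e. the all-layer margin of an example (x, y) is the size of the smallest combined perturbation, applied simultaneously at every layer, that flips the prediction; larger all-layer margins on the training data correspond to the improved sample-complexity bounds the title refers to.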


Oct 22 2019 Daisuke Okanohara

DNN generalization can be predicted accurately, empirically, by using margin distributions across all layers (https://arxiv.org/abs/1810.00113). This paper shows that the all-layer margin in fact has a better theoretical relationship with generalization. https://arxiv.org/abs/1910.04284
0 replies, 16 likes


Oct 22 2019 Québec.AI

"Improved Sample Complexities for Deep Networks and Robust Classification via an All-Layer Margin" Colin Wei and Tengyu Ma : https://arxiv.org/abs/1910.04284 #DeepLearning #MachineLearning #NeuralNetworks https://t.co/H9e4dSiOVe
0 replies, 6 likes


Content