Papers of the day

On Linear Identifiability of Learned Representations

Comments

Durk Kingma: Are nonlinear features learned by deep discriminative, contrastive, autoregressive etc. models arbitrary? No! We show (theoretically and empirically) that, under mild conditions, you will learn the same features every time you train, up to only a linear transformation.

4 replies, 465 likes
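The claim above — that separately trained models recover the same features up to a linear transformation — can be checked empirically with a simple alignment test. The sketch below is hypothetical and uses synthetic data (not the authors' code): it simulates two "training runs" whose features differ only by an unknown invertible linear map, then fits a least-squares linear map between them and reports how well it aligns the two feature spaces.

```python
import numpy as np

# Hypothetical sketch of a linear-identifiability check on synthetic data.
rng = np.random.default_rng(0)

# Pretend z is the shared "true" representation; each training run
# recovers it only up to an unknown invertible linear map, plus noise.
n, d = 1000, 16
z = rng.normal(size=(n, d))
A = rng.normal(size=(d, d))   # run 1's linear indeterminacy
B = rng.normal(size=(d, d))   # run 2's linear indeterminacy
feats1 = z @ A + 0.01 * rng.normal(size=(n, d))
feats2 = z @ B + 0.01 * rng.normal(size=(n, d))

# If the representations agree up to a linear transformation, a
# least-squares map from feats1 to feats2 should fit almost perfectly.
W, *_ = np.linalg.lstsq(feats1, feats2, rcond=None)
pred = feats1 @ W
r2 = 1.0 - ((feats2 - pred) ** 2).sum() / ((feats2 - feats2.mean(0)) ** 2).sum()
print(f"R^2 of linear alignment: {r2:.4f}")
```

With features that truly differ only linearly, the fitted R² is close to 1; for unrelated representations it would be much lower, so the same test doubles as a falsifiable probe of the paper's claim on real models.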


Geoffrey Roeder: New 📑 w/ @Luke_Metz @dpkingma: https://arxiv.org/abs/2007.00810 We prove that a large family of deep discriminative models are identifiable in function space up to a linear indeterminacy, with empirical results on synthetic & real data. Why should our field care about identifiability?👇 https://t.co/AyjoH2ArDQ

5 replies, 265 likes


Daisuke Okanohara: A large family of NN discriminative models (e.g., classification, CPC, BERT, GPT), which can be represented in the canonical discriminative form, is identifiable in function space up to a linear transformation, similar to the nonlinear ICA case. https://arxiv.org/abs/2007.00810

0 replies, 8 likes


arXiv in review: #NeurIPS2020 On Linear Identifiability of Learned Representations. (arXiv:2007.00810v1 [stat.ML]) http://arxiv.org/abs/2007.00810

0 replies, 5 likes


Leo Dirac: More good theory on how the outcome of training deep nets is more predictable and deterministic than you might expect from random initialization. Good complement to https://arxiv.org/abs/2007.00810

1 reply, 3 likes


Brundage Bot: On Linear Identifiability of Learned Representations. Geoffrey Roeder, Luke Metz, and Diederik P. Kingma http://arxiv.org/abs/2007.00810

1 reply, 2 likes


arxiv: On Linear Identifiability of Learned Representations. http://arxiv.org/abs/2007.00810 https://t.co/WZ83fF2mxu

0 replies, 1 like


Content

Found on Jul 06 2020 at https://arxiv.org/pdf/2007.00810.pdf

PDF content of a computer science paper: On Linear Identifiability of Learned Representations