
Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations

Comments

Tejas Kulkarni: Our work shows that adding geometric inductive biases to neural nets enables spatio-temporally consistent (hundreds of steps) object keypoints. This enables agents that play Atari games on a single machine in fewer than 100k steps and deeply explore hard environments without rewards. https://twitter.com/deepmindai/status/1145677732115898368

5 replies, 276 likes


Reza Zadeh: Best paper award at #ICML2019. Main idea: unsupervised learning of disentangled representations is fundamentally impossible without inductive biases. Verified theoretically & experimentally. https://arxiv.org/pdf/1811.12359.pdf

0 replies, 206 likes
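
The impossibility claim above refers to the paper's central theorem. A paraphrase of that result is sketched below in LaTeX (following the statement in arXiv:1811.12359 with notation lightly adapted, so treat it as a summary rather than a verbatim quote):

```latex
% Paraphrase of the impossibility result (Theorem 1 of arXiv:1811.12359);
% notation lightly adapted -- a sketch, not a verbatim quotation.
\textbf{Theorem (sketch).}
For $d > 1$, let $z \sim P$ denote any distribution admitting a density
$p(z) = \prod_{i=1}^{d} p(z_i)$. Then there exists an infinite family of
bijective functions $f : \operatorname{supp}(z) \to \operatorname{supp}(z)$
such that $\frac{\partial f_i(u)}{\partial u_j} \neq 0$ almost everywhere
for all $i$ and $j$ (i.e., $z$ and $f(z)$ are completely entangled), while
$P(z \leq u) = P(f(z) \leq u)$ for all $u \in \operatorname{supp}(z)$
(i.e., they have the same marginal distribution).
```

Because a purely unsupervised learner only ever observes the resulting marginal over observations, it cannot distinguish between these equally valid generative models, so recovering the "true" disentangled factors requires inductive biases on both the models and the data.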


Olivier Bachem: #OpenScience: We are happy to announce the release of >10'000 pretrained disentanglement_lib models (https://github.com/google-research/disentanglement_lib#pretrained-disentanglement_lib-modules) from the study “Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations” (https://arxiv.org/abs/1811.12359). @GoogleAI https://t.co/G3TDSO5UMb

2 replies, 159 likes
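
For anyone who wants to poke at those released checkpoints, here is a minimal loading sketch. It assumes the TF Hub export format used by disentanglement_lib; the module path is a placeholder, and the signature and output names ("representation", "mean") are assumptions to check against the library's README rather than a documented guarantee.

```python
# Hedged sketch: extract latent representations from a pretrained
# disentanglement_lib module via the TF1-style tensorflow_hub API.
import numpy as np
import tensorflow_hub as hub

module_path = "/path/to/downloaded/module"  # placeholder path, adjust locally
images = np.random.rand(16, 64, 64, 3).astype(np.float32)  # dummy image batch

# eval_function_for_module builds a graph and session behind the scenes and
# yields a callable that runs the requested signature.
with hub.eval_function_for_module(module_path) as f:
    outputs = f(dict(images=images), signature="representation", as_dict=True)
    representations = outputs["mean"]  # assumed output key, see the README

print(representations.shape)  # expected: (16, latent_dim)
```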


Olivier Bachem: Interested in doing research on disentanglement? We just open-sourced disentanglement_lib (https://github.com/google-research/disentanglement_lib), the library we built for our study “Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations” (https://arxiv.org/abs/1811.12359) @GoogleAI https://t.co/2wG9ZRxHNW

3 replies, 153 likes


Olivier Bachem: Excited that "Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations" (https://arxiv.org/abs/1811.12359) was accepted for oral presentation + poster at the #ICLR2019 Workshop on Reproducibility in Machine Learning! https://t.co/JOF0RwHMCe

2 replies, 104 likes


Olivier Bachem: “Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations” (https://arxiv.org/abs/1811.12359) was accepted to #ICML2019. Happy that reviewers do value extensive experiments! @FrancescoLocat8 @MarioLucic_ @sylvain_gelly @gxr @GoogleAI @ETH_en @MPI_IS

0 replies, 100 likes


DataScienceNigeria: Congratulations to the Best Papers at the ongoing #ICML2019: (1) Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations https://arxiv.org/pdf/1811.12359.pdf (2) Rates of Convergence for Sparse Variational Gaussian Process Regression https://arxiv.org/pdf/1903.03571.pdf https://t.co/0DlXj6gcWb

2 replies, 74 likes


Vadim Kantorov: @zacharylipton There is "Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations" (https://arxiv.org/abs/1811.12359) by Google people, mostly about VAEs

0 replies, 39 likes
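
As context for the "mostly about VAEs" remark: the unsupervised methods benchmarked in the paper (β-VAE, AnnealedVAE, FactorVAE, β-TCVAE, DIP-VAE) are all regularized variants of the variational autoencoder objective. The canonical β-VAE form, written here from standard references rather than quoted from the paper, is:

```latex
% beta-VAE objective; beta = 1 recovers the standard VAE ELBO.
\mathcal{L}_{\beta\text{-VAE}}(\theta, \phi; x)
  = \mathbb{E}_{q_\phi(z \mid x)}\!\left[ \log p_\theta(x \mid z) \right]
  - \beta \, D_{\mathrm{KL}}\!\left( q_\phi(z \mid x) \,\|\, p(z) \right)
```

Setting β > 1 weights the KL term more heavily, which is exactly the kind of inductive bias these methods rely on to encourage disentangled latents.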


augustus odena: @zacharylipton Not sure a survey exists that does a good job covering all of those but https://arxiv.org/abs/1811.12359 is quite nice (disentangling in context of VAEs) and I have *sort of* a GAN survey https://distill.pub/2019/gan-open-problems/ that talks about disentangling-adjacent things?

0 replies, 35 likes


Paul Liang: 2 great papers at #ICML2019 study this theoretically and empirically: Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations (https://arxiv.org/abs/1811.12359), and Disentangling Disentanglement in Variational Autoencoders (https://arxiv.org/abs/1812.02833)

1 replies, 17 likes


Gautam Kamath: There were two best paper awards. One was "Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations" by @FrancescoLocat8, Stefan Bauer, @MarioLucic_, @gxr, @sylvain_gelly, @bschoelkopf, @OlivierBachem (https://arxiv.org/abs/1811.12359) 2/n

1 replies, 13 likes


reza mahmoudi: ICML 2019 Best Paper Award by @icmlconf 1. http://arxiv.org/abs/1811.12359 2. http://arxiv.org/abs/1903.03571 @Montreal_AI #ICML2019 #ai #Machinelearning #DeepLearning https://t.co/Zlx6aMg5TN

0 replies, 10 likes


Hyrum Anderson: Best paper award at #ICML2019 claims that unsupervised learning of disentangled representations for arbitrary data is impossible without inductive bias. Paper: https://arxiv.org/abs/1811.12359 https://t.co/ipcH0DYZqg

3 replies, 9 likes


Reza Zadeh: Best paper award at #ICML2019. Main idea: learning disentangled representations in an unsupervised manner is theoretically impossible & empirically very challenging. There's no free lunch with unsupervised learning in computer vision. https://arxiv.org/pdf/1811.12359.pdf https://t.co/Pyklengchh

0 replies, 3 likes


小猫遊りょう(たかにゃし・りょう): The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks https://arxiv.org/abs/1803.03635 Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations https://arxiv.org/abs/1811.12359

1 replies, 0 likes


Content

Found on Jul 01 2019 at https://arxiv.org/pdf/1811.12359.pdf

PDF content of a computer science paper: Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations