
AMORTIZED LEARNING OF NEURAL CAUSAL REPRESENTATIONS

Comments

Jane Wang: Amortized learning of neural causal representations: we use an attentive relational model to learn causal graphs from interventions in a meta-learning setup, with the very talented @rosemary_ke, @DaniloJRezende, Jovana Mitrovic and Martin Szummer https://arxiv.org/pdf/2008.09301.pdf https://t.co/W7yXiDRQDB

1 reply, 382 likes
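
To make the setup described in the tweet above concrete, here is a minimal sketch of amortized causal induction: a model consumes a stream of interventional samples and emits logits over the edges of the causal graph, meta-trained across many sampled graphs. This is an illustration, not the paper's architecture: a plain GRU stands in for the attentive relational model, and all names, shapes, and the placeholder episode data are assumptions.

```python
import torch
import torch.nn as nn

class CausalInductionModel(nn.Module):
    """Consume a stream of interventional samples; emit edge logits."""

    def __init__(self, n_vars: int, hidden: int = 128):
        super().__init__()
        # One step sees a sample's variable values plus a one-hot mask
        # marking which variable was intervened on.
        self.encoder = nn.Linear(2 * n_vars, hidden)
        self.rnn = nn.GRUCell(hidden, hidden)  # accumulates evidence across samples
        self.edge_head = nn.Linear(hidden, n_vars * n_vars)  # one logit per edge i -> j
        self.n_vars = n_vars

    def forward(self, samples, masks):
        # samples, masks: (T, n_vars) float tensors
        h = samples.new_zeros(1, self.rnn.hidden_size)
        for x, m in zip(samples, masks):
            step = torch.relu(self.encoder(torch.cat([x, m]).unsqueeze(0)))
            h = self.rnn(step, h)
        return self.edge_head(h).view(self.n_vars, self.n_vars)

# Meta-training: each episode pairs interventional data with the adjacency
# matrix of a freshly sampled ground-truth graph (random placeholders here).
model = CausalInductionModel(n_vars=8)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for episode in range(100):
    samples = torch.randint(0, 2, (16, 8)).float()
    masks = nn.functional.one_hot(torch.randint(0, 8, (16,)), 8).float()
    adjacency = torch.randint(0, 2, (8, 8)).float()
    loss = nn.functional.binary_cross_entropy_with_logits(
        model(samples, masks), adjacency)
    opt.zero_grad(); loss.backward(); opt.step()
```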


Danilo J. Rezende: Causal model induction extends what we usually call "learning" in ML/DL. Here we explored the idea of amortising causal induction for sparse Bayesian networks representing simple boolean circuits. Great to work with the amazing @rosemary_ke, @janexwang and @jovana_mitr

1 reply, 115 likes
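
A toy data generator in the spirit of the "simple boolean circuits" mentioned above might look like the following: sample a sparse random DAG whose internal nodes are boolean gates, then draw samples under single-node interventions. The gate set, sparsity parameter, and function names are assumptions for illustration, not the paper's benchmark.

```python
import random

def random_boolean_dag(n_vars: int, p_edge: float = 0.2):
    # Edges only from lower to higher index, which guarantees acyclicity.
    parents = {j: [i for i in range(j) if random.random() < p_edge]
               for j in range(n_vars)}
    gates = {j: random.choice(["and", "or", "xor"]) for j in range(n_vars)}
    return parents, gates

def sample(parents, gates, intervened=None, forced=0):
    vals = {}
    for j in sorted(parents):          # topological order by construction
        if j == intervened:
            vals[j] = forced           # do(X_j = forced)
        elif not parents[j]:
            vals[j] = random.randint(0, 1)  # exogenous root node
        else:
            bits = [vals[i] for i in parents[j]]
            if gates[j] == "and":
                out = all(bits)
            elif gates[j] == "or":
                out = any(bits)
            else:                      # xor
                out = sum(bits) % 2 == 1
            vals[j] = int(out)
    return vals

parents, gates = random_boolean_dag(8)
print(sample(parents, gates, intervened=3, forced=1))  # one sample under do(X_3 = 1)
```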


Nan Rosemary Ke: Our new paper is out on amortized learning of neural causal representations: learning a fully continuous representation of causal models using neural networks! Thanks to my awesome collaborators @DaniloJRezende, @janexwang, @jovana_mitr and Martin Szummer https://arxiv.org/pdf/2008.09301.pdf

1 reply, 64 likes


Anirudh Goyal: Work led by @rosemary_ke, @DaniloJRezende and team :)

0 replies, 32 likes


Danilo J. Rezende: @yudapearl We did a very preliminary exploration of this idea here https://arxiv.org/abs/2008.09301 with @rosemary_ke in a toy setting of inferring the structure of sparse DAGs representing logical circuits. But a lot more has to be done for this to be useful in any non-toy problem.

1 reply, 11 likes


Marcus Borba: Amortized learning of neural causal representations. #NeuralNetworks #DataScience #BigData #Analytics #Python #RStats #TensorFlow #IoT #Java #JavaScript #ReactJS #GoLang #Serverless #Linux #Cloud #Programmer #DataMining #DeepLearning #MachineLearning #AI http://arxiv.org/abs/2008.09301 https://t.co/LCBaI2zeAd

0 replies, 9 likes


andrea panizza: Very interesting work on representing (and learning) causal graphs with neural networks rather than with Bayesian networks. The idea of using a decoder to make comparisons against ground truth scalable is very smart!

0 replies, 4 likes
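
One way to read the scalability point in the tweet above: rather than scoring a predicted graph against the ground-truth structure directly, a decoder can be trained to predict the outcome of a new intervention from the latent summary, so supervision reduces to an ordinary per-sample prediction loss. A minimal sketch, assuming boolean variables; the shapes and names below are assumptions, not the paper's API.

```python
import torch
import torch.nn as nn

n_vars, hidden = 8, 128
decoder = nn.Sequential(
    nn.Linear(hidden + n_vars, hidden), nn.ReLU(),
    nn.Linear(hidden, n_vars),  # one logit per boolean variable
)

def decoder_loss(latent, intervention_mask, observed):
    # latent: (hidden,) summary of the interventions seen so far
    # intervention_mask: (n_vars,) one-hot; observed: (n_vars,) 0/1 outcomes
    logits = decoder(torch.cat([latent, intervention_mask]))
    return nn.functional.binary_cross_entropy_with_logits(logits, observed)

# Example call with placeholder tensors:
loss = decoder_loss(torch.randn(hidden),
                    nn.functional.one_hot(torch.tensor(3), n_vars).float(),
                    torch.randint(0, 2, (n_vars,)).float())
```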


arXiv in review: #ICLR2020 Amortized learning of neural causal representations. (arXiv:2008.09301v1 [stat.ML]) http://arxiv.org/abs/2008.09301

0 replies, 4 likes


Nick Vintila: "#Causal models can compactly encode the data-generating process ... may generalize better under changes in distribution" "#Bayesian networks scale poorly & cannot leverage previously learned knowledge" causal relational networks (CRN) #NonStationaryLearning #ContinualAI https://t.co/OrOehXTJui

0 replies, 1 likes


arxiv: Amortized learning of neural causal representations. http://arxiv.org/abs/2008.09301 https://t.co/iAydlMj5qp

0 replies, 1 likes


Content

Found on Aug 31 2020 at https://arxiv.org/pdf/2008.09301.pdf

PDF content of a computer science paper: AMORTIZED LEARNING OF NEURAL CAUSAL REPRESENTATIONS