
Learning to Predict Without Looking Ahead: World Models Without Forward Prediction


Oct 30 2019 hardmaru 😷

Learning to Predict Without Looking Ahead: World Models Without Forward Prediction

Rather than hardcoding forward prediction, we try to get agents to *learn* that they need to predict the future. Check out our #NeurIPS2019 paper!
14 replies, 1291 likes

Oct 30 2019 Emtiyaz Khan

This is really amazing. I always say that “missing values” in reality are not always a problem but they can also be features. Perhaps we learn to predict things better when we know observations will be mostly missing. Congratulations @hardmaru and coauthors!
2 replies, 121 likes

Nov 14 2019 hardmaru

Matthew Crosby made a set of notes about our “Learning to Predict” paper for their reading group:
2 replies, 87 likes

Oct 30 2019 Julian Togelius

Cool stuff. I strongly believe we need to move towards learning forward models in order to learn good policies, as model-free learning is so limited. But don't believe me; in this case, believe the networks, who learned by themselves that it was a good idea to learn a model.
1 replies, 85 likes

Dec 10 2019 hardmaru

Come by our poster this afternoon if you want to discuss learning world models that are not forward models! 5:30–7:30 PM #NeurIPS2019 East Exhibition Hall B + C #188
1 replies, 83 likes

Nov 03 2019 ALife Papers

Learning to Predict Without Looking Ahead: World Models Without Forward Prediction by C. D. Freeman, L. Metz and @hardmaru "forward-predictive modeling can arise as a side-effect of optimization under the right circumstances"
0 replies, 76 likes

Oct 30 2019 bucket of kets

This fantastic collaboration with @Luke_Metz, @hardmaru, and myself has been in the works for quite some time. Delighted to see it released! Enjoy David’s beautiful videos!
1 replies, 61 likes

Oct 30 2019 hardmaru 😷

@bucketofkets @Luke_Metz @NeurIPSConf We examine the role of inductive biases in the world model, and show that the architecture of the model plays a role not only in performance, but also in interpretability. Our paper will be presented at #NeurIPS2019: article arxiv
4 replies, 60 likes

Dec 05 2019 hardmaru

Daniel Freeman (@bucketofkets) and @Luke_Metz will be presenting our paper called “Learning to Predict Without Looking Ahead: World Models Without Forward Prediction” poster (Dec 10th) → paper → tweet ↓
2 replies, 33 likes

Nov 05 2019 Tim

I love this paper by Daniel Freeman, @Luke_Metz, & @hardmaru. Simple, but clever & inspiring idea (rather than the recently popular "throw more GPUs"). Plus, clearly written, with great visuals 👏
1 replies, 32 likes

Oct 30 2019 Douglas Eck

The cartpole world model animation you see here wonderfully sets the context for the paper. Great work!
0 replies, 22 likes

Dec 10 2019 Shane Gu not @ #NeurIPS2019

An exciting new view on model-based RL. By constraining information and applying some structural assumptions, neural nets can learn amazing things. @hardmaru #NeurIPS2019 #NeurIPS @Luke_Metz @bucketofkets
0 replies, 15 likes

Oct 30 2019 Brandon Rohrer

World model building as a means to an end. "by identifying the important part of the world, policies could be trained significantly more quickly, or more sample efficiently"
1 replies, 11 likes

Oct 30 2019 roadrunner01

Learning to Predict Without Looking Ahead: World Models Without Forward Prediction pdf: abs: webpage:
0 replies, 10 likes

Oct 30 2019 Moritz Schneider

New paper from @hardmaru. I love how well his papers and the related blog posts are made. Looking forward to reading this!
2 replies, 7 likes

Dec 10 2019 bucket of kets

Reminder: I’m going to be presenting this today! Come see the non-hexagon version of me, and ask me about inductive biases! (5:30-7:30 pm east exhibit hall b+c #188)
0 replies, 5 likes

Oct 30 2019 Marek Bernát

This is a fascinating lesson in 'less is more'.
0 replies, 3 likes

Jan 01 2020 Evgenii Zheltonozhskii

@zacharylipton by @avdnoord -- I'm really fascinated by the progress of self-supervised learning by @hardmaru even though I have not finished reading it yet
0 replies, 1 likes

Jan 07 2020 Sebastian Risi

Interestingly, the evolved forward model seems to have learned to predict if a fireball would hit the agent at the current position. Similar to the great work by @bucketofkets, @Luke_Metz, @hardmaru on observational dropout.
1 replies, 0 likes
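The "observational dropout" mechanism the thread refers to can be illustrated with a short sketch: the policy usually receives the world model's own output rather than the true observation, and only "peeks" at the environment with a small probability, so a useful forward model can emerge as a side effect of optimizing reward. This is a minimal illustration of the idea, not the paper's implementation; the `env`, `policy`, and `world_model` interfaces here are assumptions.

```python
import random

def rollout(env, policy, world_model, peek_prob=0.05, steps=200):
    """Observational-dropout rollout sketch.

    With probability peek_prob the policy sees the real observation;
    otherwise it sees the world model's prediction from the previous
    belief and action, so the model must fill in the gaps.
    """
    obs = env.reset()
    belief = obs  # what the policy actually sees
    total_reward = 0.0
    for _ in range(steps):
        action = policy(belief)
        obs, reward, done = env.step(action)
        total_reward += reward
        if random.random() < peek_prob:
            belief = obs  # real observation leaks through
        else:
            belief = world_model(belief, action)  # model bridges the gap
        if done:
            break
    return total_reward
```

With a low `peek_prob`, the only way for the policy to keep scoring is for `world_model` to track the parts of the state that matter, which is why forward prediction can arise without being trained for directly.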