Papers of the day

NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis

Comments

Jonathan Fly 👾: NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis Input is images with known camera poses. Can't wait to disable all the sanity checks and see what this renders if I give it impossible geometry. abs: https://arxiv.org/abs/2003.08934 site: http://www.matthewtancik.com/nerf https://t.co/YLwPyvVwJm

21 replies, 1467 likes
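
For context on the thread above: NeRF represents a scene as a fully-connected network that maps a 5D coordinate (3D position plus 2D viewing direction) to an emitted color and a volume density. Below is a minimal PyTorch sketch of that mapping, under deliberately simplified assumptions; the paper's actual network is an 8-layer, 256-channel MLP with a skip connection, positional encoding, and a separate view-dependent color head, and TinyNeRF here is a hypothetical name, not code from the paper.

import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    # Sketch of the scene representation: one 5D query
    # (x, y, z, viewing angles theta, phi) -> (RGB color, density sigma).
    # Hypothetical simplification of the paper's deeper architecture.
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(5, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # outputs (R, G, B, sigma)
        )

    def forward(self, x):
        out = self.net(x)
        rgb = torch.sigmoid(out[..., :3])  # colors constrained to [0, 1]
        sigma = torch.relu(out[..., 3:])   # non-negative volume density
        return rgb, sigma

model = TinyNeRF()
point = torch.tensor([[0.1, -0.2, 0.5, 0.0, 1.57]])  # one sample on a ray
rgb, sigma = model(point)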


roadrunner01: NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis pdf: https://arxiv.org/pdf/2003.08934.pdf abs: https://arxiv.org/abs/2003.08934 project page: http://www.matthewtancik.com/nerf https://t.co/pOKdz3bkRr

8 replies, 457 likes


Krishna Murthy: I've ported the (excellent!) Neural Radiance Fields (NeRF) paper to @PyTorch. Try the (tiny-NeRF) Colab notebook at https://colab.research.google.com/drive/1rO8xo0TemN67d4mTpakrKrLp03b9bgCX or get the full code at https://github.com/krrish94/nerf-pytorch https://twitter.com/jonathanfly/status/1240824698348408833?s=20

1 reply, 244 likes
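
The rendering step such a notebook demonstrates comes down to the paper's volume-rendering quadrature: sample points along each camera ray, query the network for color and density at each sample, and alpha-composite the results. A rough sketch of that compositing step follows; composite is a hypothetical helper written for illustration, not the repo's actual API.

import torch

def composite(rgb, sigma, deltas):
    # rgb:    (num_samples, 3) colors queried along one camera ray
    # sigma:  (num_samples,)   densities queried along the same ray
    # deltas: (num_samples,)   distances between adjacent samples
    # Numerical quadrature of the volume rendering integral:
    # C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i
    alpha = 1.0 - torch.exp(-sigma * deltas)
    # T_i: accumulated transmittance, i.e. the probability that the ray
    # travels from the camera to sample i without being blocked
    trans = torch.cumprod(
        torch.cat([torch.ones(1), 1.0 - alpha + 1e-10])[:-1], dim=0)
    weights = trans * alpha
    return (weights[:, None] * rgb).sum(dim=0)

# Toy usage: 64 random samples along one ray, evenly spaced by 0.05.
n = 64
color = composite(torch.rand(n, 3), torch.rand(n), torch.full((n,), 0.05))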


Ankur Handa: I have been playing with my 2D toy example of NeRF (http://www.matthewtancik.com/nerf) that I implemented to understand the role of positional encoding (PE). code: https://github.com/ankurhanda/nerf2D/ Left is the dataset image, middle is the result with PE, and right is without it. It really helps. https://t.co/qfYpNG6vEl

6 replies, 183 likes
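
For context, the positional encoding being ablated here lifts each input coordinate p into a vector of sinusoids at geometrically increasing frequencies, which lets the MLP fit high-frequency detail it otherwise smooths over. A small sketch of one common formulation of the paper's gamma(p); positional_encoding and its num_freqs parameter are illustrative names, not code from either repo.

import math
import torch

def positional_encoding(p, num_freqs=10):
    # gamma(p) = (sin(2^0 * pi * p), cos(2^0 * pi * p), ...,
    #             sin(2^(L-1) * pi * p), cos(2^(L-1) * pi * p))
    # p: tensor of input coordinates; returns shape (..., 2 * num_freqs)
    freqs = 2.0 ** torch.arange(num_freqs) * math.pi
    angles = p[..., None] * freqs  # broadcast each coordinate over all bands
    return torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)

# A 2D pixel coordinate, as in the toy image-fitting experiment above:
xy = torch.tensor([0.25, 0.75])
encoded = positional_encoding(xy)  # shape (2, 20): 10 sin/cos pairs per axis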


Aidan Wolf: My jaw is on the floor

8 replies, 142 likes


Miles Cranmer: This is insanely impressive work. This is a ray-tracing of a 3D environment encoded by a neural net! With this and e.g. @shoyer @samgreydanus @jaschasd's structure design work, it really seems like NNs could be a general tool for reparametrizing high-dim optimization problems

0 replies, 28 likes


BD3D: Wow. Going from 2D to 3D just like that is not so far away in the future, I guess

1 reply, 27 likes


Dragan Okanovic: For 3d gfx people: this seems like Radiance Regression Function (http://cseweb.ucsd.edu/~ravir/274/15/papers/a130-ren.pdf) on super duper steroids. Now let's generate entire games this way!

1 reply, 22 likes


Ankur Handa: This figure in the paper really intrigued me https://arxiv.org/abs/2003.08934 so I set up a toy 2D example to understand it. The difference in the results is significant. https://t.co/IFIODJKqC5

1 reply, 3 likes


Apoorva Joshi: It's really cool to see researchers taking the time and effort to publish great overview videos with their papers: https://youtu.be/JuH79E8rdKc

0 replies, 3 likes


Sam Dutter: And can this be used for #VR ??

0 replies, 3 likes


Blender Sushi Guy: Ok, wiggle 3D photography / volumetric photography is the future then~

0 replies, 2 likes


Julian Clemens: Amazing work! Also looks perfect for view interpolation in 360-degree volumetric scene captures for VR. @johnsie @MarkusFrei2014

0 replies, 2 likes


Carlos Montero 🥃: Wow, this is pretty amazing!

0 replies, 1 like


(Pi) Pilar Aranda: 😱😲😃

0 replies, 1 like


Jacob Garbe 💀: Wow! Seems really amazing for photogrammetry and AR applications? https://www.youtube.com/watch?v=JuH79E8rdKc&feature=emb_title

0 replies, 1 like


Mike Dopsa: Amazing! Neural filling, from sparse camera views like this, could help create Lightfield-quality output from far fewer cameras. This will be hugely important for things like low-cost volumetric production, and much more. So impressive! Check out the full video!

1 reply, 1 like


Content

Found on Mar 20 2020 at https://arxiv.org/pdf/2003.08934.pdf

PDF content of a computer science paper: NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis