
Consistent Video Depth Estimation


roadrunner01: Consistent Video Depth Estimation pdf: abs: project page: video:

9 replies, 1292 likes

Jia-Bin Huang @ #ICLR2020: Check out our #SIGGRAPH2020 paper on Consistent Video Depth Estimation. Our geometrically consistent depth takes cool video effects to a whole new level! Video: Paper: Project page:

13 replies, 944 likes

Reza Zadeh: Depth estimation keeps getting better and better. This one: reconstructing dense, geometrically consistent depth for all pixels in a monocular video. Makes for some fun effects.

4 replies, 494 likes

HCI Research: Consistent Video Depth Estimation @siggraph 2020 #SIGGRAPH Paper: Code/Project:

2 replies, 369 likes

Xuan Luo: Excited to share our work "Consistent Video Depth Estimation" @siggraph 2020! Checkout our video Joint work with @jbhuang0604, Rick Szeliski, Kevin Matzen and Johannes Kopf. Project: Video: arXiv:

3 replies, 285 likes

Jia-Bin Huang: Woohoo! The code is up. Enjoy!… w/ Xuan (@XuanLuo14), Richard, Kevin, and Johannes (@JPKopf)

1 reply, 236 likes

HCI Research: Consistent Video Depth Estimation #SIGGRAPH2020

1 reply, 111 likes

Andrew Davison: Nice multi-view monocular 3D. Not read properly yet, but it seems another great example of training a network at test time, rather than forward passes plus multi-view optimisation. I assume NeRF could similarly be used for high-quality AR. How do we make all this stuff real-time?

1 reply, 71 likes

Tim Field: Cool effects, but their 4-second video clip would take your phone 5 days to compute.

1 reply, 45 likes

PatricioGonzalezVivo: So excited to see this video finally out! Earlier this year I helped make it together with @JPKopf @diosmiodio @oceanquigley

1 reply, 43 likes

Jia-Bin Huang: @XuanLuo14 will be presenting our work on Consistent Video Depth Estimation at #SIGGRAPH2020 at 2 PM PST next Monday! 😃 Video: Code:

0 replies, 30 likes

Theodore Watson: Wow. This is a huge leap forward for depth from RGB. NN-estimated, but then feeding projection errors back into the training. Via @UnitZeroOne

1 reply, 30 likes
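The two tweets above name the core idea: a network-estimated depth map is checked geometrically across frames, and the resulting projection errors are fed back into training on the test video itself. Below is a minimal, hedged sketch of a per-pixel reprojection consistency check of that kind; the function names, parameters, and exact loss form are illustrative, not the authors' actual API or objective.

```python
def unproject(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with its predicted depth to a 3D camera-space point."""
    return ((u - cx) / fx * depth, (v - cy) / fy * depth, depth)

def project(p, fx, fy, cx, cy):
    """Pinhole projection of a camera-space 3D point back to pixel coordinates."""
    x, y, z = p
    return (fx * x / z + cx, fy * y / z + cy)

def transform(p, R, t):
    """Apply a 3x3 rotation R (row-major lists) and translation t to a 3D point."""
    return tuple(sum(R[i][k] * p[k] for k in range(3)) + t[i] for i in range(3))

def consistency_loss(uv_i, depth_i, flow_uv_j, depth_j, R, t, K):
    """Per-pixel reprojection error between frame i and frame j.

    spatial term:   where the back-projected point lands in frame j vs.
                    where optical flow says the pixel went
    disparity term: inverse depth of the transformed point vs. frame j's
                    predicted inverse depth at the matched pixel

    Summed over many sampled pixel pairs, an error like this is what would
    be fed back into the depth network during test-time fine-tuning.
    """
    fx, fy, cx, cy = K
    p_i = unproject(uv_i[0], uv_i[1], depth_i, fx, fy, cx, cy)
    p_j = transform(p_i, R, t)  # same 3D point, expressed in frame j's camera
    u_proj, v_proj = project(p_j, fx, fy, cx, cy)
    spatial = ((u_proj - flow_uv_j[0]) ** 2 +
               (v_proj - flow_uv_j[1]) ** 2) ** 0.5
    disparity = abs(1.0 / p_j[2] - 1.0 / depth_j)
    return spatial + disparity
```

With perfectly consistent predictions (identity pose, flow and depth that agree across frames) the loss is zero; any frame-to-frame depth flicker shows up directly as a nonzero penalty, which is what makes it usable as a training signal.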

Big Data and AI Toronto: "Consistent Video Depth Estimation" uses a single video input to generate high quality depth maps CC: @xsteenbrugge @John_O_Really @ericjang11 @hardmaru @SpirosMargaris @DrJDrooghaag @Nicochan33 @sebbourguignon @mvollmer1 @Fabriziobustama #AI #ML #DL

0 replies, 15 likes

Noah Snavely: These depth maps look amazing! Nice work, @XuanLuo14 and co-authors!

0 replies, 13 likes

JJ 「cιtιƶεɳƒιvε」 🏴‍☠️: the improvement over the work which came before is staggering

2 replies, 11 likes

Jia-Bin Huang: The #SIGGRAPH2020 talk by @XuanLuo14 is now available! Check it out!

0 replies, 5 likes

Dionisio Blanco: A bit late to the game in sharing this - but I got to do some VFX/Shader work earlier this year to show off this great tech with these awesome people, really happy to see it in the wild 😊.

0 replies, 3 likes

Soumyadip Sengupta: Great result :)

0 replies, 3 likes

/\/\ \/\/: For years depth from photos has been a recurring problem for CG art & motion, this new AI algorithm basically eliminates the issue! #AI 🤩 👏🏼

0 replies, 3 likes

arXiv CS-CV: Consistent Video Depth Estimation

0 replies, 3 likes

Jia-Bin Huang: The code is up! Enjoy! w/ Luo (@XuanLuo14), Richard, Kevin, and Johannes (@JPKopf)

0 replies, 2 likes

Elliott Round: Impressive work!

0 replies, 1 like

DAC at Virginia Tech: Jia-Bin Huang, @VT_DAC faculty, shares collaborative #SIGGRAPH2020 paper and related youtube video about reconstructing dense, geometrically consistent depth for all pixels in a monocular video. The input is just a hand-held captured cell phone video. @jbhuang0604 @VT_ECE

0 replies, 1 like


Found on May 01, 2020
