DeepMind: In our new work, we propose a framework for humans teaching robots to accomplish tasks using visual inputs:
3 replies, 328 likes
Nando de Freitas: Data-Driven Robotics: Enabling humans to easily teach robots new behaviours, and applying RL to the robots' continually growing database of real experiences to learn new policies, end-to-end, from pixels, and without simulation or having to run the robot at all.
0 replies, 173 likes
Serkan Cabi: How can robots learn from humans and their own experience to manipulate objects using vision? Here is our take on the problem:
2 replies, 132 likes
Feryal: Very cool work from our group! It was impressive seeing the robot in action! Great work by @serkancabi, Sergio Gomez, @SashaVNovikov, @ks_konyushkova, @scott_e_reed @notmisha @NandoDF @ziyuwang and others at DeepMind!
0 replies, 28 likes
Alexander Novikov: 1/ Proud to be a part of this work! We train batch RL agents to solve robotic manipulation tasks from pixels on a real robot.
1 reply, 19 likes
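The batch RL setup mentioned above can be sketched in a few lines: a policy is learned entirely from a fixed dataset of logged transitions, with no further robot interaction. This is a minimal tabular toy illustration of the offline-learning idea, not the paper's actual pipeline; the 2-state MDP, dataset, and all names are invented for the example.

```python
# Minimal sketch of batch (offline) RL: fit a Q-function purely from a
# fixed dataset of logged transitions, never running the robot.
# The toy 2-state MDP and all transitions below are illustrative only.
import random

random.seed(0)

N_STATES, N_ACTIONS, GAMMA, LR = 2, 2, 0.9, 0.1

# Logged experience: (state, action, reward, next_state, done).
# In the paper's setting this would be a growing database of real robot
# episodes; here it is a hand-written toy log, repeated for volume.
dataset = [
    (0, 0, 0.0, 1, False),  # action 0 in state 0 moves toward the goal
    (1, 1, 1.0, 1, True),   # action 1 in state 1 completes the task
    (0, 1, 0.0, 0, False),  # action 1 in state 0 does nothing
    (1, 0, 0.0, 0, False),  # action 0 in state 1 moves away from the goal
] * 50

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

for _ in range(200):  # repeated sweeps over the *fixed* dataset
    for s, a, r, s2, done in random.sample(dataset, len(dataset)):
        target = r if done else r + GAMMA * max(Q[s2])
        Q[s][a] += LR * (target - Q[s][a])

# Greedy policy extracted offline, one action per state.
policy = [max(range(N_ACTIONS), key=lambda a: Q[s][a]) for s in range(N_STATES)]
print(policy)  # → [0, 1]
```

The point of the sketch is the data flow, not the algorithm: all learning signal comes from replaying a static log, which is what lets the real system improve "without having to run the robot at all".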
Misha Denil: Check out our framework for doing large scale Deep RL with a single robot.
One of the most exciting projects I've had the pleasure of working on.
0 replies, 11 likes
Ankur Handa: and in "A Framework for Data-Driven Robotics" by Cabi et al.
Paper: https://arxiv.org/abs/1909.12200
1 reply, 1 like
Found on Sep 27 2019 at https://arxiv.org/pdf/1909.12200.pdf