Robotic AI & Learning Lab

Deep Robotic Learning using Visual Imagination & Meta-Learning

Demonstration at NIPS 2017

Project Lead: Chelsea Finn
Demo Engineering & Design: Annie Xie*, Sudeep Dasari*, Frederik Ebert, Tianhe Yu
One-Shot Visual Imitation Learning (paper): Chelsea Finn*, Tianhe Yu*, Tianhao Zhang, Pieter Abbeel, Sergey Levine
Planning with Visual Foresight (paper): Frederik Ebert, Chelsea Finn, Alex Lee, Sergey Levine

A key unsolved challenge for real robotic systems is acquiring vision-based behaviors, directly from raw RGB images, that generalize to new objects and new goals. We demonstrate two approaches to this challenge: first, learning task-agnostic visual models for planning, which generalize to new objects and goals; and second, using meta-imitation learning to adapt quickly to new objects and environments. In essence, these two approaches seek to generalize and to dynamically adapt to new settings, respectively.
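To give a flavor of the first approach, here is a minimal, hypothetical sketch of planning with a learned forward model. In the actual work the predictor is an action-conditioned video-prediction network operating on images; here a toy linear model and low-dimensional states stand in for it, and the planner is simple random shooting: sample candidate action sequences, roll them out with the model, and execute the sequence whose predicted outcome lands nearest the goal.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_model(state, action):
    # Stand-in for the learned visual predictor: each action nudges the state.
    return state + 0.1 * action

def plan(state, goal, horizon=10, n_candidates=1024):
    """Random shooting: return the sampled action sequence whose
    predicted final state is closest to the goal."""
    best_cost, best_seq = np.inf, None
    for _ in range(n_candidates):
        actions = rng.uniform(-1.0, 1.0, size=(horizon, state.shape[0]))
        s = state
        for a in actions:
            s = forward_model(s, a)          # simulate with the model
        cost = np.linalg.norm(s - goal)      # predicted distance to goal
        if cost < best_cost:
            best_cost, best_seq = cost, actions
    return best_seq

state, goal = np.zeros(2), np.array([0.3, -0.2])
for a in plan(state, goal):                  # execute the chosen plan
    state = forward_model(state, a)
print(np.linalg.norm(state - goal))          # should be small
```

Because the model is task-agnostic, the same planner can pursue a new goal just by changing the cost, which is the sense in which this approach generalizes rather than adapts.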
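The second approach can be sketched in a similarly reduced form. This hypothetical example replaces the vision-based policy with a linear one: meta-learned parameters are adapted to a new task with a single gradient step on one demonstration, which is the core mechanic of one-shot meta-imitation learning (as in MAML-style methods).

```python
import numpy as np

def adapt_one_shot(theta, demo_obs, demo_act, lr=0.1):
    """One inner gradient step on the squared imitation loss
    for a linear policy: actions = observations @ theta."""
    pred = demo_obs @ theta
    grad = demo_obs.T @ (pred - demo_act) / len(demo_obs)
    return theta - lr * grad                 # task-adapted parameters

rng = np.random.default_rng(0)
theta_meta = rng.normal(size=(4, 2))         # stand-in for meta-learned init
true_theta = rng.normal(size=(4, 2))         # the new task's expert policy

obs = rng.normal(size=(8, 4))                # a single short demonstration
act = obs @ true_theta

theta_new = adapt_one_shot(theta_meta, obs, act)

before = np.mean((obs @ theta_meta - act) ** 2)
after = np.mean((obs @ theta_new - act) ** 2)
print(before, after)                         # adaptation lowers the demo loss
```

In the full method, the outer (meta-training) loop optimizes the initialization so that this single inner step is maximally effective across many tasks; only the inner adaptation step is shown here.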

Video taken by Andrei Rusu.