Explicit-to-Implicit Robot Imitation Learning
by Exploring Visual Content Change
1School of Control Science and Engineering, Shandong University 2Department of Automation, Shanghai Jiao Tong University
Abstract
Demonstration understanding is a vital component of robot imitation learning. In this work, we investigate a visual-change-based representation of demonstrations and build imitation learning pipelines in both explicit and implicit ways. Specifically, we first propose to represent a demonstration video via a visual change map and use it to generate explicit commands for robot execution. To pursue a more "human-like" imitation learning pipeline, we then present an implicit method that extends the visual-change-based representation from the image level to the feature level. Extensive experiments are conducted to evaluate the proposed methods, and the results show that the visual-change-based approaches achieve state-of-the-art imitation learning performance. The results also indicate the superiority of the implicit method over the explicit one for imitation learning.
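The abstract's exact construction of the visual change map is not specified here; a minimal sketch, assuming it resembles accumulated frame differencing over a grayscale demonstration video (the function name `visual_change_map` and the normalization choice are illustrative assumptions, not the paper's method):

```python
import numpy as np

def visual_change_map(frames):
    """Accumulate absolute per-pixel differences between consecutive
    grayscale frames into a single change map.

    This is a simple frame-differencing stand-in, not the paper's
    actual representation.
    """
    frames = np.asarray(frames, dtype=np.float32)   # (T, H, W)
    diffs = np.abs(np.diff(frames, axis=0))         # (T-1, H, W)
    change = diffs.sum(axis=0)                      # (H, W)
    peak = change.max()
    # Normalize to [0, 1] so downstream modules see a fixed range.
    return change / peak if peak > 0 else change

# Toy demo: a bright 2x2 square moving one pixel per frame.
T, H, W = 4, 8, 8
frames = np.zeros((T, H, W), dtype=np.float32)
for t in range(T):
    frames[t, 2:4, t:t + 2] = 1.0
cmap = visual_change_map(frames)
print(cmap.shape)  # (8, 8); high values trace the square's motion path
```

Regions the demonstrator never touches stay at zero, so the map highlights where in the scene the demonstrated action takes place.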