Policy Composition From and For Heterogeneous Robot Learning

Lirui Wang, Alan Zhao, Yilun Du, Ted Adelson, Russ Tedrake
MIT CSAIL
Robotics: Science and Systems (RSS), 2024

Policy Composition combines data across domains, modalities, and tasks using diffusion models. The composed policy demonstrates robust scene-level and task-level generalization on tool-use tasks.



Abstract

Training general robotic policies from heterogeneous data for different tasks is a significant challenge. Existing robotic datasets vary in modality, including color, depth, tactile, and proprioceptive information, and are collected in different domains, such as simulation, real robots, and human videos. Current methods usually collect and pool all data from one domain to train a single policy that handles such heterogeneity in tasks and domains, which is prohibitively expensive and difficult.

In this work, we present a flexible approach, dubbed Policy Composition, that combines information across such diverse modalities and domains to learn scene-level and task-level generalized manipulation skills by composing different data distributions represented with diffusion models. Our method supports task-level composition for multi-task manipulation and can be composed with analytic cost functions to adapt policy behaviors at inference time. We train our method on simulation, human, and real robot data and evaluate it on tool-use tasks. The composed policy achieves robust and dexterous performance under varying scenes and tasks and outperforms baselines trained on a single data source in both simulation and real-world experiments.
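The core mechanism described above, composing data distributions represented by diffusion models, amounts to combining the noise (score) predictions of separately trained diffusion policies during reverse sampling, optionally steered by the gradient of an analytic cost. The sketch below illustrates this idea in a toy DDPM-style sampler; all names are hypothetical, the denoisers are numpy stand-ins, and this is not the paper's actual implementation.

```python
import numpy as np

def compose_eps(eps_fns, weights, x_t, t):
    """Weighted combination of per-domain noise predictions.

    eps_fns : list of denoisers eps_i(x_t, t), each trained on a
              different domain or modality (stand-ins here).
    weights : per-model composition weights w_i.
    """
    return sum(w * f(x_t, t) for w, f in zip(weights, eps_fns))

def sample(eps_fns, weights, steps=50, dim=4, cost_grad=None, guide=0.0, seed=0):
    """Toy DDPM reverse process driven by the composed prediction.

    cost_grad : optional gradient of an analytic cost over actions,
                added at inference time to adapt the policy (a sketch).
    """
    rng = np.random.default_rng(seed)
    betas = np.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)

    x = rng.standard_normal(dim)  # start from pure noise
    for t in reversed(range(steps)):
        eps = compose_eps(eps_fns, weights, x, t)
        if cost_grad is not None:
            eps = eps + guide * cost_grad(x)  # cost-guided adjustment
        # standard DDPM mean update with the composed noise estimate
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:
            x = x + np.sqrt(betas[t]) * rng.standard_normal(dim)
    return x
```

With equal weights, this reduces to averaging the models' predictions at every denoising step; unequal weights bias sampling toward one data source's distribution.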

Videos

Data

Method

Real-Robot Experiments

Real Only


Real+Sim


Multitask Unconditional


Multitask Compositional
