Scaling Proprioceptive-Visual Learning
with Heterogeneous Pre-trained Transformers

MIT CSAIL, FAIR
TL;DR: HPT aligns different embodiments to a shared latent space and investigates the scaling behaviors of policy pre-training, as measured by the pre-training objective.

Abstract

One of the key roadblocks to training generalist robotic models today is heterogeneity. Previous robot learning methods typically collect data and train on one specific embodiment for one task, which is expensive and prone to overfitting. This work studies the problem of learning policy representations through heterogeneous pre-training on robot data across different embodiments and tasks at scale. We propose Heterogeneous Pre-trained Transformers (HPT), which pre-train a large, shareable trunk of a policy neural network to learn a task- and embodiment-agnostic shared representation.

This general architecture aligns the proprioception and vision inputs from distinct embodiments to a short sequence of tokens and then processes these tokens to produce control outputs for different tasks. Leveraging recent large-scale multi-embodiment real-world robotic datasets, as well as simulation and human-video datasets, our experiments show that pre-training robotic policies across this heterogeneity exhibits scaling behaviors, up to 50 distinct datasets and 1-billion-parameter models. Pre-trained HPTs outperform previous baselines and enhance fine-tuned policy performance on many unseen downstream tasks and environments in simulator benchmarks and real-world settings.

HPT Concept

Heterogeneous pre-training maps different embodiments, each with its own proprioception and vision sensors, onto a shared latent space via embodiment-specific tokenizers.
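
To illustrate the idea, here is a minimal PyTorch sketch (not the released HPT code) of an embodiment-specific tokenizer that pools a variable number of sensor features into a fixed, short token sequence via cross-attention with learned queries. All class names and dimensions are illustrative assumptions.

import torch
import torch.nn as nn

class CrossAttentionStem(nn.Module):
    """Pools variable-length sensor features into a fixed number of latent tokens."""
    def __init__(self, input_dim: int, latent_dim: int = 256, num_tokens: int = 16):
        super().__init__()
        self.proj = nn.Linear(input_dim, latent_dim)                       # sensor features -> latent space
        self.queries = nn.Parameter(torch.randn(num_tokens, latent_dim))  # learned query tokens
        self.attn = nn.MultiheadAttention(latent_dim, num_heads=8, batch_first=True)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # features: (batch, seq_len, input_dim), e.g. flattened vision patches,
        # or a proprioception reading treated as a length-1 sequence.
        kv = self.proj(features)
        q = self.queries.unsqueeze(0).expand(features.shape[0], -1, -1)
        tokens, _ = self.attn(q, kv, kv)        # (batch, num_tokens, latent_dim)
        return tokens

# Two different sensor streams land in the same (num_tokens, latent_dim) space.
vision_stem = CrossAttentionStem(input_dim=768)   # e.g. ViT patch features
proprio_stem = CrossAttentionStem(input_dim=14)   # e.g. joint positions + velocities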

HPT Architecture

HPT is modularized into stems, a trunk, and heads; the goal is to pre-train the large, shareable transformer trunk.
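
A minimal sketch of this modularization, assuming one stem per embodiment-modality pair and one head per task, with only the trunk shared across all of them. The class and key names below are illustrative, not the released HPT API.

import torch
import torch.nn as nn

class LinearStem(nn.Module):
    """Toy tokenizer stand-in: maps one observation vector to a short token sequence."""
    def __init__(self, input_dim: int, latent_dim: int = 256, num_tokens: int = 16):
        super().__init__()
        self.proj = nn.Linear(input_dim, num_tokens * latent_dim)
        self.num_tokens, self.latent_dim = num_tokens, latent_dim

    def forward(self, x: torch.Tensor) -> torch.Tensor:     # x: (batch, input_dim)
        return self.proj(x).view(-1, self.num_tokens, self.latent_dim)

class HPTPolicy(nn.Module):
    def __init__(self, stems: nn.ModuleDict, trunk: nn.Module, heads: nn.ModuleDict):
        super().__init__()
        self.stems = stems   # embodiment-specific tokenizers
        self.trunk = trunk   # large, shareable transformer
        self.heads = heads   # task-specific action decoders

    def forward(self, embodiment: str, task: str, obs: dict) -> torch.Tensor:
        # Tokenize each modality with the embodiment's stem, concatenate along the
        # token axis, run the shared trunk, and decode actions with the task head.
        tokens = torch.cat(
            [self.stems[f"{embodiment}_{mod}"](x) for mod, x in obs.items()], dim=1)
        latent = self.trunk(tokens)
        return self.heads[task](latent.mean(dim=1))          # pooled latent -> actions

stems = nn.ModuleDict({"franka_proprio": LinearStem(7), "franka_vision": LinearStem(512)})
trunk = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=256, nhead=8, batch_first=True), num_layers=6)
heads = nn.ModuleDict({"pick": nn.Linear(256, 7)})
policy = HPTPolicy(stems, trunk, heads)
action = policy("franka", "pick",
                {"proprio": torch.randn(2, 7), "vision": torch.randn(2, 512)})

Because only the trunk is shared, adding a new embodiment or task means adding a small stem or head while reusing the pre-trained trunk weights.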

Dataset Mixture

The training mixture contains over 50 datasets and 200k trajectories from real-robot teleoperation, human videos, simulation, and deployed robots.
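
Pre-training on such a mixture amounts to weighted sampling across datasets of very different sizes. Below is a sketch of one common recipe (not necessarily the exact scheme HPT uses), with dummy stand-in datasets and illustrative weights.

import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset, WeightedRandomSampler

# Stand-ins for real sources (robot teleop, human video, simulation, deployed robots).
sources = {
    "teleop": TensorDataset(torch.randn(1000, 7)),
    "human_video": TensorDataset(torch.randn(5000, 7)),
    "sim": TensorDataset(torch.randn(2000, 7)),
}
weights = {"teleop": 1.0, "human_video": 0.3, "sim": 0.5}   # per-dataset weights (illustrative)

mixture = ConcatDataset(list(sources.values()))
# Expand per-dataset weights to per-example weights, normalizing by dataset size so a
# dataset's share of each batch tracks its weight rather than its raw example count.
per_example = torch.cat([
    torch.full((len(ds),), weights[name] / len(ds)) for name, ds in sources.items()
])
sampler = WeightedRandomSampler(per_example, num_samples=len(mixture), replacement=True)
loader = DataLoader(mixture, batch_size=256, sampler=sampler)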

Scaling Law

The HPT pre-training objective exhibits scaling behaviors across multiple axes during training, including data and model size.
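
One way to quantify such behavior is to fit a saturating power law to the validation loss along a scaling axis. The snippet below is purely illustrative: the functional form is a common choice in scaling-law studies, and the data points are made up, not HPT results.

import numpy as np
from scipy.optimize import curve_fit

def power_law(n, a, b, c):
    """Saturating power law: loss decays as a * n^(-b) toward an irreducible floor c."""
    return a * np.power(n, -b) + c

model_sizes = np.array([3e6, 1e7, 1e8, 1e9])    # parameter counts (hypothetical)
val_loss = np.array([0.80, 0.62, 0.49, 0.41])   # pre-training losses (hypothetical)

(a, b, c), _ = curve_fit(power_law, model_sizes, val_loss, p0=[1.0, 0.1, 0.3], maxfev=10000)
print(f"L(N) ~= {a:.2f} * N^(-{b:.3f}) + {c:.2f}")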

Real Robot Videos

Pre-trained HPT policies can perform dynamic, long-horizon, and contact-rich precision tasks such as pet care and assembly.

Simulation Videos

HPTs outperform several baselines and improve fine-tuned policy performance by over 20% on unseen tasks across multiple simulator benchmarks and real-world settings.
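
Transfer to an unseen task follows the modular design: keep the pre-trained trunk and attach a fresh stem and head for the new embodiment and task. The sketch below freezes the trunk for simplicity; whether to also fine-tune it is a design choice, and all names and dimensions are illustrative assumptions.

import torch.nn as nn

def build_finetune_policy(pretrained_trunk: nn.Module, obs_dim: int, action_dim: int,
                          latent_dim: int = 256, num_tokens: int = 16) -> nn.ModuleDict:
    """Wrap a pre-trained trunk with a new stem and head for a downstream task."""
    for p in pretrained_trunk.parameters():
        p.requires_grad = False                       # transfer the shared representation as-is
    stem = nn.Sequential(                             # new embodiment-specific tokenizer
        nn.Linear(obs_dim, num_tokens * latent_dim),
        nn.Unflatten(-1, (num_tokens, latent_dim)),
    )
    head = nn.Linear(latent_dim, action_dim)          # new task-specific action decoder
    return nn.ModuleDict({"stem": stem, "trunk": pretrained_trunk, "head": head})

With the trunk frozen, only the small stem and head receive gradients, which keeps downstream fine-tuning cheap even at large trunk scales.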

▶ Comparison with Single-Task Policy
▶ Comparison with Train-From-Scratch
▶ Comparison with Generalist Policy

Paper

View on arXiv

BibTeX

@inproceedings{wang2024hpt,
  author={Lirui Wang and Xinlei Chen and Jialiang Zhao and Kaiming He and Russ Tedrake},
  title={Scaling Proprioceptive-Visual Learning with Heterogeneous Pre-trained Transformers},
  year={2024},
  eprint={2407.16677},
  archivePrefix={arXiv},
  primaryClass={cs.RO},
  url={https://arxiv.org/abs/2407.16677}
}