👏🏻 CLAP: Contrastive Latent Action Pretraining

Learning Vision-Language-Action Models from Human Videos

¹Tsinghua University  ²Astribot  ³University of Hong Kong  ⁴MIT
*Equal Contribution  ✉ Corresponding Author

CLAP aligns visual latent actions from human videos with proprioceptive latent actions from robot trajectories through contrastive learning, enabling effective skill transfer from abundant human demonstrations to robotic execution.

Abstract

Generalist Vision-Language-Action (VLA) models are currently hindered by the scarcity of robotic data compared to the abundance of human video demonstrations. Existing Latent Action Models attempt to leverage video data but often suffer from visual entanglement, capturing visual noise rather than manipulation skills.

To address this, we propose Contrastive Latent Action Pretraining (CLAP), a framework that aligns the visual latent space from videos with a proprioceptive latent space from robot trajectories. By employing contrastive learning, CLAP maps video transitions onto a quantized, physically executable codebook. Building on this representation, we introduce a dual-formulation VLA framework offering both CLAP-NTP, an autoregressive model excelling at instruction following and object generalization, and CLAP-RF, a Rectified Flow-based policy designed for high-frequency, precise manipulation.

Furthermore, we propose a Knowledge Matching (KM) regularization strategy to mitigate catastrophic forgetting during fine-tuning. Extensive experiments demonstrate that CLAP significantly outperforms strong baselines, enabling the effective transfer of skills from human videos to robotic execution.

Method

CLAP Method Overview

Contrastive Latent Action Learning

Unlike conventional methods that rely solely on limited robot teleoperation data, CLAP learns an executable latent action space from large-scale human demonstrations. We align visual dynamics from videos with proprioceptive dynamics from robot trajectories through contrastive learning.
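The sketch below illustrates the core idea under simple assumptions: a visual transition encoder and a proprioceptive transition encoder map paired transitions into a shared latent space, a symmetric InfoNCE loss pulls matched pairs together, and a nearest-neighbor codebook lookup discretizes the latent action. Module names, dimensions, and the codebook lookup are illustrative, not the authors' exact implementation.

```python
# Minimal sketch of contrastive latent action alignment (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TransitionEncoder(nn.Module):
    """Encodes a transition (x_t, x_{t+k}) into a unit-norm latent action."""
    def __init__(self, in_dim, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * in_dim, 256), nn.GELU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, x_t, x_next):
        z = self.net(torch.cat([x_t, x_next], dim=-1))
        return F.normalize(z, dim=-1)  # cosine-similarity-friendly latents

def contrastive_loss(z_vis, z_prop, temperature=0.07):
    """Symmetric InfoNCE: paired (video, robot-trajectory) transitions attract,
    all other pairs in the batch repel."""
    logits = z_vis @ z_prop.t() / temperature
    labels = torch.arange(z_vis.size(0), device=z_vis.device)
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))

def quantize(z, codebook):
    """Map a latent action to its nearest codebook entry, giving a discrete,
    physically executable latent action token."""
    idx = torch.cdist(z, codebook).argmin(dim=-1)
    return codebook[idx], idx

# Toy usage with random features standing in for video / proprioception inputs.
vis_enc, prop_enc = TransitionEncoder(512), TransitionEncoder(64)
codebook = nn.Parameter(torch.randn(256, 128))   # K=256 latent action codes
o_t, o_next = torch.randn(8, 512), torch.randn(8, 512)
q_t, q_next = torch.randn(8, 64), torch.randn(8, 64)
loss = contrastive_loss(vis_enc(o_t, o_next), prop_enc(q_t, q_next))
```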

Dual-Formulation VLA

CLAP-NTP retains an autoregressive architecture for strong reasoning and instruction following. CLAP-RF uses Rectified Flow for high-frequency inference (183 ms on an RTX 3090) with high precision.
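As a rough illustration of the flow-based formulation, the sketch below shows a generic rectified-flow training loss (predicting the straight-line velocity between noise and the target action) and few-step Euler sampling. The network, conditioning, and step count are placeholders, not CLAP-RF's actual architecture.

```python
# Minimal rectified-flow policy sketch (illustrative, not the paper's model).
import torch
import torch.nn as nn

class VelocityNet(nn.Module):
    """Predicts the velocity field v(x_t, t | context) for an action chunk."""
    def __init__(self, act_dim, ctx_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(act_dim + ctx_dim + 1, hidden), nn.GELU(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, x_t, t, ctx):
        return self.net(torch.cat([x_t, t, ctx], dim=-1))

def rectified_flow_loss(model, actions, ctx):
    """Regress the constant velocity x1 - x0 along the straight path."""
    x0 = torch.randn_like(actions)           # noise endpoint
    t = torch.rand(actions.size(0), 1)       # time in [0, 1]
    x_t = (1 - t) * x0 + t * actions         # linear interpolation
    target_v = actions - x0
    return ((model(x_t, t, ctx) - target_v) ** 2).mean()

@torch.no_grad()
def sample_actions(model, ctx, act_dim, steps=10):
    """Integrate the learned ODE with a few Euler steps; near-straight flows
    need only a handful of steps, which keeps inference latency low."""
    x = torch.randn(ctx.size(0), act_dim)
    for i in range(steps):
        t = torch.full((ctx.size(0), 1), i / steps)
        x = x + model(x, t, ctx) / steps
    return x

# Toy usage: 7-DoF actions conditioned on a 128-d observation embedding.
model = VelocityNet(act_dim=7, ctx_dim=128)
loss = rectified_flow_loss(model, torch.randn(8, 7), torch.randn(8, 128))
actions = sample_actions(model, torch.randn(8, 128), act_dim=7)
```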

Knowledge Matching Regularization

Our KM strategy mitigates catastrophic forgetting during fine-tuning, preserving the semantic knowledge from pretraining while adapting to new tasks.
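One common way to write such a regularizer, shown below as an assumption rather than the paper's exact KM formulation, is to penalize divergence between the fine-tuned model's predictions and those of a frozen copy of the pretrained model, weighted by a coefficient lambda.

```python
# Hypothetical knowledge-matching regularizer sketch; the KL form and the
# weight `lam` are illustrative assumptions, not the paper's definition.
import copy
import torch
import torch.nn.functional as F

def km_regularized_loss(model, frozen_model, batch, task_loss_fn, lam=0.1):
    """Task loss plus a penalty for drifting from pretrained predictions."""
    task_loss = task_loss_fn(model, batch)
    with torch.no_grad():
        ref_logits = frozen_model(batch["inputs"])   # pretrained behaviour
    cur_logits = model(batch["inputs"])
    km = F.kl_div(F.log_softmax(cur_logits, dim=-1),
                  F.softmax(ref_logits, dim=-1),
                  reduction="batchmean")
    return task_loss + lam * km

# Snapshot the pretrained weights once, before fine-tuning begins:
# frozen_model = copy.deepcopy(model).eval().requires_grad_(False)
```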

Real World Experiments

Key Results

+25% success rate improvement from fine-tuning with human videos
183 ms inference latency (CLAP-RF on an RTX 3090)
3 robot platforms: Astribot, AgiBot, Franka

Citation

@article{zhang2026clap,
  title={CLAP: Contrastive Latent Action Pretraining for 
         Learning Vision-Language-Action Models from Human Videos},
  author={Zhang, Chubin and Wang, Jianan and Gao, Zifeng and Su, Yue and Dai, Tianru 
          and Zhou, Cai and Lu, Jiwen and Tang, Yansong},
  journal={arXiv preprint arXiv:2601.04061},
  year={2026}
}