ADPro: a Test-time Adaptive Diffusion Policy for Robot Manipulation via Manifold and Initial Noise Constraints

📅 2025-08-08
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing diffusion-based policies for robotic manipulation neglect geometric and control priors, resulting in action generation that is unguided within task-relevant subspaces, redundant initial noise, and slow convergence. To address this, we propose ADP (Adaptive Diffusion Policy), a test-time adaptation method that imposes geodesic path constraints on the manipulation manifold to structurally guide the denoising process, while incorporating an analytical initial action estimate to suppress ineffective exploration. ADP requires no retraining and achieves plug-and-play task adaptation via manifold projection and pose registration alone. Experiments on RLBench, CALVIN, and real-world robotic platforms demonstrate that ADP improves success rates by up to 9 percentage points over strong baselines, accelerates execution by 25%, and significantly enhances sampling efficiency and cross-environment generalization.

๐Ÿ“ Abstract
Diffusion policies have recently emerged as a powerful class of visuomotor controllers for robot manipulation, offering stable training and expressive multi-modal action modeling. However, existing approaches typically treat action generation as an unconstrained denoising process, ignoring valuable a priori knowledge about geometry and control structure. In this work, we propose the Adaptive Diffusion Policy (ADP), a test-time adaptation method that introduces two key inductive biases into the diffusion process. First, we embed a geometric manifold constraint that aligns denoising updates with task-relevant subspaces, leveraging the fact that the relative pose between the end-effector and target scene provides a natural gradient direction, and guiding denoising along the geodesic path of the manipulation manifold. Second, to reduce unnecessary exploration and accelerate convergence, we propose an analytically guided initialization: rather than sampling from an uninformative prior, we compute a rough registration between the gripper and target scenes to propose a structured initial noisy action. ADP is compatible with pre-trained diffusion policies and requires no retraining, enabling test-time adaptation that tailors the policy to specific tasks, thereby enhancing generalization across novel tasks and environments. Experiments on RLBench, CALVIN, and real-world datasets show that ADPro, an implementation of ADP, improves success rates, generalization, and sampling efficiency, achieving up to 25% faster execution and success rates up to 9 percentage points higher than strong diffusion baselines.
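The two inductive biases described in the abstract can be sketched compactly. The following is a minimal, illustrative Python sketch, not the authors' implementation: it uses a flat Euclidean stand-in for the SE(3) manipulation-manifold geodesic, and the helper names (`rough_registration_init`, `guided_step`) and the blending weight `w` are assumptions for illustration.

```python
import numpy as np

def rough_registration_init(gripper_pose, target_pose, noise_scale=0.1):
    """Analytic initialization (hypothetical helper): start denoising from a
    coarse gripper-to-target relative-pose estimate instead of pure noise."""
    coarse = target_pose - gripper_pose            # rough registration residual
    return coarse + noise_scale * np.random.randn(*coarse.shape)

def guided_step(action, model_update, target_pose, w=0.3):
    """One guided denoising step: blend the pre-trained policy's update with a
    pull toward the target, a Euclidean stand-in for the geodesic constraint."""
    geodesic_pull = target_pose - action           # geometric prior direction
    return action + (1 - w) * model_update + w * geodesic_pull
```

With `w = 0` this reduces to the unconstrained denoising of a standard diffusion policy; the paper's method instead follows the geodesic of the manipulation manifold and is applied at test time to a pre-trained policy without retraining.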
Problem

Research questions and friction points this paper is trying to address.

Constraining robot action generation with geometric knowledge
Improving diffusion policy efficiency via structured initialization
Enhancing generalization across novel tasks without retraining
Innovation

Methods, ideas, or system contributions that make the work stand out.

Geometric manifold constraint aligns denoising updates
Analytically guided initialization reduces exploration
Test-time adaptation without retraining enhances generalization
Zezeng Li
Dalian University of Technology
Computer Vision, Generative Model

Rui Yang
École Centrale de Lyon, France

Ruochen Chen
École Centrale de Lyon, France

ZhongXuan Luo
School of Software, Dalian University of Technology, China

Liming Chen
École Centrale de Lyon, France