Delay-Aware Diffusion Policy: Bridging the Observation-Execution Gap in Dynamic Tasks

📅 2025-12-08
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In dynamic robotic tasks, perception–actuation latency (tens to hundreds of milliseconds) induces state inconsistency, severely degrading control accuracy and task success rates. To address this, we propose the first policy learning framework that explicitly models empirically measured inference latency: it incorporates measured latency as an explicit conditional input into a diffusion-based policy, jointly optimizing latency-compensated trajectory generation and latency-aware action selection; latency-conditioned augmentation further enables natural generalization from the zero-latency assumption. Our approach is architecture-agnostic, requires no modification to underlying controllers, and integrates seamlessly with diverse imitation learning paradigms. Experiments across multi-task and multi-latency settings demonstrate significantly higher task success rates and improved robustness over baselines. Moreover, our work advances standardized evaluation grounded in empirically measured latency—establishing it as a principled benchmark for real-world robotic learning.

Technology Category

Application Category

📝 Abstract
As a robot senses and selects actions, the world keeps changing. This inference delay creates a gap of tens to hundreds of milliseconds between the observed state and the state at execution. In this work, we take the natural generalization from zero delay to measured delay during training and inference. We introduce Delay-Aware Diffusion Policy (DA-DP), a framework for explicitly incorporating inference delays into policy learning. DA-DP corrects zero-delay trajectories to their delay-compensated counterparts, and augments the policy with delay conditioning. We empirically validate DA-DP on a variety of tasks, robots, and delays and find its success rate more robust to delay than delay-unaware methods. DA-DP is architecture agnostic and transfers beyond diffusion policies, offering a general pattern for delay-aware imitation learning. More broadly, DA-DP encourages evaluation protocols that report performance as a function of measured latency, not just task difficulty.
Problem

Research questions and friction points this paper is trying to address.

Addresses robot action delay in dynamic environments
Compensates for observation-execution gap in policy learning
Enhances robustness to latency in imitation learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Incorporates inference delays into policy learning
Corrects trajectories with delay compensation
Uses delay conditioning to augment policy robustness
🔎 Similar Papers
No similar papers found.