Responsive Noise-Relaying Diffusion Policy: Responsive and Efficient Visuomotor Control

📅 2025-02-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing diffusion-based policies for imitation learning generate actions in multi-step chunks, incurring high latency and failing to meet the real-time requirements of robotic visuomotor control. To address this, the paper proposes a noise-relaying buffer coupled with sequential denoising: the buffer holds actions at progressively increasing noise levels, enabling instantaneous denoising at the head and progressive denoising at the tail. This preserves action continuity while making each executed action responsive to the latest visual observation. The method further combines a conditional denoising diffusion model, serialized progressive noise scheduling, and noise-buffer reuse to achieve end-to-end vision-to-action mapping. Experiments demonstrate an 18% improvement in success rate on latency-sensitive tasks and a 6.9% gain over the best prior acceleration method on standard tasks, balancing real-time responsiveness with motion consistency.

📝 Abstract
Imitation learning is an efficient method for teaching robots a variety of tasks. Diffusion Policy, which uses a conditional denoising diffusion process to generate actions, has demonstrated superior performance, particularly in learning from multi-modal demonstrations. However, it relies on executing multiple actions to retain performance and prevent mode bouncing, which limits its responsiveness, as actions are not conditioned on the most recent observations. To address this, we introduce Responsive Noise-Relaying Diffusion Policy (RNR-DP), which maintains a noise-relaying buffer with progressively increasing noise levels and employs a sequential denoising mechanism that generates immediate, noise-free actions at the head of the sequence while appending noisy actions at the tail. This ensures that actions are responsive and conditioned on the latest observations, while maintaining motion consistency through the noise-relaying buffer. This design enables the handling of tasks requiring responsive control and accelerates action generation by reusing denoising steps. Experiments on response-sensitive tasks demonstrate that, compared to Diffusion Policy, ours achieves an 18% improvement in success rate. Further evaluation on regular tasks demonstrates that RNR-DP also exceeds the best acceleration method by 6.9%, highlighting its computational efficiency advantage in scenarios where responsiveness is less critical.
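The buffer mechanism described above can be sketched in a few lines. The following is a minimal, hypothetical illustration (not the paper's implementation): slot i of the buffer holds an action at noise level i, each control tick runs one denoising pass over every slot conditioned on the latest observation, the now noise-free head is executed, and a fresh fully-noisy action is appended at the tail. The `denoise_step` callable and the linear noise schedule are assumptions for illustration; the paper uses a learned conditional denoising diffusion model with a serialized progressive noise schedule.

```python
import numpy as np

class NoiseRelayBuffer:
    """Illustrative sketch of a noise-relaying buffer (hypothetical
    implementation). Slot i holds an action at noise level i: the head
    (level 0) is noise-free and ready to execute; the tail is pure noise."""

    def __init__(self, horizon, action_dim, denoise_step, seed=0):
        self.horizon = horizon
        self.action_dim = action_dim
        # denoise_step(action, level, obs) -> action one noise level lower.
        self.denoise_step = denoise_step
        self.rng = np.random.default_rng(seed)
        # Initialize slot i at noise level i+1, so one denoising pass
        # brings the head to level 0 before the first action is executed.
        self.levels = list(range(1, horizon + 1))
        self.buffer = [self.rng.normal(size=action_dim) for _ in range(horizon)]

    def step(self, obs):
        """One control tick: denoise every slot once, conditioned on the
        latest observation, then pop the noise-free head and append a
        fresh fully-noisy action at the tail."""
        for i in range(self.horizon):
            self.buffer[i] = self.denoise_step(self.buffer[i], self.levels[i], obs)
            self.levels[i] -= 1
        action = self.buffer.pop(0)   # head is at level 0: execute immediately
        self.levels.pop(0)
        self.buffer.append(self.rng.normal(size=self.action_dim))
        self.levels.append(self.horizon)  # tail re-enters at maximum noise
        return action
```

Because every slot advances one noise level per tick, a noise-free action is emitted on every control step while later actions keep refining in the background, which is how the design trades multi-step chunked execution for per-step responsiveness.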
Problem

Research questions and friction points this paper is trying to address.

Making robot actions responsive to the latest observations
Improving the efficiency of visuomotor control
Reducing the computational latency of diffusion-based action generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Noise-relaying buffer with progressively increasing noise levels
Sequential denoising that emits a noise-free action at every step
Denoising-step reuse that accelerates action generation