A Comparative Study on State-Action Spaces for Learning Viewpoint Selection and Manipulation with Diffusion Policy

šŸ“… 2024-09-22
šŸ›ļø arXiv.org
šŸ“ˆ Citations: 3
✨ Influential: 0
šŸ“„ PDF
šŸ¤– AI Summary
Perception–manipulation coordination remains challenging in static-camera settings—such as robotic surgery or cluttered environments—where limited visual observability constrains effective task execution. Method: This paper proposes a joint learning framework that co-optimizes dynamic camera viewpoints and robotic arm manipulation, centered on systematic analysis of state–action space representations for diffusion policies. Contribution/Results: We quantitatively demonstrate, for the first time, that spectral characteristics—particularly high-frequency components—in state–action representations critically govern policy convergence and robustness. Specifically, a pose representation combining look-at inverse kinematics with Euler angles significantly improves task success rates: it achieves an average 18.7% gain over alternative configurations in both simulation and real-world dual-arm experiments. This validates the superiority of the proposed representation for dexterous perception–manipulation coordination under visual constraints.

Technology Category

Application Category

šŸ“ Abstract
Robotic manipulation tasks often rely on static cameras for perception, which can limit flexibility, particularly in scenarios like robotic surgery and cluttered environments where mounting static cameras is impractical. Ideally, robots could jointly learn a policy for dynamic viewpoint and manipulation. However, it remains unclear which state-action space is most suitable for this complex learning process. To enable manipulation with dynamic viewpoints and to better understand impacts from different state-action spaces on this policy learning process, we conduct a comparative study on the state-action spaces for policy learning and their impacts on the performance of visuomotor policies that integrate viewpoint selection with manipulation. Specifically, we examine the configuration space of the robotic system, the end-effector space with a dual-arm Inverse Kinematics (IK) solver, and the reduced end-effector space with a look-at IK solver to optimize rotation for viewpoint selection. We also assess variants with different rotation representations. Our results demonstrate that state-action spaces utilizing Euler angles with the look-at IK achieve superior task success rates compared to other spaces. Further analysis suggests that these performance differences are driven by inherent variations in the high-frequency components across different state-action spaces and rotation representations.
Problem

Research questions and friction points this paper is trying to address.

Optimizing active perception for dynamic viewpoint selection
Learning simultaneous viewpoint and manipulation coordination
Addressing policy complexity in integrated robotic control
Innovation

Methods, ideas, or system contributions that make the work stand out.

Diffusion policy with look-at IK solver
Automatic camera orientation optimization
Integrated learning framework for coordination
šŸ”Ž Similar Papers
No similar papers found.
Xiatao Sun
Xiatao Sun
Ph.D. student, Yale University
F
Francis Fan
Department of Computer Science, Yale University, New Haven, CT 06520, USA
Y
Yinxing Chen
Department of Computer Science, Yale University, New Haven, CT 06520, USA
Daniel Rakita
Daniel Rakita
Yale University
roboticsmotion planningoptimizationmachine learning