🤖 AI Summary
Robots struggle to autonomously perform complex maintenance tasks—such as disassembly and assembly—in unstructured environments due to environmental uncertainties, particularly discrepancies between CAD models and real-world scenes.
Method: This paper proposes a closed-loop autonomous execution framework integrating symbolic task planning with multimodal perception. It introduces the first approach that jointly leverages CAD prior models and real-time RGB-D sensory data to dynamically refine the symbolic planner. The framework unifies task parsing, executable instruction generation, and adaptive closed-loop control to enable end-to-end mapping from high-level intent to low-level robot actions.
Results: Experimental validation in realistic maintenance scenarios demonstrates robust performance under pose deviations of up to ±5 mm between the model and reality. The system successfully executes fully autonomous disassembly and assembly operations, significantly improving reliability and generalization for maintenance tasks in unstructured environments.
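To make the closed-loop idea above concrete, the following is a minimal illustrative sketch of one plausible structure: each symbolic action carries a nominal pose from the CAD prior, which is replaced by an RGB-D pose estimate before dispatch when the deviation stays within the tolerated range. All names here (`Action`, `refine_pose`, `execute_plan`, the 5 mm tolerance parameterization) are hypothetical; the paper does not publish an API.

```python
# Hedged sketch of a CAD-prior + RGB-D closed-loop execution pipeline.
# Class/function names are illustrative, not from the paper.
from dataclasses import dataclass

@dataclass
class Action:
    name: str      # e.g. "unscrew", "grasp", "place"
    target: str    # part identifier taken from the CAD model
    pose: tuple    # nominal pose from the CAD prior (here: x, y, z in mm)

def refine_pose(nominal: tuple, observed: tuple, tolerance_mm: float = 5.0) -> tuple:
    """Replace the CAD-derived pose with the perceived one when the
    per-axis deviation is within tolerance (±5 mm, as in the evaluation)."""
    if all(abs(n - o) <= tolerance_mm for n, o in zip(nominal, observed)):
        return observed
    raise RuntimeError("Deviation exceeds tolerance; replanning required")

def execute_plan(plan, perceive, dispatch):
    """Closed loop: before each symbolic action, refine its target pose
    with an RGB-D estimate, then emit the executable instruction."""
    for action in plan:
        observed = perceive(action.target)   # RGB-D pose estimate
        action.pose = refine_pose(action.pose, observed)
        dispatch(action)                     # low-level robot command
```

The key design point sketched here is that the symbolic plan is kept intact while only the geometric parameters of each action are corrected online, which is one way to reconcile a static CAD prior with a drifting real-world scene.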
📝 Abstract
Automating complex tasks with robotic systems requires skills for planning, control, and execution. This paper proposes a complete robotic system for maintenance automation that can perform disassembly and assembly operations under environmental uncertainties (e.g., deviations between prior model information and the real scene). The cognition of the robotic system is based on a planning approach (using CAD and RGB-D data) and includes a method to interpret a symbolic plan and transform it into a set of executable robot instructions. The complete system is experimentally evaluated in real-world applications. This work represents a first step toward transferring these theoretical results into a practical robotic solution.