🤖 AI Summary
This work addresses the challenge of enabling robots to autonomously construct stable structures without relying on predefined architectural blueprints. The authors propose a deep Q-learning approach based on successor features, which defines tasks through target and obstacle configurations rather than fixed plans. By integrating closed-loop control with a discrete 2D block-assembly model, the method enables adaptive decision-making during construction. Notably, this is the first application of successor-feature representations to unplanned structural assembly, improving the system's robustness and generalization under environmental uncertainty and construction disturbances. Evaluated across 15 distinct 2D assembly tasks and validated on a physical robot, the framework demonstrates strong resilience to construction noise, supporting its practical feasibility in real-world settings.
📝 Abstract
This paper presents a novel autonomous robotic assembly framework for constructing stable structures without relying on predefined architectural blueprints. Instead of following fixed plans, construction tasks are defined through targets and obstacles, allowing the system to adapt more flexibly to environmental uncertainty and variations during the building process. A reinforcement learning (RL) policy, trained using deep Q-learning with successor features, serves as the decision-making component. As a proof of concept, we evaluate the approach on a benchmark of 15 2D robotic assembly tasks involving discrete block construction. Experiments with a real-world closed-loop robotic setup demonstrate the feasibility of the method and its ability to handle construction noise. The results suggest that our framework offers a promising direction for more adaptable and robust robotic construction in real-world environments.
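To give a sense of how successor features support task definitions via targets and obstacles, the following is a minimal illustrative sketch (not the paper's code): in successor-feature RL, Q-values factor as Q(s, a) = ψ(s, a) · w, where ψ(s, a) are expected discounted feature occupancies and w is a task-weight vector. The feature names, dimensions, and numeric values below are hypothetical assumptions chosen only to illustrate the decomposition.

```python
import numpy as np

# Hypothetical feature dimensions for a block-placement state-action pair:
# [target coverage, obstacle overlap, structural stability]
# Successor features psi(s, a) for four candidate placements (assumed values).
psi = np.array([
    [0.9, 0.0, 0.8],  # placement 0: covers a target, stable
    [0.4, 0.7, 0.5],  # placement 1: partially overlaps an obstacle
    [0.6, 0.0, 0.2],  # placement 2: covers a target but is unstable
    [0.0, 0.0, 0.9],  # placement 3: stable but makes no target progress
])

# Task vector w: reward target coverage, penalize obstacle overlap,
# mildly reward stability. Changing w redefines the task without retraining psi.
w = np.array([1.0, -2.0, 0.5])

# Q(s, a) = psi(s, a) . w for each candidate action.
q = psi @ w

# Greedy action selection under the current task definition.
best = int(np.argmax(q))
```

Because ψ is task-independent, swapping in a new w (e.g., different target and obstacle layouts) re-scores all placements immediately, which is the generalization property the abstract appeals to.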