🤖 AI Summary
This work addresses the problem of autonomously selecting and assembling functionally equivalent structures from a set of semantically unaligned, off-the-shelf objects, using only an RGB image of the target object in the wild, to emulate human improvisational craftsmanship. Methodologically, it first employs a mask segmentation network to parse the target's visible parts; it then retrieves labeled template meshes, optimizes their poses to select the best-fitting template, and simplifies that template's parts into primitive geometric shapes such as cuboids and cylinders; finally, it applies a search algorithm that matches available objects to template parts based on both local and global proportions. The key contribution is introducing the Craft Assembly Task, a "creative assembly from non-matching parts" paradigm that relaxes the conventional requirement of exact part-to-part correspondence in assembly tasks. The approach achieves results comparable to exhaustive baselines on two different scenes, and qualitative results from a real-world implementation demonstrate its feasibility for functional reconstruction under semantic and geometric ambiguity.
📝 Abstract
Inspired by traditional handmade crafts, where a person improvises assemblies based on the available objects, we formally introduce the Craft Assembly Task. It is a robotic assembly task that involves building an accurate representation of a given target object using the available objects, which do not directly correspond to its parts. In this work, we focus on selecting the subset of available objects for the final craft, when the given input is an RGB image of the target in the wild. We use a mask segmentation neural network to identify visible parts, followed by retrieving labeled template meshes. These meshes undergo pose optimization to determine the most suitable template. Then, we propose to simplify the parts of the transformed template mesh to primitive shapes like cuboids or cylinders. Finally, we design a search algorithm to find correspondences in the scene based on local and global proportions. For comparison, we develop baselines that consider all possible combinations and select the highest-scoring combination under common metrics for foreground maps and mask accuracy. Our approach achieves comparable results to the baselines for two different scenes, and we show qualitative results for an implementation in a real-world scenario.
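To make the "local and global proportions" idea concrete, here is a minimal sketch of how such a matching could work. All names, the scoring formulas, and the brute-force assignment are assumptions for illustration, not the paper's actual algorithm: each template part and each available object is reduced to its primitive's sorted dimensions; a local score compares scale-invariant aspect ratios within a part, a global score compares each part's size relative to the whole template against each object's size relative to the largest available object, and the lowest-cost assignment is selected.

```python
# Hypothetical sketch of proportion-based part-to-object matching.
# Each part/object is a tuple of primitive dimensions (e.g. cuboid extents).
from itertools import permutations

def local_score(part_dims, obj_dims):
    """Local proportions: dissimilarity of aspect ratios (scale-invariant)."""
    p, o = sorted(part_dims), sorted(obj_dims)
    return sum(abs(a / p[-1] - b / o[-1]) for a, b in zip(p, o))

def assign(parts, objects, w=0.5):
    """Brute-force search over assignments of objects to parts.

    Cost per pair = w * local_score + (1 - w) * global proportion gap,
    where the global term compares a part's size relative to the template
    with an object's size relative to the largest available object.
    """
    part_names, obj_names = list(parts), list(objects)
    t_ref = max(max(d) for d in parts.values())    # template reference size
    o_ref = max(max(d) for d in objects.values())  # scene reference size
    best, best_cost = None, float("inf")
    for perm in permutations(obj_names, len(part_names)):
        cost = 0.0
        for pn, on in zip(part_names, perm):
            p, o = parts[pn], objects[on]
            g = abs(max(p) / t_ref - max(o) / o_ref)
            cost += w * local_score(p, o) + (1 - w) * g
        if cost < best_cost:
            best, best_cost = dict(zip(part_names, perm)), cost
    return best

# Toy example: a simplified car template vs. three household objects.
parts = {"body": (8.0, 3.0, 2.0), "wheel": (2.0, 2.0, 1.0)}
objects = {"box": (4.0, 1.6, 1.1), "cap": (1.0, 1.0, 0.55), "stick": (5.0, 0.5, 0.5)}
print(assign(parts, objects))  # the elongated box suits the body, the squat cap the wheel
```

The brute-force search is only tractable for small object sets; the combination of a scale-invariant local term with a scene-relative global term is what lets a small cap stand in for a wheel while rejecting a long stick of similar volume.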