Assembler: Scalable 3D Part Assembly via Anchor Point Diffusion

📅 2025-06-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenging problem of reconstructing complete, high-fidelity 3D objects from an arbitrary number of unstructured input part meshes—varying in geometry, topology, and scale—alongside a single reference image. Methodologically, it pioneers a generative paradigm for part assembly, circumventing the long-standing SE(3) pose estimation bottleneck via a novel sparse anchor-point cloud representation that encodes shape-centric part localization. To support learning, the authors synthesize a large-scale dataset comprising over 320K part-assembly instances and design an image-guided, part-aware diffusion model. Evaluated on PartNet, the framework achieves state-of-the-art performance and, for the first time, enables faithful, editable 3D assembly of complex real-world objects—including chairs and lamps—while supporting interactive, composable 3D modeling systems.

📝 Abstract
We present Assembler, a scalable and generalizable framework for 3D part assembly that reconstructs complete objects from input part meshes and a reference image. Unlike prior approaches that mostly rely on deterministic part pose prediction and category-specific training, Assembler is designed to handle diverse, in-the-wild objects with varying part counts, geometries, and structures. It addresses the core challenges of scaling to general 3D part assembly through innovations in task formulation, representation, and data. First, Assembler casts part assembly as a generative problem and employs diffusion models to sample plausible configurations, effectively capturing ambiguities arising from symmetry, repeated parts, and multiple valid assemblies. Second, we introduce a novel shape-centric representation based on sparse anchor point clouds, enabling scalable generation in Euclidean space rather than SE(3) pose prediction. Third, we construct a large-scale dataset of over 320K diverse part-object assemblies using a synthesis and filtering pipeline built on existing 3D shape repositories. Assembler achieves state-of-the-art performance on PartNet and is the first to demonstrate high-quality assembly for complex, real-world objects. Based on Assembler, we further introduce an interesting part-aware 3D modeling system that generates high-resolution, editable objects from images, demonstrating potential for interactive and compositional design. Project page: https://assembler3d.github.io
Problem

Research questions and friction points this paper is trying to address.

Reconstructing complete 3D objects from unstructured part meshes and a single reference image
Handling diverse, in-the-wild objects with varying part counts, geometries, and structures
Moving beyond deterministic, category-specific SE(3) pose prediction to general, scalable 3D assembly
Innovation

Methods, ideas, or system contributions that make the work stand out.

Casts part assembly as generative sampling with an image-guided, part-aware diffusion model
Introduces a sparse anchor-point-cloud representation for shape-centric part localization in Euclidean space
Constructs a large-scale training dataset of over 320K part-object assemblies via a synthesis and filtering pipeline
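The core idea, as described in the abstract, is to generate per-part anchor points in Euclidean space with a diffusion model instead of regressing SE(3) poses directly; a rigid pose can then be recovered by aligning each part's canonical anchors to the generated ones. The sketch below is a toy illustration of that pipeline, not the paper's implementation: `ddpm_sample` is a drastically simplified stand-in for the actual diffusion sampler, `denoise_fn` is a hypothetical denoiser interface, and the alignment step uses a standard Kabsch/Procrustes fit.

```python
import numpy as np

def ddpm_sample(denoise_fn, shape, steps=50, rng=None):
    """Toy diffusion-style sampler over anchor-point coordinates.

    `denoise_fn(x, t)` predicts the clean anchor set from noisy `x`
    (hypothetical interface; the paper's model is image-guided and
    part-aware, which this sketch omits).
    """
    rng = rng or np.random.default_rng(0)
    x = rng.standard_normal(shape)          # start from Gaussian noise
    for t in np.linspace(1.0, 0.0, steps):  # anneal noise level to zero
        x0_hat = denoise_fn(x, t)
        x = x0_hat + t * (x - x0_hat)       # blend toward the prediction
    return x

def fit_rigid(src, dst):
    """Least-squares rigid transform (Kabsch) mapping src anchors to dst.

    Returns (R, t) such that R @ src[i] + t ≈ dst[i]; this is how a
    part pose could be read off the generated anchors afterward.
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # reflection guard
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

Usage under these assumptions: sample an anchor set for a part, then align the part's canonical anchors to it to obtain its placement. With a dummy denoiser that always predicts a fixed target, the sampler converges to that target, and `fit_rigid` recovers the exact rotation and translation for noise-free correspondences.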