GaussianBlock: Building Part-Aware Compositional and Editable 3D Scene by Primitives and Gaussians

๐Ÿ“… 2024-10-02
๐Ÿ›๏ธ arXiv.org
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Existing 3D reconstruction methods achieve high fidelity but suffer from highly entangled representations, lacking interpretability and physically grounded editability. To address this, we propose GaussianBlockโ€”a novel hybrid representation that tightly integrates semantic-part-aware differentiable geometric primitives with 3D Gaussian splatting, enabling both strong editability and high fidelity. Our method introduces three key innovations: (i) an attention-guided centering loss for semantic coherence; (ii) a dynamic split-and-merge strategy for adaptive geometry refinement; and (iii) a binding inheritance mechanism for compact, disentangled modeling. By jointly optimizing attention maps driven by 2D semantic priors, coarse geometric primitive selection, and fine-grained Gaussian refinement, GaussianBlock supports pixel-accurate, physically consistent direct editing. Extensive evaluations across multiple benchmarks demonstrate state-of-the-art reconstruction quality, alongside superior disentanglement, composability, and representational compactness.

Technology Category

Application Category

๐Ÿ“ Abstract
Recently, with the development of Neural Radiance Fields and Gaussian Splatting, 3D reconstruction techniques have achieved remarkably high fidelity. However, the latent representations learnt by these methods are highly entangled and lack interpretability. In this paper, we propose a novel part-aware compositional reconstruction method, called GaussianBlock, that enables semantically coherent and disentangled representations, allowing for precise and physical editing akin to building blocks, while simultaneously maintaining high fidelity. Our GaussianBlock introduces a hybrid representation that leverages the advantages of both primitives, known for their flexible actionability and editability, and 3D Gaussians, which excel in reconstruction quality. Specifically, we achieve semantically coherent primitives through a novel attention-guided centering loss derived from 2D semantic priors, complemented by a dynamic splitting and fusion strategy. Furthermore, we utilize 3D Gaussians that hybridize with primitives to refine structural details and enhance fidelity. Additionally, a binding inheritance strategy is employed to strengthen and maintain the connection between the two. Our reconstructed scenes are evidenced to be disentangled, compositional, and compact across diverse benchmarks, enabling seamless, direct and precise editing while maintaining high quality.
Problem

Research questions and friction points this paper is trying to address.

Achieve disentangled 3D scene representations for interpretability
Combine primitives and Gaussians for editable high-fidelity reconstruction
Enable semantically coherent editing via attention-guided part-aware modeling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid representation combining primitives and 3D Gaussians
Attention-guided centering loss for semantic coherence
Dynamic splitting and fusion strategy for detail refinement
๐Ÿ”Ž Similar Papers
No similar papers found.