🤖 AI Summary
This work addresses General Text-to-3D (GT23D) generation, tackling two core challenges jointly: semantic consistency (text–3D alignment) and multi-view consistency (cross-view geometric coherence). The proposed framework, SeMv-3D, optimizes both objectives together through two components: Triplane Prior Learning (TPL) — the first method to learn geometry-aware triplane priors — which captures spatial correspondences across the three orthogonal planes via a dedicated Orthogonal Attention mechanism, and Prior-based Semantic Aligning in Triplanes (SAT), which uses attention-driven cross-modal feature alignment to tie textual semantics to the triplane representation and supports diffusion-based any-view synthesis. The approach achieves new state-of-the-art multi-view consistency while maintaining top-tier semantic consistency, with experiments showing clear gains in 3D structural plausibility and text fidelity — a robust foundation for high-fidelity, controllable GT23D generation.
📝 Abstract
General Text-to-3D (GT23D) generation is crucial for creating diverse 3D content across objects and scenes, yet it faces two key challenges: 1) ensuring semantic consistency between the input text and the generated 3D model, and 2) maintaining multi-view consistency across different viewpoints of that model. Existing approaches typically address only one of these challenges, often yielding suboptimal semantic fidelity or structural coherence. To overcome these limitations, we propose SeMv-3D, a novel framework that jointly enhances semantic alignment and multi-view consistency in GT23D generation. At its core, we introduce Triplane Prior Learning (TPL), which learns triplane priors by capturing spatial correspondences across the three orthogonal planes with a dedicated Orthogonal Attention mechanism, thereby ensuring geometric consistency across viewpoints. We further present Prior-based Semantic Aligning in Triplanes (SAT), which enables consistent any-view synthesis by leveraging attention-based feature alignment to reinforce the correspondence between textual semantics and triplane representations. Extensive experiments demonstrate that our method sets a new state of the art in multi-view consistency while remaining competitive in semantic consistency with methods focused solely on semantic alignment. These results highlight our approach's ability to balance and excel in both dimensions, establishing a new benchmark in the field.
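To make the triplane idea concrete, the following is a minimal NumPy sketch of one plausible reading of "Orthogonal Attention": tokens from each of the three orthogonal planes (xy, xz, yz) attend to the tokens of the other two planes, so that spatial correspondences are exchanged across planes. The function name, the residual update, and the shared projection matrices `Wq`/`Wk`/`Wv` are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def orthogonal_attention(planes, Wq, Wk, Wv):
    """Hypothetical cross-plane attention over triplane features.

    planes: list of three arrays of shape (N, d), one per orthogonal
            plane (flattened H*W tokens each). For each plane, its
            tokens attend to the concatenated tokens of the other two
            planes; the paper's exact mechanism may differ.
    """
    out = []
    for i, p in enumerate(planes):
        q = p @ Wq                                             # (N, d)
        others = np.concatenate(
            [planes[j] for j in range(3) if j != i], axis=0)   # (2N, d)
        k, v = others @ Wk, others @ Wv
        attn = softmax(q @ k.T / np.sqrt(q.shape[-1]))         # (N, 2N)
        out.append(p + attn @ v)  # residual update keeps plane shape
    return out

# Toy usage with random plane features and identity projections.
rng = np.random.default_rng(0)
N, d = 16, 8  # tokens per plane, feature dimension (both illustrative)
planes = [rng.normal(size=(N, d)) for _ in range(3)]
I = np.eye(d)
updated = orthogonal_attention(planes, I, I, I)
```

Each output plane keeps its original `(N, d)` shape, so the module could slot into a larger network wherever the triplane features are refined.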