🤖 AI Summary
Text-to-image diffusion models exhibit an inherent viewpoint bias in multi-view 3D generation and editing, leading to inconsistent cross-view appearance. To address this, we propose TD-Attn, a novel 3D-aware attention framework that, for the first time, mathematically characterizes the origin of this bias. TD-Attn models viewpoint distributions via a 3D Gaussian parameterization and employs a Semantic Guidance Tree to precisely localize attention responses. It further introduces hierarchical cross-attention modulation across UNet layers to jointly optimize geometric and semantic consistency. As a plug-and-play module, TD-Attn integrates seamlessly into mainstream diffusion models without requiring explicit 3D supervision. Extensive experiments demonstrate significant improvements in multi-view consistency across diverse tasks, including 3D generation, text-driven editing, and novel-view extrapolation, outperforming state-of-the-art methods.
📝 Abstract
Versatile 3D tasks (e.g., generation and editing) that distill from Text-to-Image (T2I) diffusion models have attracted significant research interest because they do not rely on extensive 3D training data. However, T2I models suffer from a prior-view bias that produces conflicting appearances across different views of an object. This bias causes subject-words to preferentially activate prior-view features during cross-attention (CA) computation, regardless of the target view condition. To overcome this limitation, we conduct a comprehensive mathematical analysis that reveals the root cause of the prior-view bias in T2I models. Moreover, we find that the prior view affects CA differently across UNet layers. We therefore propose a novel framework, TD-Attn, which addresses multi-view inconsistency via two key components: (1) the 3D-Aware Attention Guidance Module (3D-AAG) constructs a view-consistent 3D attention Gaussian for subject-words to enforce spatial consistency across attention-focused regions, compensating for the limited spatial information in the CA maps of individual 2D views; (2) the Hierarchical Attention Modulation Module (HAM) uses a Semantic Guidance Tree (SGT) to direct the Semantic Response Profiler (SRP) in localizing and modulating the CA layers that are most responsive to view conditions, and the enhanced CA maps in turn support the construction of more consistent 3D attention Gaussians. Notably, HAM enables semantic-specific interventions, allowing controllable and precise 3D editing. Extensive experiments firmly establish that TD-Attn can serve as a universal plugin, significantly enhancing multi-view consistency across 3D tasks.
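The core operation the abstract refers to, a subject-word's response in a cross-attention map and its modulation, can be sketched as a minimal toy. This is an illustrative approximation only: the function name, the additive `bias` form of the modulation, and the array shapes are our assumptions, not the paper's actual TD-Attn implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(Q, K, V, subject_idx=None, bias=0.0):
    """Cross-attention between image queries Q (n_pix, d) and text-token
    keys/values K, V (n_tok, d). Optionally adds a positive bias to the
    logits of one subject token, a toy stand-in for modulating that
    token's response in a view-sensitive CA layer."""
    logits = Q @ K.T / np.sqrt(Q.shape[-1])   # (n_pix, n_tok)
    if subject_idx is not None:
        logits[:, subject_idx] += bias        # boost the subject-word column
    A = softmax(logits, axis=-1)              # the CA map over text tokens
    return A @ V, A
```

Because the bias is added to a single logit column, every pixel's softmax weight on the subject token strictly increases with `bias > 0`, which is the intuition behind steering where a subject-word's attention concentrates.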