🤖 AI Summary
Addressing geometric distortion and azimuthal (loop) discontinuity in 360° panoramic image generation, this paper proposes TanDiT, a diffusion-based approach that generates a panorama as a grid of tangent-plane images in a single denoising iteration. Methodologically, it (1) introduces a unified diffusion model that jointly synthesizes the tangent-plane images covering the full 360° view; (2) designs a model-agnostic post-processing step that enforces global consistency across the generated panorama; and (3) presents two panorama-specific evaluation metrics, TangentIS and TangentFID, along with a captioned benchmark dataset and standardized evaluation scripts. Experiments demonstrate substantial improvements in geometric fidelity, visual coherence, and semantic alignment for text-to-panorama generation. The method maintains diversity alongside high fidelity, robustly handles detailed and complex text prompts, and integrates with a variety of generative models, indicating strong generalization.
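Neither the summary nor the abstract includes code, but the underlying geometric idea, covering the sphere with gnomonic (tangent-plane) views, is standard. Below is a minimal NumPy sketch that extracts one tangent-plane view from an equirectangular panorama; the function names, the field-of-view parameter, and the nearest-neighbour sampling are illustrative assumptions, not TanDiT's actual implementation.

```python
import numpy as np

def tangent_plane_grid(lat0, lon0, fov_deg, size):
    """Sampling grid for one tangent-plane (gnomonic) view centered at
    (lat0, lon0) in radians. Returns (lat, lon) arrays of shape
    (size, size): the spherical coordinate each tangent pixel maps to."""
    # Tangent-plane coordinates spanning the requested field of view.
    half = np.tan(np.radians(fov_deg) / 2.0)
    x, y = np.meshgrid(np.linspace(-half, half, size),
                       np.linspace(-half, half, size))

    # Inverse gnomonic projection: plane (x, y) -> sphere (lat, lon).
    rho = np.sqrt(x**2 + y**2)
    c = np.arctan(rho)
    rho = np.where(rho == 0, 1e-12, rho)  # avoid division by zero at center
    lat = np.arcsin(np.cos(c) * np.sin(lat0)
                    + y * np.sin(c) * np.cos(lat0) / rho)
    lon = lon0 + np.arctan2(x * np.sin(c),
                            rho * np.cos(lat0) * np.cos(c)
                            - y * np.sin(lat0) * np.sin(c))
    return lat, lon

def sample_tangent_image(pano, lat, lon):
    """Nearest-neighbour lookup of an equirectangular panorama
    (H, W, 3) at the given spherical coordinates."""
    h, w = pano.shape[:2]
    u = ((lon / (2 * np.pi) + 0.5) % 1.0) * w  # wrap around the azimuth seam
    v = (0.5 - lat / np.pi) * h
    return pano[np.clip(v.astype(int), 0, h - 1),
                np.clip(u.astype(int), 0, w - 1)]
```

A full grid would tile such views over azimuth and elevation (e.g., tangent centers every 45° in longitude at several latitudes); a grid of this kind is what TanDiT's unified model generates jointly before the views are assembled into a panorama.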
📝 Abstract
Recent advances in image generation have led to remarkable improvements in synthesizing perspective images. However, these models still struggle with panoramic image generation due to unique challenges, including varying levels of geometric distortion and the requirement for seamless loop consistency. To address these issues while leveraging the strengths of existing models, we introduce TanDiT, a method that synthesizes panoramic scenes by generating grids of tangent-plane images covering the entire 360$^\circ$ view. Unlike previous methods relying on multiple diffusion branches, TanDiT utilizes a unified diffusion model trained to produce these tangent-plane images simultaneously within a single denoising iteration. Furthermore, we propose a model-agnostic post-processing step specifically designed to enhance global coherence across the generated panoramas. To accurately assess panoramic image quality, we also present two specialized metrics, TangentIS and TangentFID, and provide a comprehensive benchmark comprising captioned panoramic datasets and standardized evaluation scripts. Extensive experiments demonstrate that our method generalizes effectively beyond its training data, robustly interprets detailed and complex text prompts, and seamlessly integrates with various generative models to yield high-quality, diverse panoramic images.
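TangentIS and TangentFID are defined in the paper itself; a reasonable reading is that they are the standard Inception Score and Fréchet Inception Distance computed over tangent-plane projections rather than the distortion-heavy equirectangular image. Purely as a hedged sketch of the FID half of that idea (the feature extractor, the tangent-view sampling, and the `frechet_distance` name are assumptions, not the paper's protocol):

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feats_real, feats_fake):
    """Frechet distance between two sets of feature vectors (N, D),
    e.g. Inception activations of tangent-plane views sampled with a
    grid like the one above. Assumed form of a TangentFID-style
    metric; see the paper for the actual definition."""
    mu1, mu2 = feats_real.mean(0), feats_fake.mean(0)
    c1 = np.cov(feats_real, rowvar=False)
    c2 = np.cov(feats_fake, rowvar=False)
    covmean = sqrtm(c1 @ c2)
    if np.iscomplexobj(covmean):   # sqrtm can return tiny imaginary
        covmean = covmean.real     # parts from numerical noise
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(c1 + c2 - 2 * covmean))
```

Evaluating on tangent views rather than on the equirectangular image avoids penalizing the strong polar stretching that equirectangular projection introduces, which is presumably why panorama-specific variants of IS and FID are needed at all.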