🤖 AI Summary
360° panoramic video generation suffers from projection distortion and spatiotemporal inconsistency, primarily because conventional video diffusion models are incompatible with spherical geometry. To address this, we formulate generation as a geometrically aware adaptation from perspective views to panoramic views. We apply LoRA to panoramic video generation for the first time and theoretically prove that, when its rank exceeds the task's intrinsic degrees of freedom, it can efficiently capture the required view mapping. We fine-tune a pre-trained video diffusion model with LoRA under explicit spherical projection constraints, achieving high-fidelity generation with only ~1,000 training samples. Experiments demonstrate state-of-the-art performance across visual quality, left-right boundary consistency, motion diversity, and spherical geometric fidelity, while significantly improving computational efficiency and generation robustness.
📝 Abstract
Generating high-quality 360° panoramic videos remains a significant challenge due to the fundamental differences between panoramic and traditional perspective-view projections. While perspective videos rely on a single viewpoint with a limited field of view, panoramic content requires rendering the full surrounding environment, making it difficult for standard video generation models to adapt. Existing solutions often introduce complex architectures or large-scale training, leading to inefficiency and suboptimal results. Motivated by the success of Low-Rank Adaptation (LoRA) in style transfer tasks, we propose treating panoramic video generation as an adaptation problem from perspective views. Through theoretical analysis, we demonstrate that LoRA can effectively model the transformation between these projections when its rank exceeds the degrees of freedom in the task. Our approach efficiently fine-tunes a pretrained video diffusion model using only approximately 1,000 videos while achieving high-quality panoramic generation. Experimental results demonstrate that our method maintains proper projection geometry and surpasses previous state-of-the-art approaches in visual quality, left-right consistency, and motion diversity.
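The low-rank adaptation at the heart of this approach can be sketched in a few lines. The snippet below is a generic LoRA layer in NumPy, not the paper's implementation: the class name, dimensions, and initialization are illustrative. It shows the standard LoRA parameterization, a frozen pretrained weight `W` plus a trainable rank-`r` update `(alpha/r) * B @ A`, which is the mechanism the rank condition above reasons about.

```python
import numpy as np

class LoRALinear:
    """Frozen base weight W plus a trainable low-rank update (alpha/r) * B @ A.

    Generic LoRA sketch (illustrative, not the paper's code): only A and B
    are trained, so the adapter has r * (d_in + d_out) parameters instead
    of the full d_in * d_out.
    """
    def __init__(self, d_in, d_out, r, alpha=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((d_out, d_in))      # pretrained weight, frozen
        self.A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
        self.B = np.zeros((d_out, r))                    # trainable up-projection, zero init
        self.scale = alpha / r

    def __call__(self, x):
        # Base path plus low-rank correction; with B zero-initialized,
        # the adapted layer initially reproduces the frozen model exactly.
        return x @ self.W.T + self.scale * (x @ self.A.T) @ self.B.T

layer = LoRALinear(d_in=8, d_out=8, r=2)
x = np.ones((1, 8))
assert np.allclose(layer(x), x @ layer.W.T)  # zero-init B: no change at start
```

The zero initialization of `B` is the standard LoRA design choice: fine-tuning starts from the pretrained model's exact behavior and only gradually learns the perspective-to-panoramic mapping through the rank-`r` bottleneck.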