🤖 AI Summary
This work addresses the challenge of converting conventional perspective images or videos into 360° panoramic views without relying on precise camera calibration, a limitation of existing methods that hinders their applicability in real-world, uncalibrated scenarios. We propose the first end-to-end framework that requires no geometric priors, leveraging a pre-trained diffusion Transformer to model both input and target equirectangular projection (ERP) panoramas as token sequences and learn their mapping in a purely data-driven manner. By introducing Circular Latent Encoding, our method effectively mitigates seam artifacts at ERP boundaries while implicitly capturing underlying geometric structure. Extensive experiments demonstrate state-of-the-art performance on both image and video 360° generation tasks—surpassing even approaches that utilize ground-truth camera parameters—and show strong zero-shot capabilities in estimating field of view and orientation.
📝 Abstract
Lifting perspective images and videos to 360° panoramas enables immersive 3D world generation. Existing approaches often rely on explicit geometric alignment between the perspective and the equirectangular projection (ERP) space. Yet, this requires known camera metadata, hindering application to in-the-wild data where such calibration is typically absent or noisy. We propose 360Anything, a geometry-free framework built upon pre-trained diffusion transformers. By treating the perspective input and the panorama target simply as token sequences, 360Anything learns the perspective-to-equirectangular mapping in a purely data-driven way, eliminating the need for camera information. Our approach achieves state-of-the-art performance on both image and video perspective-to-360° generation, outperforming prior works that use ground-truth camera information. We also trace the root cause of seam artifacts at ERP boundaries to zero-padding in the VAE encoder, and introduce Circular Latent Encoding to facilitate seamless generation. Finally, we show competitive results on zero-shot camera FoV and orientation estimation benchmarks, demonstrating 360Anything's deep geometric understanding and broader utility in computer vision tasks. Additional results are available at https://360anything.github.io/.
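The seam issue the abstract describes arises because an ERP panorama is horizontally periodic: its left and right edges depict the same viewing direction, but zero-padded convolutions treat them as unrelated borders. The sketch below illustrates the underlying idea with NumPy's wrap-mode padding; the function name, tensor shape, and use of NumPy are illustrative assumptions for exposition, not the paper's actual implementation.

```python
import numpy as np

def circular_pad_width(latent: np.ndarray, pad: int) -> np.ndarray:
    """Pad the width (last) axis circularly, so the left and right ERP
    boundaries see each other's content instead of zeros.

    `latent` is assumed to be a (C, H, W) feature map; this is an
    illustrative sketch, not the paper's VAE code.
    """
    return np.pad(latent, ((0, 0), (0, 0), (pad, pad)), mode="wrap")

# A tiny 1x1x4 "latent" row: wrap padding copies the opposite edge's
# values, whereas zero padding would create a discontinuity at the seam.
x = np.arange(4.0).reshape(1, 1, 4)      # row is [0., 1., 2., 3.]
padded = circular_pad_width(x, pad=1)
print(padded[0, 0])                      # [3. 0. 1. 2. 3. 0.]
```

Applying such wrap padding before the encoder's convolutions makes the latent representation consistent across the 0°/360° boundary, which is the property the abstract attributes to Circular Latent Encoding.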