🤖 AI Summary
Existing text-to-video generation methods struggle to achieve fine-grained control over camera motion and orientation, primarily because they rely on relative or ambiguous trajectory representations that lack explicit geometric constraints. To address this, we propose GimbalDiffusion, a gravity-anchored absolute camera modeling framework with three components: (1) a gravity-aligned Euler-angle parameterization that encodes camera pose in an absolute world coordinate system; (2) null-pitch conditioning, a labeling strategy that decouples conflicting textual semantics from camera instructions; and (3) SpatialVID-HQ-Rebalanced, a new evaluation benchmark covering large pitch angles. Built on diffusion models, our method leverages panoramic 360° videos to synthesize diverse camera trajectories and combines them with null-pitch conditioning. Experiments demonstrate substantial improvements in camera-trajectory controllability, generation consistency, and geometric plausibility, outperforming state-of-the-art methods on SpatialVID-HQ-Rebalanced.
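To make the absolute parameterization concrete, below is a minimal sketch of extracting gravity-aligned yaw, pitch, and roll from a camera-to-world rotation matrix. The z-up world frame, the intrinsic Z-Y-X decomposition order, and the function name are assumptions for illustration; the paper's exact conventions may differ.

```python
import numpy as np

def gravity_aligned_euler(R_c2w: np.ndarray) -> tuple[float, float, float]:
    """Decompose a camera-to-world rotation into gravity-aligned Euler angles.

    Assumes a z-up world frame (gravity along -z) and the intrinsic
    Z-Y-X factorization R_c2w = Rz(yaw) @ Ry(pitch) @ Rx(roll).
    These conventions are illustrative; the paper's may differ.
    """
    # Yaw: heading about the gravity axis, measured in the horizontal plane.
    yaw = np.arctan2(R_c2w[1, 0], R_c2w[0, 0])
    # Pitch: rotation above/below the horizon; the clip guards against
    # values drifting slightly outside [-1, 1] from floating-point error.
    pitch = np.arcsin(np.clip(-R_c2w[2, 0], -1.0, 1.0))
    # Roll: residual rotation about the camera's forward axis.
    roll = np.arctan2(R_c2w[2, 1], R_c2w[2, 2])
    # Note: yaw and roll become degenerate at pitch = ±90° (gimbal lock).
    return yaw, pitch, roll
```

Because these angles are defined against gravity rather than against a previous frame, a target pose can be specified absolutely, without an initial reference frame.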
📝 Abstract
Recent progress in text-to-video generation has achieved remarkable realism, yet fine-grained control over camera motion and orientation remains elusive. Existing approaches typically encode camera trajectories through relative or ambiguous representations, limiting explicit geometric control. We introduce GimbalDiffusion, a framework that enables camera control grounded in physical-world coordinates, using gravity as a global reference. Instead of describing motion relative to previous frames, our method defines camera trajectories in an absolute coordinate system, allowing precise and interpretable control over camera parameters without requiring an initial reference frame. We leverage panoramic 360-degree videos to construct a wide variety of camera trajectories, well beyond the predominantly straight, forward-facing trajectories seen in conventional video data. To further enhance camera guidance, we introduce null-pitch conditioning, an annotation strategy that reduces the model's reliance on text content when it conflicts with camera specifications (e.g., generating grass while the camera points towards the sky). Finally, we establish a benchmark for camera-aware video generation by rebalancing SpatialVID-HQ for comprehensive evaluation under wide camera pitch variation. Together, these contributions advance the controllability and robustness of text-to-video models, enabling precise, gravity-aligned camera manipulation within generative frameworks.
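The abstract does not spell out how null-pitch conditioning is implemented. One plausible reading, sketched below, treats it as condition dropout: during training, the pitch condition is occasionally replaced by a learned null token, so the model cannot fall back on text semantics to infer camera orientation when an explicit pitch is given. The module name, the dropout probability, and the overall structure are illustrative assumptions, not the paper's code.

```python
import torch
import torch.nn as nn

class NullPitchConditioner(nn.Module):
    """Illustrative sketch of null-pitch conditioning as condition dropout.

    With probability `null_prob`, a sample's pitch embedding is replaced by
    a learned null token during training, pushing the diffusion backbone to
    follow explicit pitch instructions rather than inferring orientation
    from the text prompt. One plausible reading of the abstract, not the
    paper's actual implementation.
    """

    def __init__(self, dim: int = 256, null_prob: float = 0.1):
        super().__init__()
        self.pitch_proj = nn.Linear(1, dim)               # embed scalar pitch (radians)
        self.null_token = nn.Parameter(torch.zeros(dim))  # learned "no pitch" embedding
        self.null_prob = null_prob                        # assumed dropout rate

    def forward(self, pitch: torch.Tensor) -> torch.Tensor:
        # pitch: (batch, 1) gravity-aligned pitch angles in radians.
        emb = self.pitch_proj(pitch)
        if self.training:
            # Per-sample mask: True where the pitch condition is nulled out.
            drop = torch.rand(pitch.shape[0], device=pitch.device) < self.null_prob
            emb = torch.where(drop.unsqueeze(1), self.null_token, emb)
        return emb
```

At inference, supplying the null token in place of a pitch embedding would recover text-driven behavior, analogous to classifier-free guidance applied to the camera condition.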