🤖 AI Summary
This work addresses the vulnerability of single-frame matching to occlusion and viewpoint variation in UAV video geolocalization. To this end, we propose an end-to-end video-to-bird's-eye-view (BEV) reconstruction paradigm. Methodologically: (i) we introduce Gaussian rasterization-driven BEV reconstruction, replacing the conventional polar-coordinate transform, to obtain geometrically faithful, low-distortion BEV representations; (ii) we leverage diffusion models to synthesize hard negative samples, thereby enhancing cross-platform feature discriminability; (iii) we construct UniV, the first large-scale UAV video dataset explicitly designed for geolocalization. Experiments demonstrate that our method significantly improves recall on UniV, especially at the low 30° elevation angle and under heavy occlusion, outperforming existing video-based approaches. This establishes a novel, video-driven paradigm for BEV geolocalization.
📝 Abstract
Existing approaches to drone visual geo-localization predominantly adopt the image-based setting, where a single drone-view snapshot is matched with images from other platforms. Such a task formulation, however, underutilizes the inherent video output of the drone and is sensitive to occlusions and viewpoint disparity. To address these limitations, we formulate a new video-based drone geo-localization task and propose the Video2BEV paradigm. This paradigm transforms the video into a Bird's Eye View (BEV), simplifying the subsequent inter-platform matching process. In particular, we employ Gaussian Splatting to reconstruct a 3D scene and obtain the BEV projection. Different from existing transform methods, e.g., the polar transform, our BEVs preserve more fine-grained details without significant distortion. To facilitate discriminative intra-platform representation learning, our Video2BEV paradigm also incorporates a diffusion-based module for generating hard negative samples. To validate our approach, we introduce UniV, a new video-based geo-localization dataset that extends the image-based University-1652 dataset. UniV features flight paths at $30^\circ$ and $45^\circ$ elevation angles with increased frame rates of up to 10 frames per second (FPS). Extensive experiments on the UniV dataset show that our Video2BEV paradigm achieves competitive recall rates and outperforms conventional video-based methods. Compared to other competitive methods, our proposed approach exhibits robustness at lower elevations with more occlusions.
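To illustrate the core idea behind the paradigm, the sketch below contrasts a top-down (orthographic) BEV projection with single-image warping: a 3D reconstruction is flattened onto a ground-plane grid, so the geometry of each map cell is preserved rather than distorted. This is a deliberately minimal toy, not the paper's implementation: the "reconstruction" is a list of labeled 3D points standing in for Gaussian Splatting primitives, and `bev_project` is a hypothetical helper for this example.

```python
# Toy illustration (not the paper's implementation): Video2BEV renders a
# reconstructed 3D scene top-down instead of warping one image with a
# polar transform. Here the scene is just labeled 3D points; a real system
# would fit Gaussian Splatting primitives from the drone video.

def bev_project(points, grid_size, cell):
    """Orthographic top-down projection: keep the highest point per cell."""
    bev = [[None] * grid_size for _ in range(grid_size)]
    height = [[float("-inf")] * grid_size for _ in range(grid_size)]
    for x, y, z, label in points:
        i, j = int(x // cell), int(y // cell)
        if 0 <= i < grid_size and 0 <= j < grid_size and z > height[i][j]:
            height[i][j] = z
            bev[i][j] = label
    return bev

points = [
    (0.5, 0.5, 2.0, "roof"),  # tall structure dominates its cell
    (0.6, 0.4, 0.0, "road"),  # hidden from above by the roof
    (1.5, 0.5, 0.0, "road"),
]
bev = bev_project(points, grid_size=2, cell=1.0)
print(bev[0][0])  # roof
print(bev[1][0])  # road
```

The per-cell max-height rule mimics what an overhead render sees: occluders win their cell, which is exactly why an aggregated BEV is more stable under viewpoint change than any single oblique frame.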