🤖 AI Summary
This work addresses geometric distortion, large viewpoint discrepancies, and fine-detail degradation in cross-view synthesis from satellite to street-level imagery. We propose a dual-branch hybrid framework that integrates Stable Diffusion with conditional GANs. Methodologically, it incorporates multi-scale feature alignment, cross-domain attention mechanisms, and multi-stage adversarial training to jointly optimize geometric consistency and visual realism. Our key contribution lies in synergistically combining the semantic generation capability of diffusion models with the local detail modeling strength of GANs, thereby significantly improving the reconstruction fidelity of fine structures such as lane markings and secondary roads. Evaluated on the CVUSA dataset, our approach outperforms diffusion-only baselines and matches the performance of state-of-the-art GAN-based methods, achieving, for the first time, controllable generation of geographically consistent, high-fidelity panoramic street-level images.
📝 Abstract
Street view imagery has become an essential source for geospatial data collection and urban analytics, enabling the extraction of valuable insights that support informed decision-making. However, synthesizing street-view images from corresponding satellite imagery remains challenging because the two domains differ substantially in appearance and viewing perspective. This paper presents a hybrid framework that integrates diffusion-based models with conditional generative adversarial networks to generate geographically consistent street-view images from satellite imagery. Our approach uses a multi-stage training strategy and incorporates Stable Diffusion as the core component of a dual-branch architecture. Alongside the diffusion branch, we integrate a conditional Generative Adversarial Network (GAN) that enables the generation of geographically consistent panoramic street views. Furthermore, we implement a fusion strategy that combines the strengths of both models into robust joint representations, thereby improving the geometric consistency and visual quality of the generated street-view images. The proposed framework is evaluated on the challenging Cross-View USA (CVUSA) dataset, a standard benchmark for cross-view image synthesis. Experimental results demonstrate that our hybrid approach outperforms diffusion-only methods across multiple evaluation metrics and achieves competitive performance compared to state-of-the-art GAN-based methods. The framework successfully generates realistic and geometrically consistent street-view images while preserving fine-grained local details, including street markings, secondary roads, and atmospheric elements such as clouds.
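The abstract does not specify the fusion strategy at code level. As a purely illustrative sketch (not the authors' implementation), one common way to merge a diffusion-branch feature map with a GAN-branch feature map is a learned per-channel gate; the function name, gating form, and tensor shapes below are all assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(f_diff, f_gan, gate_logits):
    """Hypothetical dual-branch fusion: blend diffusion-branch features
    (f_diff) and GAN-branch features (f_gan), both shaped (C, H, W),
    with a learned per-channel gate in (0, 1)."""
    gate = sigmoid(gate_logits)                       # shape (C,)
    gate = gate[:, None, None]                        # broadcast over H, W
    return gate * f_diff + (1.0 - gate) * f_gan

# Toy example: 4 channels over an 8x8 spatial grid.
rng = np.random.default_rng(0)
f_diff = rng.standard_normal((4, 8, 8))
f_gan = rng.standard_normal((4, 8, 8))
gate_logits = np.zeros(4)                             # zero logits -> 50/50 blend
fused = gated_fusion(f_diff, f_gan, gate_logits)
```

In a real system the gate logits would be trained end-to-end (and would typically depend on the features themselves); the fixed per-channel weights here only serve to make the blending mechanics concrete.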