DiffPlace: Street View Generation via Place-Controllable Diffusion Model Enhancing Place Recognition

📅 2026-02-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing generative models struggle to produce geographically consistent and contextually coherent street-view images conditioned on text, bird’s-eye-view maps, and object bounding boxes, limiting their utility in visual place recognition. This work proposes DiffPlace, a novel framework that, for the first time, integrates place IDs into a diffusion model to enable location-controllable generation of multi-view street scenes. By leveraging a Perceiver Transformer and contrastive learning, DiffPlace maps discrete place identities into the CLIP embedding space, preserving background architectural consistency while allowing flexible control over foreground objects and weather conditions. Experimental results demonstrate that DiffPlace outperforms current methods in both image generation quality and its capacity to support place recognition training, significantly enhancing visual place recognition performance in autonomous driving scenarios.
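To make the described controller concrete, below is a minimal PyTorch sketch of a place-ID controller of this kind: a learnable embedding per discrete place ID, a linear projection, and a small Perceiver-style cross-attention resampler that outputs tokens in the CLIP embedding dimension. Module names, token counts, and dimensions (e.g. clip_dim=768) are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of a place-ID controller: place ID -> embedding -> linear
# projection -> Perceiver-style cross-attention -> tokens in CLIP space.
# All sizes and module choices are assumptions for illustration.
import torch
import torch.nn as nn

class PlaceIDController(nn.Module):
    def __init__(self, num_places, id_dim=512, clip_dim=768, num_tokens=8, num_layers=2):
        super().__init__()
        # Discrete place IDs mapped to learnable embeddings
        self.place_embed = nn.Embedding(num_places, id_dim)
        # Linear projection into the CLIP embedding dimension
        self.proj = nn.Linear(id_dim, clip_dim)
        # Learnable latent queries, refined by cross-attending to the place embedding
        self.latents = nn.Parameter(torch.randn(num_tokens, clip_dim) * 0.02)
        self.blocks = nn.ModuleList([
            nn.MultiheadAttention(clip_dim, num_heads=8, batch_first=True)
            for _ in range(num_layers)
        ])
        self.norm = nn.LayerNorm(clip_dim)

    def forward(self, place_ids):                       # place_ids: (B,)
        x = self.proj(self.place_embed(place_ids))      # (B, clip_dim)
        x = x.unsqueeze(1)                               # (B, 1, clip_dim) as key/value
        q = self.latents.unsqueeze(0).expand(x.size(0), -1, -1)  # (B, num_tokens, clip_dim)
        for attn in self.blocks:
            out, _ = attn(q, x, x)                       # cross-attend latents to place embedding
            q = q + out
        return self.norm(q)                              # place tokens in CLIP space
```

The resulting place tokens could then be injected into the diffusion model's cross-attention layers alongside text and layout conditions; the exact conditioning interface is described in the paper, not in this sketch.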

📝 Abstract
Generative models have advanced significantly in realistic image synthesis, with diffusion models excelling in quality and stability. Recent multi-view diffusion models improve 3D-aware street view generation, but they struggle to produce place-aware and background-consistent urban scenes from text, BEV maps, and object bounding boxes. This limits their effectiveness in generating realistic samples for place recognition tasks. To address these challenges, we propose DiffPlace, a novel framework that introduces a place-ID controller to enable place-controllable multi-view image generation. The place-ID controller employs a linear projection, a Perceiver Transformer, and contrastive learning to map place-ID embeddings into a fixed CLIP space, allowing the model to synthesize images with consistent background buildings while flexibly modifying foreground objects and weather conditions. Extensive experiments, including quantitative comparisons and augmented training evaluations, demonstrate that DiffPlace outperforms existing methods in both generation quality and training support for visual place recognition. Our results highlight the potential of generative models in enhancing scene-level and place-aware synthesis, providing a valuable approach for improving place recognition in autonomous driving.
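The contrastive step described in the abstract can be illustrated with a symmetric InfoNCE-style loss that pulls a place-ID embedding toward the CLIP embedding of a view from the same place and pushes it away from other places. The exact objective, pairing scheme, and temperature below are assumptions for illustration, not the paper's stated loss.

```python
# Hedged sketch of contrastive alignment between place-ID embeddings and
# CLIP embeddings (symmetric InfoNCE). Loss form and temperature are assumptions.
import torch
import torch.nn.functional as F

def place_clip_contrastive_loss(place_emb, clip_emb, temperature=0.07):
    """place_emb, clip_emb: (B, D); row i of each is assumed to come from the same place."""
    place_emb = F.normalize(place_emb, dim=-1)
    clip_emb = F.normalize(clip_emb, dim=-1)
    logits = place_emb @ clip_emb.t() / temperature      # (B, B) similarity matrix
    targets = torch.arange(place_emb.size(0), device=place_emb.device)
    # Symmetric cross-entropy: place -> CLIP and CLIP -> place
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```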
Problem

Research questions and friction points this paper is trying to address.

place recognition
street view generation
diffusion model
background consistency
place-aware synthesis
Innovation

Methods, ideas, or system contributions that make the work stand out.

place-controllable diffusion
street view generation
place recognition
multi-view synthesis
contrastive learning
Authors
Ji Li (Principal Group Science Manager at Microsoft; AICAD)
Zhiwei Li
Shihao Li
Zhenjiang Yu
Boyang Wang
Haiou Liu