🤖 AI Summary
AI-driven video generation faces a fundamental bottleneck: misalignment between scene geometry and character motion, which leads to poor temporal coherence and limited directorial control. To address this, we propose the first controllable video synthesis framework explicitly driven by real-world street-level geographic data (e.g., OpenStreetMap and Mapillary), integrating on-location scouting and rehearsal-style interaction into the generative pipeline: map-based location selection, actor and camera placement, motion path sketching, and fine-grained lens parameter adjustment. Technically, our approach unifies Unity's 3D engine, ComfyUI's visual workflow interface, and the VACE video generation model to encode street-scene geometric priors and physically grounded motion constraints. In an evaluation with 12 professional filmmakers, our method significantly outperforms image-to-video baselines in spatial accuracy and scene-reconstruction fidelity, while reducing cognitive load and enhancing creative controllability.
📝 Abstract
AI video generation has lowered barriers to video creation, but current tools still struggle to maintain consistency: filmmakers often find that generated clips fail to keep characters and backgrounds matched across shots, making it difficult to build coherent sequences. A formative study with filmmakers highlighted challenges in shot composition, character motion, and camera control. We present Map2Video, a street-view-imagery-driven AI video generation tool grounded in real-world geography. The system integrates Unity and ComfyUI with the VACE video generation model, and draws on OpenStreetMap and Mapillary for street view imagery. Building on familiar filmmaking practices such as location scouting and rehearsal, Map2Video enables users to choose map locations, position actors and cameras in street view imagery, sketch movement paths, refine camera motion, and generate spatially consistent videos. We evaluated Map2Video with 12 filmmakers. Compared to an image-to-video baseline, it achieved higher spatial accuracy, required less cognitive effort, and offered stronger controllability for both scene replication and open-ended creative exploration.
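To make the Mapillary integration concrete, here is a minimal sketch of how a tool in this vein might look up street-level imagery around a user-chosen map location. It builds a bounding box around a latitude/longitude point and assembles a query URL for the Mapillary Graph API (v4) `images` endpoint; the specific field list and radius are illustrative assumptions, not the paper's actual implementation.

```python
import math
from urllib.parse import urlencode

# Mapillary Graph API v4 endpoint for image search (public API; requires a token).
MAPILLARY_IMAGES_ENDPOINT = "https://graph.mapillary.com/images"


def bbox_around(lat: float, lon: float, radius_m: float) -> tuple:
    """Axis-aligned bounding box (min_lon, min_lat, max_lon, max_lat) around a
    point, using an equirectangular approximation that is adequate for
    street-scale radii (tens to hundreds of meters)."""
    dlat = radius_m / 111_320.0  # ~meters per degree of latitude
    dlon = radius_m / (111_320.0 * math.cos(math.radians(lat)))
    return (lon - dlon, lat - dlat, lon + dlon, lat + dlat)


def imagery_query_url(lat: float, lon: float, radius_m: float, token: str) -> str:
    """Build a GET URL requesting image IDs and thumbnails inside the bbox.
    Mapillary's 'bbox' parameter expects min_lon,min_lat,max_lon,max_lat."""
    bbox = bbox_around(lat, lon, radius_m)
    params = {
        "access_token": token,
        "fields": "id,thumb_2048_url,computed_geometry",  # assumed field subset
        "bbox": ",".join(f"{v:.6f}" for v in bbox),
        "limit": 20,
    }
    return f"{MAPILLARY_IMAGES_ENDPOINT}?{urlencode(params)}"
```

A scouting UI could feed each returned thumbnail into the placement view, letting the user pick the street view frame that anchors actor and camera positions.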