🤖 AI Summary
To address the challenge of achieving high-precision, real-time contextual awareness on mobile devices, this paper proposes an Augmented Virtual Environment (AVE) construction method that fuses smartphone-captured imagery with open-source geospatial data. Our approach dynamically registers 2D smartphone images with heterogeneous geospatial sources—including OpenStreetMap (OSM) vector data and Digital Terrain Models (DTMs)—to correct projective distortions and enable dynamic scene modeling. Data preprocessing and geometric calibration are implemented in Python, while lightweight, immersive 3D visualization is realized using the Unity engine. A low-latency, bidirectional communication architecture is established via UDP. Experimental results demonstrate accurate reconstruction of real-world geographic scenes; preliminary user evaluation confirms a significant improvement in contextual understanding (+37.2%). This work establishes a novel, cost-effective, and scalable paradigm for mobile augmented perception.
📝 Abstract
This paper presents the development of an interactive system for constructing Augmented Virtual Environments (AVEs) by fusing mobile phone images with open-source geospatial data. By integrating 2D image data with 3D models derived from sources such as OpenStreetMap (OSM) and Digital Terrain Models (DTMs), the proposed system generates immersive environments that enhance situational context. The system leverages Python for data processing and Unity for 3D visualization, interconnected via UDP-based bidirectional communication. Preliminary user evaluation demonstrates that the resulting AVEs accurately represent real-world scenes and improve users' contextual understanding. Key challenges addressed include projector calibration, precise model construction from heterogeneous data, and object detection for dynamic scene representation.
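The Python↔Unity link described above can be sketched as a pair of UDP endpoints exchanging JSON datagrams. This is a minimal illustration only: the paper does not specify its ports, message schema, or serialization, so the port numbers, the JSON payloads, and the helper names below are all assumptions.

```python
import json
import socket

# Hypothetical ports -- the paper does not state which ports or message
# format its Python/Unity link uses.
PY_PORT = 5005     # Python side listens here (messages from Unity)
UNITY_PORT = 5006  # Unity side would listen here (messages from Python)

def make_socket(port):
    """Create a UDP socket bound to localhost on the given port."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", port))
    sock.settimeout(1.0)  # avoid blocking forever if a datagram is lost
    return sock

def send_update(sock, payload, port):
    """Serialize one scene update as JSON and send it as a single datagram."""
    sock.sendto(json.dumps(payload).encode("utf-8"), ("127.0.0.1", port))

def receive_update(sock):
    """Receive one datagram and decode it back into a dict."""
    data, _addr = sock.recvfrom(65507)  # max safe UDP payload size
    return json.loads(data.decode("utf-8"))

if __name__ == "__main__":
    py_sock = make_socket(PY_PORT)
    unity_sock = make_socket(UNITY_PORT)  # stands in for the Unity endpoint

    # Python -> "Unity": e.g. a detected object with geographic coordinates.
    send_update(py_sock, {"type": "object", "lat": 48.2082, "lon": 16.3738},
                UNITY_PORT)
    print(receive_update(unity_sock)["type"])  # object

    # "Unity" -> Python: e.g. the user's current viewpoint for re-rendering.
    send_update(unity_sock, {"type": "viewpoint", "heading": 90.0}, PY_PORT)
    print(receive_update(py_sock)["heading"])  # 90.0
```

In a real deployment the two endpoints would live in separate processes (a Unity C# script on one side), but the datagram exchange pattern is the same; UDP keeps latency low at the cost of delivery guarantees, which suits frequent, overwritable scene updates.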