🤖 AI Summary
Indoor 3D mapping has long suffered from data scarcity due to the high cost and low efficiency of manual acquisition. To address this, we propose a human-in-the-loop, lightweight 3D indoor modeling framework: it employs consumer-grade drones equipped with visual SLAM and multi-view stereo for rapid dense geometric reconstruction; integrates a lightweight interactive interface that guides users in annotating key points of interest (POIs); and incorporates deep learning–based object detection to improve semantic recognition accuracy. The method significantly lowers the barrier and cost of indoor mapping, producing high-fidelity, geometry- and semantics-integrated 3D maps across 12 diverse indoor and semi-indoor scenes. User studies confirm that the resulting maps effectively support spatial management and navigation applications. Our core contribution lies in the synergistic coupling of AI-driven automatic reconstruction with human cognitive strengths, particularly semantic reasoning, enabling efficient, accurate, and scalable indoor digital twin generation.
📝 Abstract
Indoor mapping data is crucial for routing, navigation, and building management, yet such data are widely lacking due to the manual labor and expense of data collection, especially for larger indoor spaces. Leveraging recent advancements in commodity drones and photogrammetry, we introduce FlyMeThrough -- a drone-based indoor scanning system that efficiently produces 3D reconstructions of indoor spaces with human-AI collaborative annotations for key indoor points of interest (POIs) such as entrances, restrooms, stairs, and elevators. We evaluated FlyMeThrough in 12 indoor spaces of varying size and function. To investigate use cases and solicit feedback from target stakeholders, we also conducted a qualitative user study with five building managers and five occupants. Our findings indicate that FlyMeThrough can efficiently and precisely create indoor 3D maps for strategic space planning, resource management, and navigation.