IndoorBEV: Joint Detection and Footprint Completion of Objects via Mask-based Prediction in Indoor Scenarios for Bird's-Eye View Perception

📅 2025-07-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional 3D bounding-box detectors suffer from degraded performance in complex indoor point clouds due to coexisting irregularly shaped objects, mixed static and dynamic elements, and severe occlusions. To address this, we propose a mask-based bird’s-eye-view (BEV) joint perception method. Our approach introduces a mask-centric prediction framework that unifies the modeling of ground-level contours for both static and dynamic objects—overcoming geometric modeling limitations inherent to axis-aligned bounding boxes. We employ an axially compact encoder coupled with a windowed backbone network to extract efficient BEV features, and leverage a query-based decoder to generate parallel category- and instance-level masks. Evaluated on a custom multi-class indoor point cloud dataset, our method significantly improves contour completeness and detection robustness in cluttered scenes, producing high-precision BEV masks directly usable for robot navigation and path planning.

📝 Abstract
Detecting diverse objects within complex indoor 3D point clouds presents significant challenges for robotic perception, particularly with varied object shapes, clutter, and the co-existence of static and dynamic elements, where traditional bounding-box methods falter. To address these limitations, we propose IndoorBEV, a novel mask-based Bird's-Eye View (BEV) method for indoor mobile robots. In a BEV method, the 3D scene is projected onto a 2D BEV grid, which naturally handles occlusions and provides a consistent top-down view that aids in distinguishing static obstacles from dynamic agents. The resulting 2D BEV output is directly usable by downstream robotic tasks such as navigation, motion prediction, and planning. Our architecture utilizes an axis-compact encoder and a window-based backbone to extract rich spatial features from this BEV map. A query-based decoder head then employs learned object queries to concurrently predict object classes and instance masks in BEV space. This mask-centric formulation effectively captures the footprints of both static and dynamic objects regardless of shape, offering a robust alternative to bounding-box regression. We demonstrate the effectiveness of IndoorBEV on a custom indoor dataset featuring diverse object classes, including static objects and dynamic elements like robots and miscellaneous items, showcasing its potential for robust indoor scene understanding.
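The BEV projection the abstract describes, mapping a 3D point cloud onto a 2D top-down grid, can be sketched minimally as below. The grid extents, resolution, and height range are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def points_to_bev(points, x_range=(-5.0, 5.0), y_range=(-5.0, 5.0),
                  z_range=(0.0, 2.0), resolution=0.05):
    """Project a 3D point cloud of shape (N, 3) onto a 2D BEV occupancy grid.

    Points outside the x/y/z ranges are discarded; each remaining point
    marks its (row, col) cell as occupied. All range and resolution
    values are hypothetical defaults for illustration.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    keep = ((x >= x_range[0]) & (x < x_range[1]) &
            (y >= y_range[0]) & (y < y_range[1]) &
            (z >= z_range[0]) & (z < z_range[1]))
    x, y = x[keep], y[keep]
    h = int((y_range[1] - y_range[0]) / resolution)
    w = int((x_range[1] - x_range[0]) / resolution)
    grid = np.zeros((h, w), dtype=np.float32)
    rows = ((y - y_range[0]) / resolution).astype(int)
    cols = ((x - x_range[0]) / resolution).astype(int)
    grid[rows, cols] = 1.0  # binary occupancy; a real encoder would pool features
    return grid
```

In practice the encoder would aggregate per-cell point features rather than binary occupancy, but the coordinate-to-cell mapping is the same.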
Problem

Research questions and friction points this paper is trying to address.

Detecting diverse objects in complex indoor 3D point clouds
Handling varied object shapes and clutter in indoor scenes
Distinguishing static obstacles from dynamic agents effectively
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mask-based BEV method for indoor robots
Axis compact encoder for spatial features
Query-based decoder for object detection
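As a rough illustration of the query-based mask formulation listed above, the sketch below dots learned query embeddings against each BEV cell's feature to produce per-query soft masks, alongside a linear class head. All dimensions, the sigmoid/softmax choices, and the class projection are hypothetical, not the paper's actual decoder:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def query_mask_head(bev_feats, queries, w_cls):
    """Predict per-query class distributions and BEV instance masks.

    bev_feats: (C, H, W) BEV feature map from the backbone.
    queries:   (Q, C) learned object query embeddings.
    w_cls:     (C, K + 1) class projection (+1 for a 'no object' slot).
    Each query yields a class distribution and a soft mask obtained by
    taking the similarity between the query and every BEV cell.
    """
    C, H, W = bev_feats.shape
    cls_probs = softmax(queries @ w_cls, axis=-1)          # (Q, K+1)
    mask_logits = queries @ bev_feats.reshape(C, H * W)    # (Q, H*W)
    masks = 1.0 / (1.0 + np.exp(-mask_logits))             # sigmoid per cell
    return cls_probs, masks.reshape(-1, H, W)
```

Because every query attends to the full BEV map, masks of arbitrary footprint shape fall out directly, which is the advantage over box regression that the summary highlights.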
Haichuan Li
University of Bristol
Robot Physical Interaction, Robot Sensing

Changda Tian
Institute of Computer Science, Foundation for Research and Technology–Hellas, Greece

Panos Trahanias
Institute of Computer Science, Foundation for Research and Technology–Hellas, Greece

Tomi Westerlund
Professor, University of Turku, Finland - DIWA Flagship (https://digitalwaters.fi/)
Internet of Things, UAV, UGV, USV, Robots