AI Summary
To address resource constraints, severe occlusion, and limited wide-area coverage in multi-UAV cooperative 3D detection, this paper proposes AdaBEV, a novel framework for efficient and robust bird's-eye-view (BEV) representation learning. First, a box-guided refinement module adaptively focuses on foreground instance regions. Second, an instance-background contrastive learning mechanism enforces discriminative feature separation directly in BEV space. Third, lightweight BEV optimization, integrated with 2D supervision and spatial subdivision, generates instance-aware BEV representations from low-resolution inputs. By departing from conventional uniform-grid BEV modeling, AdaBEV significantly enhances occlusion robustness and large-scale scene perception. On the Air-Co-Pred benchmark, AdaBEV achieves state-of-the-art accuracy with substantially lower computational overhead, approaching the performance upper bound of high-resolution methods.
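The box-guided refinement idea above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes axis-aligned instance boxes already projected into BEV grid coordinates (the function name, box format, and `subdiv` factor are all illustrative), marks only the coarse cells covered by those boxes as foreground, and counts how many finer cells a subdivision of the foreground region would produce while background cells stay at coarse resolution.

```python
import numpy as np

def refine_foreground_grids(bev_feat, boxes_bev, subdiv=2):
    """Illustrative box-guided refinement (hypothetical helper, not AdaBEV's code).

    bev_feat:  (H, W, C) coarse BEV feature map.
    boxes_bev: iterable of axis-aligned (x0, y0, x1, y1) boxes in grid units.
    subdiv:    each foreground cell is split into subdiv x subdiv finer cells.
    Returns a boolean foreground mask and the refined foreground cell count.
    """
    H, W, _ = bev_feat.shape
    fg_mask = np.zeros((H, W), dtype=bool)
    for x0, y0, x1, y1 in boxes_bev:
        # Mark every coarse cell overlapped by the projected instance box.
        i0, i1 = int(np.floor(y0)), int(np.ceil(y1))
        j0, j1 = int(np.floor(x0)), int(np.ceil(x1))
        fg_mask[max(i0, 0):min(i1, H), max(j0, 0):min(j1, W)] = True
    # Only foreground cells are subdivided; background keeps coarse resolution,
    # which is where the computational savings would come from.
    refined_cells = int(fg_mask.sum()) * subdiv * subdiv
    return fg_mask, refined_cells
```

Under this sketch, the cost of refinement scales with the number of instance-covered cells rather than the full BEV grid, which mirrors the accuracy-computation trade-off the summary describes.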
Abstract
Multi-UAV collaborative 3D detection enables accurate and robust perception by fusing multi-view observations from aerial platforms, offering significant advantages in coverage and occlusion handling while posing new computational challenges on resource-constrained UAV platforms. In this paper, we present AdaBEV, a novel framework that learns adaptive instance-aware BEV representations through a refine-and-contrast paradigm. Unlike existing methods that treat all BEV grids equally, AdaBEV introduces a Box-Guided Refinement Module (BG-RM) and an Instance-Background Contrastive Learning (IBCL) mechanism to enhance semantic awareness and feature discriminability. BG-RM refines only the BEV grids associated with foreground instances using 2D supervision and spatial subdivision, while IBCL promotes stronger separation between foreground and background features via contrastive learning in BEV space. Extensive experiments on the Air-Co-Pred dataset demonstrate that AdaBEV achieves superior accuracy-computation trade-offs across model scales, outperforming other state-of-the-art methods at low resolutions and approaching upper-bound performance while maintaining low-resolution BEV inputs and negligible overhead.
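The instance-background contrast described above can be illustrated with a toy loss. This is a hedged sketch, not the paper's exact objective: it assumes a foreground mask over the BEV grid, builds normalized foreground and background prototypes, and scores each foreground cell with an InfoNCE-style term that treats the foreground prototype as the positive and the background prototype as the sole negative (the function name and temperature `tau` are illustrative).

```python
import numpy as np

def instance_background_contrastive_loss(feats, fg_mask, tau=0.1):
    """Toy instance-background contrast in BEV space (illustrative, not AdaBEV's loss).

    feats:   (H, W, C) BEV features.
    fg_mask: (H, W) boolean foreground mask (e.g. from box-guided refinement).
    """
    f = feats.reshape(-1, feats.shape[-1])
    m = fg_mask.reshape(-1)
    # L2-normalize so similarities are cosine similarities.
    f = f / (np.linalg.norm(f, axis=1, keepdims=True) + 1e-8)
    fg_proto = f[m].mean(axis=0)
    bg_proto = f[~m].mean(axis=0)
    # Each foreground cell: positive = foreground prototype,
    # negative = background prototype; softplus(neg - pos) is the
    # two-class InfoNCE term, which shrinks as the classes separate.
    pos = f[m] @ fg_proto / tau
    neg = f[m] @ bg_proto / tau
    return float(np.mean(np.log1p(np.exp(neg - pos))))
```

With well-separated foreground and background features the loss approaches zero, so minimizing it pushes the two feature populations apart in BEV space, which is the discriminability effect IBCL is described as providing.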