🤖 AI Summary
LiDAR system design has long relied on manual trial and error, struggling to jointly satisfy task-specific requirements and hardware constraints. This paper introduces the first task-driven, fully automated LiDAR architecture search framework. It models system configurations as an implicit density over a continuous six-dimensional design space and combines flow-based generative modeling with expectation-maximization (EM) optimization, yielding an architecture search that is differentiable, interpretable, and compatible with hard constraints. The method unifies implicit density learning, parametric sensor modeling, and continuous-space representation. Evaluated on real-world 3D vision tasks, including face scanning, robotic tracking, and object detection, the framework automatically synthesizes high-performance configurations, achieving over an order-of-magnitude speedup in design time while matching or exceeding the performance of expert-tuned solutions.
📝 Abstract
Imaging system design is a complex, time-consuming, and largely manual process; LiDAR design adds further complexity through the unique spatial and temporal sampling requirements of LiDAR sensors, which are ubiquitous in mobile devices, autonomous vehicles, and aerial imaging platforms. In this work, we propose a framework for automated, task-driven LiDAR system design under arbitrary constraints. To achieve this, we represent LiDAR configurations in a continuous six-dimensional design space and learn task-specific implicit densities in this space via flow-based generative modeling. We then synthesize new LiDAR systems by modeling sensors as parametric distributions in this 6D space and fitting these distributions to the learned implicit density using expectation-maximization, enabling efficient, constraint-aware LiDAR system design. We validate our method on diverse tasks in 3D vision, enabling automated LiDAR system design across real-world-inspired applications in face scanning, robotic tracking, and object detection.
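To make the EM fitting stage concrete, below is a minimal, illustrative sketch (not the paper's implementation): samples from the learned implicit density, which the paper would draw from a trained normalizing flow over the 6D design space, are stood in for by a synthetic bimodal 2D sample set, and the parametric sensor distributions are assumed here to be a diagonal-covariance Gaussian mixture fit by EM. The function name `em_fit_gmm`, the toy dimensionality, and the Gaussian parameterization are all assumptions for illustration; the paper's actual sensor parameterization and constraint handling are omitted.

```python
import numpy as np

def em_fit_gmm(samples, n_components=2, n_iters=50):
    """Fit a diagonal-covariance Gaussian mixture to design-space samples
    via expectation-maximization. Illustrative stand-in for fitting
    parametric sensor distributions to a learned implicit density."""
    n, d = samples.shape
    # Farthest-point initialization of the component means (deterministic).
    means = np.empty((n_components, d))
    means[0] = samples[0]
    for k in range(1, n_components):
        dists = np.min(((samples[:, None] - means[None, :k]) ** 2).sum(-1), axis=1)
        means[k] = samples[np.argmax(dists)]
    variances = np.ones((n_components, d))
    weights = np.full(n_components, 1.0 / n_components)
    for _ in range(n_iters):
        # E-step: per-sample component responsibilities (posterior probs).
        log_p = (
            -0.5 * np.sum((samples[:, None] - means) ** 2 / variances, axis=2)
            - 0.5 * np.sum(np.log(2 * np.pi * variances), axis=1)
            + np.log(weights)
        )
        log_p -= log_p.max(axis=1, keepdims=True)  # numerical stability
        resp = np.exp(log_p)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixture weights, means, and variances.
        nk = resp.sum(axis=0)
        weights = nk / n
        means = (resp.T @ samples) / nk[:, None]
        variances = (resp.T @ samples**2) / nk[:, None] - means**2 + 1e-6
    return weights, means, variances

# Synthetic stand-in for flow samples: a bimodal density in a 2D toy space.
rng = np.random.default_rng(1)
samples = np.vstack([
    rng.normal(-3.0, 0.3, size=(200, 2)),
    rng.normal(3.0, 0.3, size=(200, 2)),
])
weights, means, variances = em_fit_gmm(samples, n_components=2, n_iters=100)
```

In the paper's setting the same loop would run in the 6D configuration space, with the mixture components replaced by whatever parametric family models a physical sensor, so that each fitted component directly reads out as an interpretable LiDAR configuration.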