🤖 AI Summary
Radar angular resolution is fundamentally limited by the Rayleigh criterion, and existing deep learning approaches rely on large-scale paired radar–LiDAR datasets, which are costly to acquire and susceptible to calibration errors. To address these limitations, we propose an unsupervised radar point cloud super-resolution method that formulates angular estimation as an inverse problem and, for the first time, integrates a diffusion model conditioned on arbitrary LiDAR data as a cross-modal domain prior, eliminating the dependence on paired ground truth. The framework jointly optimizes inverse-problem solving and multimodal prior modeling without any supervised signals, enabling high-fidelity reconstruction. Experiments demonstrate that our method achieves denoising and resolution-enhancement performance comparable to fully supervised counterparts while generalizing significantly better to unseen scenes. This work establishes a new paradigm for radar perception under hardware constraints.
📝 Abstract
In industrial automation, radar is a critical sensor for machine perception. However, the angular resolution of radar is inherently limited by the Rayleigh criterion, which depends on both the radar's operating wavelength and the effective aperture of its antenna array. To overcome these hardware-imposed limitations, recent neural network-based methods have leveraged high-resolution LiDAR data, paired with radar measurements during training, to enhance radar point cloud resolution. While effective, these approaches require extensive paired datasets, which are costly to acquire and prone to calibration errors. These challenges motivate methods that can improve radar resolution without relying on paired high-resolution ground-truth data. Here, we introduce an unsupervised radar point cloud enhancement algorithm that employs an arbitrary LiDAR-guided diffusion model as a prior, without the need for paired training data. Specifically, our approach formulates radar angle estimation as an inverse problem and incorporates prior knowledge through a diffusion model carrying arbitrary LiDAR domain knowledge. Experimental results demonstrate that our method attains higher fidelity and lower noise than traditional regularization techniques. Moreover, compared to methods trained on paired data, it not only achieves comparable performance but also generalizes better. To our knowledge, this is the first approach that enhances radar point cloud output by integrating prior knowledge via a diffusion model rather than relying on paired training data. Our code is available at https://github.com/yyxr75/RadarINV.
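To illustrate the general idea of solving an inverse problem with a learned score-based prior (not the paper's actual implementation), here is a minimal NumPy sketch. We recover a signal `x` from underdetermined linear measurements `y = A x + noise` via Langevin-style updates that combine a data-fidelity gradient with a prior score. In the paper's setting the prior score would come from a LiDAR-conditioned diffusion model; here we substitute an analytic Gaussian prior so the example is self-contained. All dimensions, step sizes, and the Gaussian stand-in prior are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear inverse problem: 4 measurements of an 8-dim signal (underdetermined).
d, m = 8, 4
A = rng.standard_normal((m, d))            # forward (measurement) operator
x_true = rng.standard_normal(d) + 2.0      # ground-truth signal
sigma = 0.5                                # measurement noise std
y = A @ x_true + sigma * rng.standard_normal(m)

# Stand-in prior: N(mu, I). In the paper this score would be produced by a
# LiDAR-guided diffusion model; the analytic form keeps the sketch runnable.
mu = np.full(d, 2.0)

def prior_score(x):
    """Gradient of log p(x) for the Gaussian prior N(mu, I)."""
    return mu - x

# Langevin-style iterations: data-fidelity gradient + prior score + small noise.
x = np.zeros(d)
step = 1e-3
for _ in range(5000):
    data_grad = A.T @ (y - A @ x) / sigma**2      # gradient of log p(y | x)
    x = x + step * (data_grad + prior_score(x))
    x = x + 0.01 * np.sqrt(2 * step) * rng.standard_normal(d)

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {err:.3f}")
```

Even with only half as many measurements as unknowns, the prior regularizes the nullspace of `A` and yields a reasonable reconstruction; replacing the analytic score with a trained diffusion score is what lets the paper's method inject rich cross-modal structure without paired supervision.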