🤖 AI Summary
Millimeter-wave (mmWave) sensing remains impractical for everyday deployment due to the high costs of data acquisition and annotation. To address this, we propose the first large language model (LLM)-integrated framework for mmWave data synthesis and interpretation. Our method leverages LLMs to generate semantically rich, scene-customized synthetic radar data, tightly coupled with a physics-informed radar forward model to establish a closed-loop data flywheel, enabling automated, interpretable, and physically grounded data generation. Crucially, the synthesized data supports training downstream perception models with zero-shot generalization capability, eliminating the need for real-world annotations when adapting to novel scenarios. Experiments demonstrate substantial performance gains across key tasks, including pose estimation and activity recognition, while real-device deployment validates practical efficacy. This work bridges high-level LLM capabilities with low-level sensing, integrating them substantively into physical-layer perception systems.
📝 Abstract
Millimeter-wave (mmWave) sensing technology holds significant value for human-centric applications, yet the high costs of data acquisition and annotation limit its adoption in daily life. Concurrently, the rapid evolution of large language models (LLMs) has created opportunities for addressing complex human needs. This paper presents mmExpert, an innovative mmWave understanding framework built around a data generation flywheel that leverages LLMs to automatically synthesize mmWave radar datasets for specific application scenarios, thereby training models capable of zero-shot generalization in real-world environments. Extensive experiments demonstrate that the data synthesized by mmExpert significantly enhances the performance of downstream models and facilitates the successful deployment of large models for mmWave understanding.
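As background on the physics-informed radar forward model mentioned in the summary, the sketch below shows a minimal FMCW point-target forward model: scatterers at given ranges produce beat tones whose frequency encodes range, which a range FFT then recovers. All parameters (bandwidth, chirp duration, sampling rate) are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

# Illustrative FMCW radar parameters (assumptions, not from the paper)
c = 3e8            # speed of light (m/s)
B = 4e9            # chirp bandwidth (Hz)
Tc = 40e-6         # chirp duration (s)
fs = 10e6          # ADC sampling rate (Hz)
S = B / Tc         # chirp slope (Hz/s)

def beat_signal(ranges_m):
    """Superpose ideal beat tones for point scatterers at the given ranges."""
    n = int(Tc * fs)
    t = np.arange(n) / fs
    sig = np.zeros(n, dtype=complex)
    for r in ranges_m:
        fb = 2 * S * r / c                      # beat frequency for range r
        sig += np.exp(2j * np.pi * fb * t)      # ideal point-target return
    return sig

def estimate_range(sig):
    """Recover the dominant scatterer range from the range-FFT peak."""
    n = len(sig)
    spec = np.abs(np.fft.fft(sig))
    fb = np.argmax(spec[: n // 2]) * fs / n     # peak beat frequency
    return fb * c / (2 * S)

r_est = estimate_range(beat_signal([5.0]))      # single target at 5 m
```

A synthetic-data pipeline of the kind the abstract describes would render scene descriptions (e.g., human poses) into such scatterer sets and apply a forward model like this to produce training radar frames.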