🤖 AI Summary
To address the challenge of efficient time-series analysis on resource-constrained edge devices (e.g., smartphones, IoT endpoints), this paper proposes a hardware-aware multi-objective neural architecture querying framework, marking the first effort to reformulate neural architecture search (NAS) as an LLM-coordinated architecture querying task. The method integrates large language model–driven multimodal time-series understanding (numerical, image, and textual modalities), hardware-constrained multi-objective optimization, and end-to-end Python code generation. Evaluated on 15 benchmark datasets, the automatically generated models achieve a 37% reduction in parameter count and a 2.1× inference speedup while maintaining state-of-the-art accuracy, outperforming both hand-crafted models and mainstream NAS approaches. The framework thus unifies lightweight model design, computational efficiency, and cross-dataset generalizability under realistic edge deployment constraints.
📝 Abstract
The growing use of smartphones and IoT devices necessitates efficient time-series analysis on resource-constrained hardware, which is critical for sensing applications such as human activity recognition and air quality prediction. Recent efforts in hardware-aware neural architecture search (NAS) automate architecture discovery for specific platforms; however, none focus on general time-series analysis with edge deployment. Leveraging the problem-solving and reasoning capabilities of large language models (LLMs), we propose MONAQ, a novel framework that reformulates NAS into Multi-Objective Neural Architecture Querying tasks. MONAQ is equipped with multimodal query generation for processing multimodal time-series inputs and hardware constraints, alongside an LLM agent-based multi-objective search to achieve deployment-ready models via code generation. By integrating numerical data, time-series images, and textual descriptions, MONAQ improves an LLM's understanding of time-series data. Experiments on fifteen datasets demonstrate that MONAQ-discovered models outperform both handcrafted models and NAS baselines while being more efficient.
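The multi-objective search described above must trade accuracy against model size and latency under hardware constraints. A minimal sketch of how such candidate selection could work, via constraint filtering followed by Pareto-front extraction; all names and numbers here are hypothetical illustrations, not MONAQ's actual implementation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Candidate:
    """A generated architecture with its measured objectives (illustrative)."""
    name: str
    accuracy: float    # higher is better
    params: int        # lower is better (memory footprint)
    latency_ms: float  # lower is better (on-device inference time)

def dominates(a: Candidate, b: Candidate) -> bool:
    """True if `a` is no worse than `b` on all objectives and strictly better on one."""
    no_worse = (a.accuracy >= b.accuracy
                and a.params <= b.params
                and a.latency_ms <= b.latency_ms)
    strictly_better = (a.accuracy > b.accuracy
                       or a.params < b.params
                       or a.latency_ms < b.latency_ms)
    return no_worse and strictly_better

def pareto_front(candidates, max_params=None, max_latency_ms=None):
    """Drop candidates violating hardware constraints, then keep the non-dominated set."""
    feasible = [c for c in candidates
                if (max_params is None or c.params <= max_params)
                and (max_latency_ms is None or c.latency_ms <= max_latency_ms)]
    return [c for c in feasible
            if not any(dominates(other, c) for other in feasible if other is not c)]

# Hypothetical candidates: 'small' survives alongside 'big' because neither dominates
# the other, while 'bloated' is dominated by 'big' on every objective.
big = Candidate("big", accuracy=0.91, params=120_000, latency_ms=8.0)
small = Candidate("small", accuracy=0.89, params=60_000, latency_ms=5.0)
bloated = Candidate("bloated", accuracy=0.88, params=150_000, latency_ms=9.0)
front = pareto_front([big, small, bloated], max_params=200_000, max_latency_ms=10.0)
```

In an LLM-driven setting, the constraints (`max_params`, `max_latency_ms`) would come from the user's hardware description in the query, and the candidates from generated model code.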