MONAQ: Multi-Objective Neural Architecture Querying for Time-Series Analysis on Resource-Constrained Devices

📅 2025-05-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of efficient time-series analysis on resource-constrained edge devices (e.g., smartphones, IoT endpoints), this paper proposes a hardware-aware multi-objective neural architecture querying framework, marking the first effort to reformulate neural architecture search (NAS) as an LLM-coordinated architecture querying task. The method integrates large language model-driven multimodal time-series understanding (numerical, image, and textual modalities), hardware-constrained multi-objective optimization, and end-to-end Python code generation. Evaluated on 15 benchmark datasets, the automatically generated models achieve a 37% reduction in parameter count and a 2.1× inference speedup while maintaining state-of-the-art accuracy, outperforming both hand-crafted models and mainstream NAS approaches. The framework thus unifies model compactness, computational efficiency, and cross-dataset generalizability under realistic edge deployment constraints.
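The hardware-constrained multi-objective selection described above can be sketched as Pareto filtering over candidate models scored on accuracy, parameter count, and latency. This is a minimal illustrative sketch, not MONAQ's actual implementation; the `Candidate` fields, constraint values, and function names are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    accuracy: float    # validation accuracy (higher is better)
    params: int        # parameter count (lower is better)
    latency_ms: float  # measured inference latency (lower is better)

def feasible(c: Candidate, max_params: int, max_latency_ms: float) -> bool:
    """Hard hardware constraints: drop candidates the target device cannot run."""
    return c.params <= max_params and c.latency_ms <= max_latency_ms

def dominates(a: Candidate, b: Candidate) -> bool:
    """a Pareto-dominates b if it is no worse on every objective and strictly
    better on at least one."""
    no_worse = (a.accuracy >= b.accuracy and a.params <= b.params
                and a.latency_ms <= b.latency_ms)
    strictly_better = (a.accuracy > b.accuracy or a.params < b.params
                       or a.latency_ms < b.latency_ms)
    return no_worse and strictly_better

def pareto_front(cands, max_params, max_latency_ms):
    """Return the non-dominated feasible candidates."""
    pool = [c for c in cands if feasible(c, max_params, max_latency_ms)]
    return [c for c in pool
            if not any(dominates(o, c) for o in pool if o is not c)]
```

In this framing, an oversized model is rejected outright by the hardware constraints, while a model that is slower, larger, and less accurate than some alternative is removed by dominance, leaving only the accuracy/efficiency trade-off frontier for the search to choose from.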

📝 Abstract
The growing use of smartphones and IoT devices necessitates efficient time-series analysis on resource-constrained hardware, which is critical for sensing applications such as human activity recognition and air quality prediction. Recent efforts in hardware-aware neural architecture search (NAS) automate architecture discovery for specific platforms; however, none focus on general time-series analysis with edge deployment. Leveraging the problem-solving and reasoning capabilities of large language models (LLMs), we propose MONAQ, a novel framework that reformulates NAS into Multi-Objective Neural Architecture Querying tasks. MONAQ is equipped with multimodal query generation for processing multimodal time-series inputs and hardware constraints, alongside an LLM agent-based multi-objective search to achieve deployment-ready models via code generation. By integrating numerical data, time-series images, and textual descriptions, MONAQ improves an LLM's understanding of time-series data. Experiments on fifteen datasets demonstrate that MONAQ-discovered models outperform both handcrafted models and NAS baselines while being more efficient.
Problem

Research questions and friction points this paper is trying to address.

Efficient time-series analysis on resource-constrained devices
Lack of general time-series NAS for edge deployment
Integrating multimodal inputs for improved LLM understanding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages LLM for neural architecture querying
Multimodal query generation for time-series inputs
LLM-based multi-objective search for deployment-ready models
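The multimodal query generation step listed above can be pictured as assembling the numerical, visual, and textual views of a series, together with the hardware budget, into a single natural-language architecture query for the LLM. The sketch below is hypothetical: the function name, argument names, and prompt format are assumptions, not MONAQ's actual prompt.

```python
def build_architecture_query(task_desc: str, series_stats: dict,
                             image_caption: str, hw_constraints: dict) -> str:
    """Assemble an architecture query from three time-series modalities
    (numerical summary, image caption, task text) plus hardware constraints.
    Illustrative format only."""
    stats_line = ", ".join(f"{k}={v}" for k, v in series_stats.items())
    constraint_line = ", ".join(f"{k} <= {v}" for k, v in hw_constraints.items())
    return (
        f"Task: {task_desc}\n"
        f"Numerical summary: {stats_line}\n"
        f"Visual summary: {image_caption}\n"
        f"Design a compact model satisfying: {constraint_line}\n"
        f"Return runnable Python code for the model."
    )
```

Bundling all three modalities into one query is what lets the LLM reason jointly about signal characteristics (e.g., periodicity visible in the plot) and the deployment budget before emitting code.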