🤖 AI Summary
This work addresses two key practical bottlenecks in industrial test data analysis: (1) difficulty in capturing diverse engineering requirements and (2) insufficient system scalability. To this end, we propose a deployable AI agent inference plugin built on LangGraph. Methodologically, we introduce, for the first time, a large language model (LLM)-driven frontend for dynamic requirement understanding and task orchestration, overcoming the limitations of conventional static interfaces; we further integrate the Intelligent Engineering Assistant (IEA) architecture with a backend test-data toolchain to enable an end-to-end closed loop spanning requirement parsing, task scheduling, and code generation. Compared to the prior IEA-Plot system, our approach improves requirement parsing accuracy by 32%, increases automated analysis task coverage by 47%, and supports real-time inference at thousand-scale concurrency, significantly enhancing robustness and scalability in industrial deployment scenarios.
📝 Abstract
This paper introduces IEA-plugin, a novel AI agent-based reasoning module developed as a new front-end for the Intelligent Engineering Assistant (IEA). The primary objective of IEA-plugin is to leverage the advanced reasoning and coding capabilities of Large Language Models (LLMs) to address two critical practical challenges: capturing diverse engineering requirements and improving system scalability. Built on the LangGraph agentic programming platform, IEA-plugin is specifically tailored for industrial deployment and integration with backend test-data analytics tools. Compared to the previously developed IEA-Plot (introduced two years ago), IEA-plugin represents a significant advancement, capitalizing on recent breakthroughs in LLMs to deliver capabilities that were previously unattainable.
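The closed loop described above (requirement parsing → task scheduling → code generation) can be sketched as a staged pipeline over a shared state object, in the style of a LangGraph graph. This is a minimal illustrative sketch only: all function names, fields, and the keyword-matching logic are assumptions, not the actual IEA-plugin API; in the real system each stage would invoke an LLM and the backend test-data toolchain.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class AgentState:
    """State passed between pipeline stages (LangGraph-style shared state)."""
    request: str
    tasks: List[str] = field(default_factory=list)
    code: str = ""


def parse_requirement(state: AgentState) -> AgentState:
    # Stage 1: turn a free-form engineering request into analysis tasks.
    # (Keyword matching stands in for an LLM call in the real system.)
    if "trend" in state.request.lower():
        state.tasks.append("plot_trend")
    if "outlier" in state.request.lower():
        state.tasks.append("detect_outliers")
    return state


def schedule_tasks(state: AgentState) -> AgentState:
    # Stage 2: order tasks for the backend toolchain (here: a simple sort).
    state.tasks.sort()
    return state


def generate_code(state: AgentState) -> AgentState:
    # Stage 3: emit analysis code for the scheduled tasks.
    state.code = "\n".join(f"run('{t}')" for t in state.tasks)
    return state


def run_pipeline(request: str) -> AgentState:
    # The end-to-end closed loop: each stage transforms the shared state.
    state = AgentState(request=request)
    for stage in (parse_requirement, schedule_tasks, generate_code):
        state = stage(state)
    return state


result = run_pipeline("Show the trend and flag outliers in the sensor data")
print(result.tasks)
print(result.code)
```

In a LangGraph deployment, each stage would become a node in a `StateGraph`, with edges encoding the scheduling order, which is what makes the orchestration dynamic rather than a fixed static interface.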