🤖 AI Summary
Modern LLM applications, which integrate retrieval-augmented generation, tool calling, and multi-turn interaction, exhibit inherent non-determinism, dynamic behavior, and strong context dependence, posing fundamental challenges to quality assurance. Method: We propose a three-layer analytical framework to identify six root causes of testing challenges; design a full-stack QA approach covering system interfaces, prompt orchestration, and model inference; introduce AICL, a lightweight Agent Interaction Communication Language, to bridge the gaps between software engineering and AI safety in test units, metrics, and lifecycle management; and integrate semantic reinterpretation, runtime monitoring, and closed-loop trustworthy verification. Contribution/Results: Our approach delivers a practical, actionable testing methodology and enables standardized test integration across mainstream agent frameworks, advancing rigorous, scalable, and production-ready evaluation of LLM-based systems.
📝 Abstract
Applications of Large Language Models (LLMs) have evolved from simple text generators into complex software systems that integrate retrieval augmentation, tool invocation, and multi-turn interaction. Their inherent non-determinism, dynamism, and context dependence pose fundamental challenges for quality assurance. This paper decomposes LLM applications into a three-layer architecture: the ***System Shell Layer***, the ***Prompt Orchestration Layer***, and the ***LLM Inference Core***. We then assess the applicability of traditional software testing methods in each layer: directly applicable at the shell layer, requiring semantic reinterpretation at the orchestration layer, and necessitating paradigm shifts at the inference core. A comparative analysis of methods for testing AI from the software engineering community and safety analysis techniques from the AI community reveals structural disconnects in test unit abstraction, evaluation metrics, and lifecycle management. We identify four fundamental differences that underlie six core challenges. To address them, we propose four types of collaborative strategies (*Retain*, *Translate*, *Integrate*, and *Runtime*) and explore a closed-loop, trustworthy quality assurance framework that combines pre-deployment validation with runtime monitoring. Based on these strategies, we offer practical guidance and a protocol proposal to support the standardization and tooling of LLM application testing. In particular, we propose the ***Agent Interaction Communication Language*** (AICL), a protocol for communication between AI agents; AICL provides test-oriented features and integrates readily into current agent frameworks.
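The abstract describes AICL only at a high level; the concrete syntax is defined in the paper itself. As a loose illustration of what a test-oriented inter-agent message could look like, the sketch below builds and structurally validates a hypothetical message. All field names, the `oracle` attachment for closed-loop verification, and the validator are assumptions for illustration, not the actual AICL specification:

```python
import json

# Hypothetical message schema: these fields are assumptions, not AICL's.
REQUIRED_FIELDS = {"sender", "receiver", "intent", "payload"}

def make_test_message(sender, receiver, intent, payload, oracle=None):
    """Build a test-oriented inter-agent message.

    `oracle` optionally attaches an expected-behavior check so a test
    harness can verify the reply (closed-loop verification).
    """
    msg = {"sender": sender, "receiver": receiver,
           "intent": intent, "payload": payload}
    if oracle is not None:
        msg["oracle"] = oracle
    return msg

def is_well_formed(msg):
    """Minimal structural check: every required field is present."""
    return REQUIRED_FIELDS.issubset(msg)

message = make_test_message(
    sender="test-harness",
    receiver="retrieval-agent",
    intent="QUERY",
    payload={"question": "What does the retrieved document cover?"},
    oracle={"reply_must_cite_source": True},
)
print(is_well_formed(message))        # True
print(json.dumps(message, indent=2))  # serialize for transport
```

A plain JSON-serializable structure like this is one plausible way such a protocol could be embedded in existing agent frameworks, since most already exchange JSON messages between agents.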