🤖 AI Summary
This work addresses the challenge of localizing novel defects in large language model (LLM)-integrated software, which arise from mismatches among heterogeneous components such as prompts, APIs, configurations, and model outputs. To tackle this, the authors propose a code knowledge graph enhanced with LLM-aware annotations, coupled with a multi-agent collaborative reasoning framework. By integrating three types of error evidence and incorporating counterfactual context validation, the approach enables cross-layer semantic reasoning and precise distinction between symptoms and root causes. This is the first method to combine knowledge graphs with multi-agent mechanisms for defect localization in LLM-integrated systems. Evaluated on 146 real-world defects, it achieves a Top-3 accuracy of 0.64 and a mean average precision (MAP) of 0.48, a 64.1% improvement over the best baseline, while reducing localization costs by 92.5%.
📝 Abstract
LLM-integrated software, which embeds or interacts with large language models (LLMs) as functional components, exhibits probabilistic and context-dependent behaviors that fundamentally differ from those of traditional software. This shift introduces a new category of integration defects that arise not only from code errors but also from misaligned interactions among LLM-specific artifacts, including prompts, API calls, configurations, and model outputs. However, existing defect localization techniques are ineffective at identifying these LLM-specific integration defects because they fail to capture cross-layer dependencies across heterogeneous artifacts, cannot exploit incomplete or misleading error traces, and lack semantic reasoning capabilities for identifying root causes. To address these challenges, we propose LIDL, a multi-agent framework for defect localization in LLM-integrated software. LIDL (1) constructs a code knowledge graph enriched with LLM-aware annotations that represent interaction boundaries across source code, prompts, and configuration files, (2) fuses three complementary sources of error evidence inferred by LLMs to surface candidate defect locations, and (3) applies context-aware validation that uses counterfactual reasoning to distinguish true root causes from propagated symptoms. We evaluate LIDL on 146 real-world defect instances collected from 105 GitHub repositories and 16 agent-based systems. The results show that LIDL significantly outperforms five state-of-the-art baselines across all metrics, achieving a Top-3 accuracy of 0.64 and a MAP of 0.48, which represents a 64.1% improvement over the best-performing baseline. Notably, LIDL achieves these gains while reducing cost by 92.5%, demonstrating both high accuracy and cost efficiency.
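To make the three-stage pipeline concrete, the sketch below shows, in miniature, how a cross-layer knowledge graph, fused evidence scores, and a counterfactual check could interact to separate a root cause from a propagated symptom. This is an illustrative toy only, not the authors' implementation: the node schema, the three evidence sources, the averaging-based fusion rule, and the validation callback are all hypothetical stand-ins for the components the abstract describes at a high level.

```python
# Illustrative sketch of a LIDL-style pipeline. All names, scores, and the
# fusion rule are hypothetical; the paper describes these components only
# at a high level.
from dataclasses import dataclass, field

@dataclass
class Node:
    """One artifact in the code knowledge graph (code, prompt, config, ...)."""
    name: str
    kind: str                                            # e.g. "code", "prompt"
    llm_annotations: dict = field(default_factory=dict)  # LLM-aware metadata
    edges: list = field(default_factory=list)            # cross-layer deps

def fuse_evidence(node, evidence):
    """Fuse the available evidence sources (here: error trace, LLM inference,
    graph proximity) into one suspiciousness score; missing sources are skipped."""
    scores = [evidence[src][node.name]
              for src in ("trace", "llm", "graph")
              if node.name in evidence.get(src, {})]
    return sum(scores) / len(scores) if scores else 0.0

def localize(graph, evidence, passes_counterfactual):
    """Rank nodes by fused evidence, then keep only candidates that survive
    a counterfactual check ("would fixing this node remove the failure?")."""
    ranked = sorted(graph, key=lambda n: fuse_evidence(n, evidence), reverse=True)
    return [n.name for n in ranked if passes_counterfactual(n)]

# Toy scenario: a prompt node and the code node that parses its output.
prompt = Node("summarize_prompt", "prompt", {"expects": "json"})
parser = Node("parse_reply", "code", {"consumes": "summarize_prompt"})
prompt.edges.append(parser)

evidence = {
    "trace": {"parse_reply": 0.9},             # the exception surfaced here
    "llm":   {"summarize_prompt": 0.8},        # LLM reasoning blames the prompt
    "graph": {"summarize_prompt": 0.7, "parse_reply": 0.4},
}

# Counterfactual stand-in: only the prompt is a true root cause; the parser
# crash is a downstream symptom and is filtered out.
root_causes = {"summarize_prompt"}
result = localize([prompt, parser], evidence, lambda n: n.name in root_causes)
print(result)  # ['summarize_prompt']
```

The key point the toy captures is the symptom/root-cause split: the error trace alone ranks `parse_reply` highest, but fusing it with the other evidence and applying the counterfactual filter surfaces the prompt instead.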