🤖 AI Summary
Defect localization in large codebases is hindered by concern mixing (relevant logic buried within large functions) and concern scattering (related logic dispersed across files). To address this, the authors propose RepoLens, an approach that decomposes fine-grained functionalities and recomposes them into semantically coherent, high-level concerns, building a repository-wide conceptual knowledge base. RepoLens operates in two stages: an offline stage that uses static analysis, semantic clustering, and knowledge extraction to build the knowledge base, and an online stage that performs lightweight retrieval and knowledge-augmented prompting for large language models (LLMs), requiring no fine-tuning or additional training. On the SWE-Lancer-Loc benchmark, RepoLens improves three state-of-the-art localization tools by over 22% in Hit@k and 46% in Recall@k on average, and generalizes across models with gains of up to 504% in Hit@1 and 376% in Recall@10, demonstrating its effectiveness and robustness.
📝 Abstract
Issue localization, which identifies faulty code elements such as files or functions, is critical for effective bug fixing. While recent LLM-based and LLM-agent-based approaches improve accuracy, they struggle in large-scale repositories due to concern mixing, where relevant logic is buried in large functions, and concern scattering, where related logic is dispersed across files.
To address these challenges, we propose RepoLens, a novel approach that abstracts and leverages conceptual knowledge from code repositories. RepoLens decomposes fine-grained functionalities and recomposes them into high-level concerns, semantically coherent clusters of functionalities that guide LLMs. It operates in two stages: an offline stage that extracts and enriches conceptual knowledge into a repository-wide knowledge base, and an online stage that retrieves issue-specific terms, clusters and ranks concerns by relevance, and integrates them into localization workflows via minimally intrusive prompt enhancements. We evaluate RepoLens on SWE-Lancer-Loc, a benchmark of 216 tasks derived from SWE-Lancer. RepoLens consistently improves three state-of-the-art tools, namely AgentLess, OpenHands, and mini-SWE-agent, achieving average gains of over 22% in Hit@k and 46% in Recall@k for file- and function-level localization. It generalizes across models (GPT-4o, GPT-4o-mini, GPT-4.1) with Hit@1 and Recall@10 gains up to 504% and 376%, respectively. Ablation studies and manual evaluation confirm the effectiveness and reliability of the constructed concerns.
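The online stage described above (retrieve issue-specific terms, rank concerns by relevance, and splice the top concerns into the localization prompt) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the concern knowledge base, the concern names, and the bag-of-words cosine scoring stand in for RepoLens's actual offline extraction and semantic ranking.

```python
from collections import Counter
import math

def tokenize(text):
    """Crude term extraction: lowercase whitespace tokens (a stand-in
    for the paper's issue-specific term retrieval)."""
    return [t.lower() for t in text.replace("_", " ").split()]

def cosine(a, b):
    """Cosine similarity between two token lists via term counts."""
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical offline knowledge base: each high-level concern maps to
# short summaries of the fine-grained functionalities clustered under it.
CONCERNS = {
    "payment processing": ["charge card", "refund payment", "validate invoice"],
    "user session": ["login user", "refresh token", "logout user"],
}

def rank_concerns(issue_text, concerns, top_k=1):
    """Online step: rank concerns by best-matching functionality summary."""
    query = tokenize(issue_text)
    scored = sorted(
        ((max(cosine(query, tokenize(s)) for s in summaries), name)
         for name, summaries in concerns.items()),
        reverse=True,
    )
    return [name for _, name in scored[:top_k]]

def augment_prompt(issue_text, concerns):
    """Minimally intrusive prompt enhancement: append the top-ranked
    concerns and their functionalities as localization hints."""
    top = rank_concerns(issue_text, concerns)
    hints = "\n".join(f"- {c}: " + "; ".join(concerns[c]) for c in top)
    return f"Issue:\n{issue_text}\n\nRelevant repository concerns:\n{hints}"
```

For an issue like "refund payment fails", `rank_concerns` surfaces the "payment processing" concern, and `augment_prompt` attaches its clustered functionalities as context, so the downstream LLM localizer sees related logic gathered in one place even when it is scattered across files in the repository.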