BLAZE: Cross-Language and Cross-Project Bug Localization via Dynamic Chunking and Hard Example Learning

πŸ“… 2024-07-24
πŸ›οΈ IEEE Transactions on Software Engineering
πŸ“ˆ Citations: 2
✨ Influential: 1
πŸ“„ PDF
πŸ€– AI Summary
Existing bug localization tools generalize poorly across projects and offer limited multilingual support; large language models (LLMs) provide richer semantic representations, but their effectiveness is constrained by limited context windows and imprecise bug-to-code mapping. To address these challenges, BLAZE employs a dynamic chunking strategy that segments source code while minimizing continuity loss, and fine-tunes a GPT-based model on hard bug examples to strengthen cross-project and cross-language localization. The authors also introduce BEETLEBOX, a multilingual benchmark of 26,321 bugs drawn from 29 active open-source projects across five programming languages. Evaluated on BEETLEBOX, SWE-Bench, and the Ye et al. dataset, BLAZE achieves improvements of up to 120% in Top-1 accuracy, 144% in Mean Average Precision (MAP), and 100% in Mean Reciprocal Rank (MRR), consistently outperforming six state-of-the-art baselines.
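The dynamic chunking idea can be sketched as follows. The paper's exact algorithm is not reproduced here, so the blank-line boundary heuristic, the whitespace token budget, and the one-segment overlap are all illustrative assumptions:

```python
def dynamic_chunks(source: str, max_tokens: int = 512, overlap: int = 1):
    """Split source code into chunks that end at blank-line boundaries,
    carrying `overlap` trailing segments into the next chunk so that
    context is not lost at the cut point."""
    # Treat blank-line-separated blocks as indivisible semantic units.
    segments = [s for s in source.split("\n\n") if s.strip()]
    chunks, current, current_len = [], [], 0
    for seg in segments:
        seg_len = len(seg.split())  # crude whitespace token count
        if current and current_len + seg_len > max_tokens:
            chunks.append("\n\n".join(current))
            current = current[-overlap:]  # keep overlap for continuity
            current_len = sum(len(s.split()) for s in current)
        current.append(seg)
        current_len += seg_len
    if current:
        chunks.append("\n\n".join(current))
    return chunks
```

A production version would use the model's own tokenizer for the budget and syntax-aware boundaries (e.g., function definitions) rather than blank lines, but the overlap-at-the-cut idea is the same.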

πŸ“ Abstract
Software bugs require developers to exert significant effort to identify and resolve them, often consuming about one-third of their time. Bug localization, the process of pinpointing the exact source code files that need modification, is crucial in reducing this effort. Existing bug localization tools, typically reliant on deep learning techniques, face limitations in cross-project applicability and effectiveness in multi-language environments. Recent advancements with Large Language Models (LLMs) offer detailed representations for bug localization. However, they encounter challenges with limited context windows and mapping accuracy. To address these issues, we propose BLAZE, an approach that employs dynamic chunking and hard example learning. First, BLAZE dynamically segments source code to minimize continuity loss. Then, BLAZE fine-tunes a GPT-based model on challenging bug cases in order to enhance cross-project and cross-language bug localization. To support the capability of BLAZE, we create the BEETLEBOX dataset, which comprises 26,321 bugs from 29 large and thriving open-source projects across five different programming languages (Java, C++, Python, Go, and JavaScript). Our evaluations of BLAZE on three benchmark datasets (BEETLEBOX, SWE-Bench, and Ye et al.) demonstrate substantial improvements compared to six state-of-the-art baselines. Specifically, BLAZE achieves increases of up to 120% in Top-1 accuracy, 144% in Mean Average Precision (MAP), and 100% in Mean Reciprocal Rank (MRR). An extensive ablation study confirms the contributions of our pipeline components to the overall performance enhancement.
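The three reported metrics are standard ranking measures over each bug's ranked list of candidate files. A minimal sketch (function names and toy data are illustrative, not from the paper):

```python
def top_k_accuracy(ranked_lists, relevant_sets, k=1):
    """Fraction of bugs with at least one buggy file in the top k."""
    hits = sum(any(f in rel for f in ranked[:k])
               for ranked, rel in zip(ranked_lists, relevant_sets))
    return hits / len(ranked_lists)

def mean_reciprocal_rank(ranked_lists, relevant_sets):
    """Average of 1/rank of the first buggy file per bug."""
    total = 0.0
    for ranked, rel in zip(ranked_lists, relevant_sets):
        for i, f in enumerate(ranked, start=1):
            if f in rel:
                total += 1.0 / i
                break
    return total / len(ranked_lists)

def mean_average_precision(ranked_lists, relevant_sets):
    """Average over bugs of precision averaged at each buggy-file hit."""
    total = 0.0
    for ranked, rel in zip(ranked_lists, relevant_sets):
        hits, precisions = 0, []
        for i, f in enumerate(ranked, start=1):
            if f in rel:
                hits += 1
                precisions.append(hits / i)
        total += sum(precisions) / max(len(rel), 1)
    return total / len(ranked_lists)
```

For example, with two bugs whose buggy files appear at ranks 2 and 1 respectively, Top-1 accuracy is 0.5 while MRR and MAP are both 0.75.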
Problem

Research questions and friction points this paper is trying to address.

Existing bug localization tools generalize poorly across projects
Multi-language bug detection remains weakly supported
LLMs' limited context windows hinder precise bug-to-code mapping
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic chunking segments source code while minimizing continuity loss
Hard example learning fine-tunes a GPT-based model on challenging bug cases
BEETLEBOX dataset (26,321 bugs, five languages) supports cross-language bug localization
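The hard example learning idea, selecting the candidate files a model most easily confuses with the true buggy file, can be sketched as below. The paper's training details are not reproduced here; cosine similarity over precomputed embeddings and the cutoff `n_hard` are illustrative assumptions:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / (norm + 1e-9)

def mine_hard_negatives(query_vec, cand_vecs, positive_idx, n_hard=4):
    """Return indices of the non-buggy candidates most similar to the
    bug report embedding, i.e., the hardest negatives for training."""
    sims = [(cosine(query_vec, c), i) for i, c in enumerate(cand_vecs)]
    sims.sort(key=lambda t: -t[0])  # most similar first
    return [i for _, i in sims if i != positive_idx][:n_hard]
```

During fine-tuning, each mined hard negative would be paired with the bug report and its true buggy file in a contrastive or ranking loss, pushing the model to separate near-miss files from the actual fix location.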
πŸ”Ž Similar Papers
No similar papers found.