Leveraging LLMs for Semantic Conflict Detection via Unit Test Generation

📅 2025-07-09
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Traditional unit test generation tools (e.g., Randoop, EvoSuite) leave SMAT with a high false-negative rate when detecting semantic conflicts introduced by code merges. To address this, the paper integrates Code Llama 70B, a large language model (LLM), into SMAT for the first time, proposing an LLM-based test generation tool for semantic conflict detection. The authors explore multiple interaction strategies, prompt contents, and inference-parameter configurations to improve the semantic coverage of generated test cases; these tests are then fed into SMAT's differential-testing pipeline to surface subtle behavioral inconsistencies introduced by merges. Evaluation on a benchmark of simpler systems from related work and on a sample of complex, real-world systems indicates that, although LLM-based test generation remains challenging and computationally expensive in complex scenarios, it shows promising potential to reduce SMAT's false negatives. The core contribution is the synergy between LLM-driven test generation and semantic differential analysis, pointing toward LLM-augmented quality assurance in software evolution.

📝 Abstract
Semantic conflicts arise when a developer introduces changes to a codebase that unintentionally affect the behavior of changes integrated in parallel by other developers. Traditional merge tools are unable to detect such conflicts, so complementary tools like SMAT have been proposed. SMAT relies on generating and executing unit tests: if a test fails on the base version, passes on a developer's modified version, but fails again after merging with another developer's changes, a semantic conflict is indicated. While SMAT is effective at detecting conflicts, it suffers from a high rate of false negatives, partly due to the limitations of unit test generation tools such as Randoop and EvoSuite. To investigate whether large language models (LLMs) can overcome these limitations, we propose and integrate a new test generation tool based on Code Llama 70B into SMAT. We explore the model's ability to generate tests using different interaction strategies, prompt contents, and parameter configurations. Our evaluation uses two samples: a benchmark with simpler systems from related work, and a more significant sample based on complex, real-world systems. We assess the effectiveness of the new SMAT extension in detecting conflicts. Results indicate that, although LLM-based test generation remains challenging and computationally expensive in complex scenarios, there is promising potential for improving semantic conflict detection.
Problem

Research questions and friction points this paper is trying to address.

Detect semantic conflicts in parallel code changes via unit tests
Overcome high false negatives in SMAT with LLM-based test generation
Evaluate LLM effectiveness in complex real-world systems for conflict detection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Code Llama 70B for test generation
Integrates LLM-based tool into SMAT
Explores prompt strategies for conflict detection
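The prompt-strategy exploration above can be sketched as a small prompt builder. The template text, strategy names, and field layout below are illustrative assumptions, not the paper's actual prompts:

```python
# Hypothetical sketch of a prompt-building step for LLM-based test
# generation. Strategy names and instruction wording are assumptions
# for illustration, not the prompts used in the paper.

def build_test_prompt(class_name: str, method_source: str,
                      strategy: str = "zero-shot") -> str:
    instructions = {
        "zero-shot": "Write JUnit tests that exercise the method below.",
        "guided": ("Write JUnit tests targeting the changed behavior of "
                   "the method below, including boundary inputs."),
    }
    # Embed the method under test as a fenced Java snippet in the prompt.
    return (f"{instructions[strategy]}\n\n"
            f"Class: {class_name}\n"
            f"```java\n{method_source}\n```")

prompt = build_test_prompt("Calculator",
                           "int add(int a, int b) { return a + b; }",
                           strategy="guided")
print(prompt.splitlines()[0])
```

Varying the instruction wording, the amount of source context included, and decoding parameters (e.g., temperature) is the kind of configuration space the paper reports exploring with Code Llama 70B.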
Nathalia Barbosa
Centro de Informática, Universidade Federal de Pernambuco, Brazil
Paulo Borba
Federal University of Pernambuco
Software Engineering · Programming Languages
Léuson Da Silva
Polytechnique Montreal, Canada