CETBench: A Novel Dataset constructed via Transformations over Programs for Benchmarking LLMs for Code-Equivalence Checking

📅 2025-06-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work evaluates the capability of large language models (LLMs) to detect functional equivalence of code, a critical task for assessing their performance on semantics-preserving code transformations such as rewriting and translation. To this end, the authors introduce CETBench, the first controllable and scalable benchmark for this task. CETBench follows a program-transformation paradigm: using static analysis together with predefined semantics-preserving transformations (e.g., variable renaming, control-flow restructuring) and semantics-breaking transformations, it systematically generates high-quality equivalent and non-equivalent code pairs. Experiments reveal that state-of-the-art LLMs are highly sensitive to even subtle semantics-preserving modifications: their accuracy drops significantly on transformed samples, exposing fundamental limitations in deep semantic understanding of code. The authors further propose a lightweight supervised fine-tuning approach that substantially improves discrimination accuracy across diverse models, demonstrating both generalizability and practical utility.
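The paper does not reproduce its transformation code here. As an illustration only, the following is a minimal Python sketch of one semantics-preserving transformation of the kind the summary mentions (variable renaming); the function name `rename_locals` and the renaming scheme are assumptions, not the paper's implementation, and the sketch assumes Python 3.9+ for `ast.unparse`.

```python
import ast

def rename_locals(source: str) -> str:
    """Rename every assigned variable to a fresh name (v0, v1, ...).

    Illustrative semantics-preserving transformation: the output
    program computes the same values as the input, so the pair
    (source, transformed) is labeled 'equivalent'.
    """
    tree = ast.parse(source)
    # Only rename names that appear in a Store context (i.e., are
    # assigned in this program), so builtins like `print` are untouched.
    assigned = {n.id for n in ast.walk(tree)
                if isinstance(n, ast.Name) and isinstance(n.ctx, ast.Store)}
    mapping = {name: f"v{i}" for i, name in enumerate(sorted(assigned))}

    class Renamer(ast.NodeTransformer):
        def visit_Name(self, node):
            node.id = mapping.get(node.id, node.id)
            return node

    return ast.unparse(Renamer().visit(tree))
```

Running both programs and comparing their resulting variable values confirms the transformation preserves behavior while changing surface syntax.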

📝 Abstract
LLMs have been extensively used for the task of automated code generation. In this work, we examine the applicability of LLMs for the related but relatively unexplored task of code-equivalence checking, i.e., deciding, given two programs, whether they are functionally equivalent or not. This is an important problem since benchmarking code equivalence can play a critical role in evaluating LLM capabilities for tasks such as code re-writing and code translation. Towards this end, we present CETBench - Code Equivalence with Transformations Benchmark, constructed via a repository of programs, where two programs in the repository may be solving the same or different tasks. Each instance in our dataset is obtained by taking a pair of programs in the repository and applying a random series of pre-defined code transformations, resulting in (non-)equivalent pairs. Our analysis on this dataset reveals a surprising finding: very simple code transformations in the underlying pair of programs can result in a significant drop in performance of SOTA LLMs on the task of code-equivalence checking. To remedy this, we present a simple fine-tuning-based approach to boost LLM performance on the transformed pairs of programs. Our approach to dataset generation is generic: it can be used with repositories of varying program difficulty levels and allows for applying varying numbers and kinds of transformations. In our experiments, we perform ablations over the difficulty level of the original programs, as well as the kinds of transformations used in generating pairs for equivalence checking. Our analysis presents deep insights into the working of LLMs on the task of code equivalence, and points to the fact that they may still be far from what could be termed a semantic understanding of the underlying code.
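The abstract's pipeline (take a program, apply a random series of predefined transformations, label the pair by whether any transformation breaks semantics) can be sketched as below. This is a hypothetical illustration, not CETBench's code: the two example transformations (`insert_dead_code`, `flip_comparison`) and the function `make_instance` are invented for this sketch.

```python
import ast
import random

def insert_dead_code(source: str) -> str:
    # Semantics-preserving (illustrative): prepend an assignment to a
    # fresh, unused variable; observable behavior is unchanged.
    return "_unused_tmp = 0\n" + source

def flip_comparison(source: str) -> str:
    # Semantics-breaking (illustrative): change the first '<' to '<=',
    # which in general changes the program's behavior.
    tree = ast.parse(source)

    class Flipper(ast.NodeTransformer):
        def __init__(self):
            self.done = False
        def visit_Compare(self, node):
            if not self.done and isinstance(node.ops[0], ast.Lt):
                node.ops[0] = ast.LtE()
                self.done = True
            return node

    return ast.unparse(Flipper().visit(tree))

def make_instance(program: str, equivalent: bool, rng=random):
    """Build one labeled benchmark instance: apply a random series of
    semantics-preserving transforms, plus one semantics-breaking
    transform when a non-equivalent pair is requested."""
    preserving = [insert_dead_code]
    out = program
    for _ in range(rng.randint(1, 2)):
        out = rng.choice(preserving)(out)
    if not equivalent:
        out = flip_comparison(out)
    return {"left": program, "right": out, "label": equivalent}
```

Labels come for free from the construction: a pair is non-equivalent exactly when a breaking transform was applied, which is what makes the generation controllable and scalable.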
Problem

Research questions and friction points this paper is trying to address.

Benchmarking LLMs for code-equivalence checking using transformed programs
Investigating LLM performance drop due to simple code transformations
Proposing fine-tuning to improve LLM accuracy on equivalence tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Constructs dataset via random code transformations
Uses fine-tuning to improve LLM performance
Generic approach for varying program difficulties