SpecEval: Evaluating Code Comprehension in Large Language Models via Program Specifications

📅 2024-09-19
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
Existing code understanding evaluation frameworks are limited to single-input reasoning tasks and narrow execution trace coverage, failing to assess large language models’ (LLMs) deep semantic comprehension of programs. Method: We propose the first black-box evaluation framework grounded in formal program specifications, centered on specifications that provably cover all execution traces. It comprises four progressively sophisticated specification-understanding tasks—from basic to advanced—and introduces counterfactual perturbation generation alongside contrastive evaluation to rigorously test model robustness to semantics-preserving transformations. Contribution/Results: Extensive evaluation across six state-of-the-art code LLMs reveals pervasive deficiencies in specification understanding and heightened sensitivity to semantically invariant perturbations—indicating fundamental flaws in their underlying semantic representations.
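To make the notion of a formal program specification concrete: such a specification states pre- and postconditions that must hold on every execution of a piece of code, not just on one sampled input. The JML-style Java sketch below is a hypothetical illustration; the method, its contract, and the use of JML as the specification language are assumptions for exposition, not taken from the paper's benchmark.

public class SpecExample {

    /*@ requires a != null;
      @ ensures \result >= 0;
      @ ensures (\forall int i; 0 <= i && i < a.length; \result >= a[i]);
      @*/
    // The JML contract above constrains every execution trace of the method:
    // any input satisfying the precondition must yield a result satisfying
    // both postconditions. This is what lets specification-based evaluation
    // probe program semantics beyond a single input case.
    public static int maxOrZero(int[] a) {
        int max = 0;                 // result is at least 0 by construction
        for (int x : a) {
            if (x > max) {
                max = x;             // keep the running maximum
            }
        }
        return max;
    }
}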

📝 Abstract
Large language models (LLMs) have achieved impressive performance in automated software engineering. Extensive efforts have been made to evaluate the abilities of code LLMs in various aspects, with an increasing number of benchmarks and evaluation frameworks proposed. Beyond the most sought-after capability of code generation, code comprehension is receiving growing attention. Nevertheless, existing works that assess the code comprehension capability of LLMs exhibit various limitations. Evaluation frameworks such as CRUXEval and REval typically focus on code reasoning tasks over a particular input case, so they cover only a limited range of execution traces, examine only part of the program's semantics, and cannot assess a model's comprehensive understanding of the target program. To tackle these challenges, we propose SpecEval, a novel black-box evaluation framework that evaluates code comprehension in LLMs via program specifications. Inspired by the idea that specifications can serve as a comprehensive articulation of program behavior over all possible execution traces, we employ formalized program specifications to represent program semantics and perform comprehensive evaluations. In particular, four specification-related tasks are carefully designed to assess the capability of LLMs from basic to advanced levels. Counterfactual analysis is further conducted to study the performance variance of LLMs under semantics-preserving perturbations. Systematic experiments are conducted on six state-of-the-art LLMs. The results show unsatisfactory performance on specification-related tasks, revealing the limitations of existing LLMs in articulating program semantics with formal specifications. Counterfactual analysis also reveals their sensitivity to semantics-preserving perturbations.
Problem

Research questions and friction points this paper is trying to address.

Evaluating code comprehension in LLMs via program specifications
Assessing LLMs' ability to understand program semantics comprehensively
Analyzing LLMs' sensitivity to semantics-preserving code perturbations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses formalized program specifications for evaluation
Designs four specification-related tasks for LLMs
Conducts counterfactual analysis under semantics-preserving perturbations (see the sketch after this list)
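To illustrate the counterfactual analysis mentioned above: a semantics-preserving perturbation rewrites a program without changing its input-output behavior, and a model with a sound semantic representation should judge the specification identically for both versions. The pair below is a hypothetical sketch; the identifiers and the specific perturbation (identifier renaming plus loop restructuring) are assumptions, not the paper's actual perturbation operators.

public class PerturbationExample {

    /*@ requires a != null;
      @ ensures \result >= 0;
      @ ensures (\forall int i; 0 <= i && i < a.length; \result >= a[i]);
      @*/
    // Original form: for-each loop with descriptive names.
    public static int maxOrZero(int[] a) {
        int max = 0;
        for (int x : a) {
            if (x > max) {
                max = x;
            }
        }
        return max;
    }

    /*@ requires arr != null;
      @ ensures \result >= 0;
      @ ensures (\forall int i; 0 <= i && i < arr.length; \result >= arr[i]);
      @*/
    // Semantics-preserving perturbation: renamed identifiers and an
    // index-based loop; observable behavior and the contract are unchanged.
    public static int maxOrZeroVariant(int[] arr) {
        int m = 0;
        for (int k = 0; k < arr.length; k++) {
            if (arr[k] > m) {
                m = arr[k];
            }
        }
        return m;
    }
}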
👥 Authors
Lezhi Ma (Nanjing University, China)
Shangqing Liu (Nanjing University, China) · Software Engineering, Deep Learning
Lei Bu (Nanjing University, China) · Model Checking, Hybrid Systems, Cyber-Physical Systems, Formal Verification
Shangru Li (Nanjing University, China)
Yida Wang (Nanjing University, China)
Yang Liu (Nanyang Technological University, Singapore)