🤖 AI Summary
This paper addresses the lack of systematic, quantitative evaluation of large language models’ (LLMs) code understanding capabilities by proposing the first deep-understanding evaluation framework grounded in fault localization performance. Methodologically, it innovatively integrates mutation testing principles, constructing 575,000 debugging tasks via real-fault injection and semantics-preserving mutation (SPM) across Java and Python. Contributions include: (1) establishing a novel, robustness-oriented quantification paradigm that circumvents limitations of generative benchmarks; (2) revealing fundamental deficiencies in LLMs’ code understanding—namely, shallowness, positional bias (stronger comprehension of leading code segments), and semantic insensitivity; and (3) empirically demonstrating that LLMs fail to reproduce original fault localization in 81% of SPM cases, confirming their heavy reliance on lexical/syntactic rather than semantic features.
📝 Abstract
Large Language Models (LLMs) are increasingly used in post-development tasks such as code repair and testing. A key factor in these tasks' success is the model's deep understanding of code. However, the extent to which LLMs truly understand code remains largely unevaluated. Quantifying code comprehension is challenging due to its abstract nature and the lack of a standardized metric. Previously, it was assessed through developer surveys, which are not feasible for evaluating LLMs. Existing LLM benchmarks focus primarily on code generation, which is fundamentally different from code comprehension. Additionally, fixed benchmarks quickly become obsolete once they become part of the training data. This paper presents the first large-scale empirical investigation into LLMs' ability to understand code. Inspired by mutation testing, we use an LLM's fault-finding ability as a proxy for its deep code understanding. This approach is based on the insight that a model capable of identifying subtle functional discrepancies must understand the code well. We inject faults into real-world programs and ask the LLM to localize them, ensuring the specifications suffice for fault localization. Next, we apply semantics-preserving mutations (SPMs) to the faulty programs and test whether the LLMs can still locate the faults, verifying their confidence in code understanding. We evaluate nine popular LLMs on 575,000 debugging tasks from 670 Java and 637 Python programs. We find that LLMs lose the ability to debug the same bug in 81% of faulty programs when SPMs are applied, indicating a shallow understanding of code and a reliance on features irrelevant to semantics. We also find that LLMs understand code earlier in a program better than code later in it. This suggests that LLMs' code comprehension remains tied to lexical and syntactic features due to tokenization designed for natural languages, which overlooks code semantics.
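To make the SPM idea concrete, here is a minimal sketch in Python. The specific mutation operator (identifier renaming) and the toy program are illustrative assumptions, not taken from the paper; the point is that the transformation preserves runtime behavior, including the injected fault, so a model with genuine semantic understanding should localize the bug identically in both versions:

```python
# Toy faulty program: intended to sum the integers 1..n,
# but the off-by-one in range() omits n (the injected fault).
def sum_to_n(n):
    total = 0
    for i in range(n):  # bug: should be range(n + 1)
        total += i
    return total

# Semantics-preserving mutation (SPM): rename identifiers only.
# Behavior -- including the fault -- is unchanged, so an LLM that
# relies on semantics rather than surface lexical features should
# still point at the same buggy line.
def f(a):
    x = 0
    for y in range(a):  # bug: should be range(a + 1)
        x += y
    return x

# Both versions exhibit identical faulty behavior.
print(sum_to_n(5), f(5))  # both return 10; the correct sum 1..5 is 15
```

A model that localizes the bug in `sum_to_n` but not in `f` is, by this framing, keying on identifier names rather than program semantics.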