🤖 AI Summary
Existing benchmarks predominantly evaluate end-to-end tasks such as code generation, and lack fine-grained assessment of the semantic reasoning capabilities of large language models (LLMs) in program analysis.
Method: We introduce CoRe, the first benchmark targeting foundational static analysis competencies—covering data dependencies, control dependencies, and information flow in C/C++, Java, and Python. It comprises 12,553 human-verified task instances. A novel semantics-aware diverse sampling strategy selects targets based on structural coverage and dependency depth, enhancing both semantic diversity and reasoning complexity.
Contribution/Results: Evaluating 10 mainstream LLMs reveals competent performance on basic dependency identification but sharp degradation on multi-step reasoning, reverse dependencies, and complex control structures—exposing critical bottlenecks in current LLMs’ code understanding. CoRe thus provides a rigorous, semantics-oriented evaluation framework to advance program reasoning research.
📝 Abstract
Large language models (LLMs) have been widely adopted across diverse software engineering domains, such as code generation, program repair, and vulnerability detection. These applications require understanding beyond surface-level code patterns: value propagation, control flow, and interdependence between program elements. However, existing benchmarks primarily evaluate end-to-end outcomes, such as whether code is correctly repaired or generated, leaving the models' ability for program semantic reasoning underexplored. This work presents CoRe, a high-quality, human-verified benchmark designed to evaluate LLMs on fundamental static analysis tasks. CoRe includes 12,553 task instances spanning data dependency, control dependency, and information flow across programs written in C/C++, Java, and Python. To ensure semantic diversity and reasoning complexity, we propose a semantics-aware diverse sampling strategy that selects targets and task instances based on structural coverage and dependency depth. We evaluate 10 mainstream LLMs and show that, while they perform well at identifying dependencies, models still struggle with tasks that require deeper semantic understanding and multi-step reasoning. We further conduct qualitative analyses to uncover key challenges, such as complex control structures and backward dependency patterns, offering insights into improving LLMs' code reasoning capabilities.
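To make the task categories concrete, the sketch below illustrates what a one-hop data-dependency query over a small Python program might look like. This is our own toy illustration using Python's `ast` module, not CoRe's actual task schema or extraction pipeline; the program `SRC` and the helper `direct_data_deps` are hypothetical.

```python
import ast

# A toy program with simple data and control dependencies:
# y depends on x; z depends on y and x (and, via the branch, on y's value).
SRC = """
x = 5
y = x + 2
if y > 3:
    z = y * x
else:
    z = 0
print(z)
"""

def direct_data_deps(source: str) -> dict[str, set[str]]:
    """Map each assigned variable to the variables its right-hand side
    reads -- a rough stand-in for one hop of a data-dependency chain."""
    deps: dict[str, set[str]] = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Assign) and isinstance(node.targets[0], ast.Name):
            reads = {n.id for n in ast.walk(node.value) if isinstance(n, ast.Name)}
            deps.setdefault(node.targets[0].id, set()).update(reads)
    return deps

deps = direct_data_deps(SRC)
print(deps["z"])  # z's direct data dependencies: y and x
```

A benchmark task in this spirit would ask a model, given only the source text, to recover such dependency sets; multi-step variants (e.g. the transitive chain from `z` back to the constant `5`, or the control dependency of `z` on the `if y > 3` branch) are where the evaluated models degrade.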