Do Code LLMs Do Static Analysis?

📅 2025-05-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates whether large language models (LLMs) possess human-like static analysis capabilities—such as call graph construction, abstract syntax tree (AST) parsing, and data-flow analysis—and how such capabilities support downstream code intelligence tasks (e.g., code generation, summarization, translation). Method: We introduce the first systematic, cross-model (Gemini, GPT-4o, CodeLlama, Jam) and cross-task (three static analysis vs. three code intelligence tasks) evaluation framework, employing standardized syntactic parsing and structured output assessment. Contribution/Results: All evaluated LLMs significantly underperform dedicated static analyzers; pretraining on static analysis tasks does not transfer to improved code intelligence performance. This study provides the first empirical evidence that LLMs’ internal program understanding mechanisms fundamentally differ from classical static analysis, challenging the implicit assumption that LLMs inherently emulate human-like program reasoning. Our findings critically inform the delineation of LLM capability boundaries and guide future modeling for code-aware foundation models.

📝 Abstract
This paper investigates code LLMs' capability for static analysis during code intelligence tasks such as code summarization and generation. Code LLMs are now household names for their ability to do programming tasks that have heretofore required people. The process that people follow to do programming tasks has long been understood to require static analysis. For example, human programmers navigate the call graph of large programs to comprehend the different parts of those programs. Education in programming includes static analysis under the assumption that better static analysis skills beget better programming. Yet while popular culture is replete with anthropomorphic references such as LLM "reasoning", code LLMs could in fact exhibit a thought process wholly alien to humans. This paper studies the specific question of static analysis by code LLMs. We use three static analysis tasks (call graph generation, AST generation, and dataflow generation) and three code intelligence tasks (code generation, summarization, and translation) with two closed-source models (Gemini and GPT-4o) and two open-source models (CodeLlama and Jam) in our experiments. We found that LLMs show poor performance on static analysis tasks and that pretraining on the static analysis tasks does not generalize to better performance on the code intelligence tasks.
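The abstract contrasts LLM output with what dedicated static analyzers produce deterministically. As a reference point for the AST-generation task, here is a minimal sketch using Python's standard `ast` module; the paper's actual prompt and output format are not specified here, so this only illustrates the kind of ground truth a model would be asked to reproduce.

```python
import ast

source = """
def greet(name):
    return "Hello, " + name
"""

# A dedicated static analyzer derives the AST deterministically from the
# grammar; in the AST-generation task, an LLM is instead prompted to
# reproduce this structure from the raw source text.
tree = ast.parse(source)

# ast.dump yields a canonical textual form of the tree, suitable for
# exact comparison against a model's output.
print(ast.dump(tree, indent=2))
```

Comparing a canonical dump like this against model output makes the evaluation an exact structural match rather than a fuzzy text similarity.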
Problem

Research questions and friction points this paper is trying to address.

Investigates code LLMs' static analysis capability in code tasks
Evaluates LLMs' performance on static analysis versus code intelligence
Tests whether static analysis pretraining improves code intelligence performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluates LLMs on static analysis tasks
Tests call graph, AST, and dataflow generation
Compares open and closed-source LLMs
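Each of the three static analysis tasks has a deterministic ground truth against which LLM output can be scored. As an illustrative sketch (not the paper's tooling), a minimal intra-module call graph extractor built on Python's `ast` module looks like this:

```python
import ast
from collections import defaultdict


def build_call_graph(source: str) -> dict:
    """Map each top-level function name to the set of names it calls directly."""
    tree = ast.parse(source)
    graph = defaultdict(set)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            # Collect direct calls by simple name inside this function body.
            for inner in ast.walk(node):
                if isinstance(inner, ast.Call) and isinstance(inner.func, ast.Name):
                    graph[node.name].add(inner.func.id)
    return dict(graph)


code = """
def helper(x):
    return x * 2

def main():
    return helper(3)
"""

print(build_call_graph(code))  # → {'main': {'helper'}}
```

A sketch like this handles only direct, intra-module calls by simple name; production analyzers also resolve methods, imports, and dynamic dispatch, which is part of why the paper can hold LLM output to a precise reference.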