Beyond Memorization: Evaluating the True Type Inference Capabilities of LLMs for Java Code Snippets

📅 2025-03-06
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work investigates whether large language models (LLMs) possess genuine semantic understanding in Java type inference—or instead rely on data leakage from the widely used StatType-SO benchmark present in their training corpora. Method: The authors introduce ThaliaType, a novel, training-free benchmark, and propose semantic-preserving code transformations alongside delta-debugging analysis to separate memorization from reasoning. Contribution/Results: Empirical evaluation reveals that state-of-the-art LLMs suffer drops of up to 59% in precision and 72% in recall on ThaliaType compared to StatType-SO. Furthermore, their ability to infer fully qualified names (FQNs) correlates strongly with training-set exposure rather than static analysis capability, indicating reliance on memorized patterns. This work exposes critical biases in current evaluation practices and establishes ThaliaType—alongside its associated methodology—as a rigorous, semantics-aware benchmark for assessing true code understanding in LLMs.

📝 Abstract
Type inference is a crucial task for reusing online code snippets, often found on platforms like StackOverflow, which frequently lack essential type information such as fully qualified names (FQNs) and required libraries. Recent studies have leveraged Large Language Models (LLMs) for type inference on code snippets, showing promising results. However, these results are potentially affected by data leakage, as the benchmark suite (StatType-SO) has been public on GitHub since 2017 (full suite in 2023). Thus, it is uncertain whether LLMs' strong performance reflects genuine understanding of code semantics or a mere retrieval of ground truth from training data. To comprehensively assess LLMs' type inference capabilities on Java code snippets, we conducted a three-pronged evaluation. First, utilizing Thalia, a program synthesis technique, we created ThaliaType, a new, unseen dataset for type inference evaluation. On unseen snippets, LLM performance dropped significantly, with up to a 59% decrease in precision and 72% in recall. Second, we developed semantic-preserving transformations that significantly degraded LLMs' type inference performance, revealing weaknesses in understanding code semantics. Third, we used delta debugging to identify the minimal syntax elements sufficient for LLM inference. While type inference primarily involves inferring FQNs for types in the code snippet, LLMs correctly inferred FQNs even when the types were absent from the snippets, suggesting a reliance on knowledge from training instead of thorough analysis of the snippets. Our findings indicate that LLMs' strong past performance likely stemmed from data leakage, rather than a genuine understanding of the semantics of code snippets. They highlight the crucial need for carefully designed benchmarks using unseen code snippets to assess the true capabilities of LLMs for type inference tasks.
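To make the task concrete, here is a small, hypothetical illustration (the snippet, the rename mapping, and the FQN table are invented for this sketch, not taken from the paper): a StackOverflow-style snippet uses simple type names, the inference task is to recover each type's fully qualified name, and a behavior-preserving identifier rename leaves that ground truth unchanged — so a model that truly analyzes the code should predict the same FQNs for both variants.

```python
import re

# A StackOverflow-style Java snippet: simple type names, no imports.
SNIPPET = """
List<String> names = new ArrayList<>();
names.add("alice");
Collections.sort(names);
"""

def rename_identifiers(snippet: str, mapping: dict) -> str:
    """Alpha-rename variables; type usage and runtime behavior are unchanged,
    so this is one simple semantic-preserving transformation."""
    for old, new in mapping.items():
        snippet = re.sub(rf"\b{re.escape(old)}\b", new, snippet)
    return snippet

# The transformed snippet uses `v0` instead of `names` everywhere.
transformed = rename_identifiers(SNIPPET, {"names": "v0"})

# Ground truth for the type-inference task -- identical for both variants
# (standard-library FQNs for the simple names appearing in the snippet):
expected_fqns = {
    "List": "java.util.List",
    "ArrayList": "java.util.ArrayList",
    "Collections": "java.util.Collections",
}
```

A benchmark built this way can ask whether a model's FQN predictions survive such renames; the paper reports that they often do not, which is evidence against genuine semantic analysis.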
Problem

Research questions and friction points this paper is trying to address.

Assessing LLMs' true type inference capabilities for Java code snippets.
Evaluating whether LLMs understand code semantics or merely retrieve memorized training data.
Creating unseen datasets to test LLMs' performance on type inference.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Created ThaliaType dataset for unseen code evaluation
Applied semantic-preserving transformations to probe genuine semantic understanding
Used delta debugging to identify the minimal syntax elements sufficient for correct inference
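The delta-debugging step can be sketched with Zeller's classic ddmin algorithm: repeatedly remove chunks of the input while an oracle still reports the behavior of interest. Here, hypothetically, the oracle stands in for "the LLM still outputs the correct FQN" — the stand-in below simply checks whether the token `List` survives, mirroring the paper's finding that models can produce FQNs from very little surrounding context. The tokenization and oracle are illustrative assumptions, not the paper's implementation.

```python
def ddmin(tokens, oracle):
    """Zeller's ddmin: shrink `tokens` to a 1-minimal list for which
    `oracle` still returns True (here: the model still infers the FQN)."""
    n = 2
    while len(tokens) >= 2:
        chunk = max(1, len(tokens) // n)
        subsets = [tokens[i:i + chunk] for i in range(0, len(tokens), chunk)]
        reduced = False
        for i in range(len(subsets)):
            # Try removing subset i: keep only its complement.
            complement = [t for j, s in enumerate(subsets) if j != i for t in s]
            if oracle(complement):
                tokens, n = complement, max(n - 1, 2)
                reduced = True
                break
        if not reduced:
            if n >= len(tokens):
                break  # already at finest granularity
            n = min(len(tokens), n * 2)
    return tokens

# Stand-in oracle: pretend the model keeps answering "java.util.List"
# as long as the bare token "List" is present in the snippet.
tokens = "List < String > names = new ArrayList < > ( ) ;".split()
minimal = ddmin(tokens, lambda ts: "List" in ts)
# → ["List"]: a single token suffices, suggesting recall, not analysis.
```

If the minimized input that still yields a correct FQN contains almost none of the snippet's semantics, the prediction is more plausibly retrieval from training data than static analysis — which is the paper's diagnostic use of this technique.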