🤖 AI Summary
This work addresses hallucination in code summarization, i.e., LLM-generated summaries that deviate from the actual intent of the source code, a critical challenge for trustworthy code understanding. We propose the Entity Tracing Framework (ETF), the first dedicated framework for hallucination detection in this task. ETF integrates static program analysis to extract code entities, LLM-driven entity-summary mapping, and intent-consistency verification. We further construct CodeHallu, the first large-scale (~10K samples), human-verified benchmark designed specifically for detecting hallucinations in code summaries. On this benchmark, ETF achieves an F1 score of 0.73, significantly outperforming existing methods. Its core innovation is interpretable hallucination tracing: precisely localizing inaccurate entities in a summary and identifying their corresponding evidence in the source code. This work establishes a new paradigm and a practical toolkit for assessing the faithfulness of code understanding systems.
📝 Abstract
Recent advancements in large language models (LLMs) have significantly enhanced their ability to understand both natural language and code, driving their use in tasks like natural language-to-code (NL2Code) and code summarization. However, LLMs are prone to hallucination: outputs that stray from the intended meaning. Detecting hallucinations in code summarization is especially difficult due to the complex interplay between programming and natural languages. We introduce a first-of-its-kind dataset of $\sim$10K samples, curated specifically for hallucination detection in code summarization. We further propose a novel Entity Tracing Framework (ETF) that (a) utilizes static program analysis to identify code entities in the program and (b) uses LLMs to map and verify these entities and their intents within generated code summaries. Our experimental analysis demonstrates the effectiveness of the framework, which achieves a 0.73 F1 score. This approach provides an interpretable method for detecting hallucinations by grounding entities, allowing us to evaluate summary accuracy.
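To make the entity-grounding idea concrete, here is a minimal sketch of step (a) for Python code. The paper's actual extractor and verification prompts are not specified in the abstract, so this illustration uses Python's standard `ast` module to collect code entities and then flags backticked identifiers in a summary that have no counterpart in the source; the function names (`extract_entities`, `flag_ungrounded`) are hypothetical, and the real ETF replaces the simple set-difference check with LLM-based mapping and intent verification.

```python
import ast
import re

def extract_entities(source: str) -> set[str]:
    """Static-analysis pass: collect function, class, argument,
    and variable names from the program's AST."""
    entities: set[str] = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            entities.add(node.name)
        elif isinstance(node, ast.arg):
            entities.add(node.arg)
        elif isinstance(node, ast.Name):
            entities.add(node.id)
    return entities

def flag_ungrounded(summary: str, source: str) -> set[str]:
    """Return identifiers mentioned in the summary (in backticks)
    that do not correspond to any entity in the source code."""
    code_entities = extract_entities(source)
    mentioned = set(re.findall(r"`([A-Za-z_]\w*)`", summary))
    return mentioned - code_entities

code = "def add(a, b):\n    total = a + b\n    return total\n"
summary = "The function `add` stores `a + b` in `result` and returns it."
print(flag_ungrounded(summary, code))  # {'result'} is not grounded in the code
```

In the full framework, an LLM would additionally check whether grounded entities are described with the correct intent (e.g., whether `total` really holds the sum), which the abstract attributes to step (b).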