ETF: An Entity Tracing Framework for Hallucination Detection in Code Summaries

📅 2024-10-17
🏛️ Annual Meeting of the Association for Computational Linguistics
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses hallucination in code summarization—i.e., LLM-generated summaries that deviate from the actual intent of source code—a critical challenge for trustworthy code understanding. We propose ETF (Entity Tracking Framework), the first dedicated framework for hallucination detection in this task. ETF integrates static program analysis to extract code entities, LLM-driven entity-summary mapping, and intent consistency verification. We further construct CodeHallu, the first large-scale (10K+ samples), human-verified benchmark specifically designed for code summarization hallucination detection. On our benchmark, ETF achieves an F1 score of 0.73, significantly outperforming existing methods. Its core innovation lies in enabling *interpretable hallucination溯源*: precisely localizing inaccurate entities in summaries and identifying their corresponding evidence in source code. This work establishes a new paradigm and practical toolkit for assessing the faithfulness of code understanding systems.

Technology Category

Application Category

📝 Abstract
Recent advancements in large language models (LLMs) have significantly enhanced their ability to understand both natural language and code, driving their use in tasks like natural language-to-code (NL2Code) and code summarization. However, LLMs are prone to hallucination-outputs that stray from intended meanings. Detecting hallucinations in code summarization is especially difficult due to the complex interplay between programming and natural languages. We introduce a first-of-its-kind dataset with $sim$10K samples, curated specifically for hallucination detection in code summarization. We further propose a novel Entity Tracing Framework (ETF) that a) utilizes static program analysis to identify code entities from the program and b) uses LLMs to map and verify these entities and their intents within generated code summaries. Our experimental analysis demonstrates the effectiveness of the framework, leading to a 0.73 F1 score. This approach provides an interpretable method for detecting hallucinations by grounding entities, allowing us to evaluate summary accuracy.
Problem

Research questions and friction points this paper is trying to address.

Detects hallucinations in code summaries generated by LLMs
Traces entities from summary to code for accuracy verification
Addresses complex interplay between programming and natural languages
Innovation

Methods, ideas, or system contributions that make the work stand out.

Static program analysis identifies code entities
LLMs map and verify entities in summaries
Tracing entities detects hallucinations with 73% F1
🔎 Similar Papers
No similar papers found.
K
Kishan Maharaj
Indian Institute of Technology Bombay, Mumbai, India
Vitobha Munigala
Vitobha Munigala
IBM Research India
S
Srikanth G. Tamilselvam
IBM Research India
Prince Kumar
Prince Kumar
IBM Research Labs
NLPMLDL
Sayandeep Sen
Sayandeep Sen
Researcher, IBM Research India
eBPFMobile and Wireless NetworkingIoTBlockchain
P
Palani Kodeswaran
IBM Research India
Abhijit Mishra
Abhijit Mishra
Assistant Professor of Practice, iSchool, University of Texas at Austin
Machine LearningNatural Language ProcessingCognitive ScienceEye-Tracking
P
Pushpak Bhattacharyya
Indian Institute of Technology Bombay, Mumbai, India