Inside Out: Uncovering How Comment Internalization Steers LLMs for Better or Worse

📅 2025-12-18
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates how large language models (LLMs) internalize source code comments—Javadoc, inline, and block comments—as identifiable and intervenable implicit concepts, and quantifies their causal impact on software engineering (SE) task performance. Method: We propose the first concept-level interpretability framework for SE, grounded in Concept Activation Vectors (CAVs), to construct comment concept representations; causal interventions are performed via activation or suppression in the embedding space. We conduct controlled, cross-task, multi-model experiments. Contribution/Results: We demonstrate that comments constitute independent, manipulable latent concepts. Interventions on these concepts shift task performance by −90% to +67%, in model- and task-dependent ways; in a cross-task probe, code summarization triggers the strongest activation of the comment concept, while code completion triggers the weakest. Our framework enables precise, concept-driven analysis and intervention, establishing a novel paradigm for interpretable, controllable LLM design and optimization in software engineering.

📝 Abstract
While comments are non-functional elements of source code, Large Language Models (LLMs) frequently rely on them to perform Software Engineering (SE) tasks. Yet, where in the model this reliance resides, and how it affects performance, remains poorly understood. We present the first concept-level interpretability study of LLMs in SE, analyzing three tasks - code completion, translation, and refinement - through the lens of internal comment representation. Using Concept Activation Vectors (CAVs), we show that LLMs not only internalize comments as distinct latent concepts but also differentiate between subtypes such as Javadoc, inline, and multiline comments. By systematically activating and deactivating these concepts in the LLMs' embedding space, we observe significant, model-specific, and task-dependent shifts in performance ranging from -90% to +67%. Finally, we conducted a controlled experiment using the same set of code inputs, prompting LLMs to perform 10 distinct SE tasks while measuring the activation of the comment concept within their latent representations. We found that code summarization consistently triggered the strongest activation of comment concepts, whereas code completion elicited the weakest sensitivity. These results open a new direction for building SE tools and models that reason about and manipulate internal concept representations rather than relying solely on surface-level input.
Problem

Research questions and friction points this paper is trying to address.

Analyzes how LLMs internalize code comments as latent concepts.
Investigates comment reliance impact on LLM performance in software engineering.
Explores concept manipulation to improve SE tools beyond surface-level inputs.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using Concept Activation Vectors to analyze comment internalization
Systematically activating and deactivating comment concepts in embedding space
Measuring comment concept activation across multiple software engineering tasks
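The activation/suppression idea above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it uses synthetic stand-in activations and approximates the CAV as the normalized mean-difference direction between hidden states of commented vs. uncommented code (the original CAV method fits a linear classifier); `steer` then shifts a hidden state along that direction to activate or suppress the concept.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: layer activations for code snippets WITH
# comments vs. WITHOUT comments. In the paper's setting these would be
# extracted from an actual LLM layer; here they are synthetic.
d = 64
with_comments = rng.normal(loc=0.5, scale=1.0, size=(200, d))
without_comments = rng.normal(loc=-0.5, scale=1.0, size=(200, d))

def concept_activation_vector(pos, neg):
    """Lightweight CAV approximation: unit vector pointing from the
    negative-class mean toward the positive-class mean."""
    v = pos.mean(axis=0) - neg.mean(axis=0)
    return v / np.linalg.norm(v)

def steer(hidden, cav, alpha):
    """Activate (alpha > 0) or suppress (alpha < 0) the comment concept
    by shifting the hidden state along the CAV direction."""
    return hidden + alpha * cav

cav = concept_activation_vector(with_comments, without_comments)

h = rng.normal(size=d)
activated = steer(h, cav, alpha=3.0)
suppressed = steer(h, cav, alpha=-3.0)

# The hidden state's projection onto the concept direction moves as intended.
print(activated @ cav > h @ cav)   # True
print(suppressed @ cav < h @ cav)  # True
```

Measuring "concept activation" for a task then amounts to comparing projections `h @ cav` of the model's hidden states across task prompts, which is how a cross-task sensitivity ranking like the one reported here could be computed.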
🔎 Similar Papers
2024-10-03 · International Conference on Learning Representations · Citations: 28