Personalized Code Readability Assessment: Are We There Yet?

📅 2025-03-10
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the neglect of developer subjectivity in code readability assessment by systematically investigating personalized readability modeling. We identify a critical bottleneck: up to a third of the evaluations in existing benchmark datasets are self-contradictory, undermining data reliability for individualized modeling. Leveraging minimal per-developer annotations, we comparatively evaluate few-shot learning with large language models (LLMs) against feature-engineering-based classifiers (e.g., Random Forest); the latter performs better. Empirical analysis further reveals low inter-developer agreement and substantial intra-developer annotation variance. Our core contributions are threefold: (1) uncovering the fundamental role of subjectivity in annotation unreliability; (2) establishing high-fidelity, developer-specific annotations as a prerequisite for effective personalized readability modeling; and (3) advocating for the construction of trustworthy, developer-centered readability benchmark datasets.
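For concreteness, below is a minimal sketch of the feature-based, per-developer setup that the summary contrasts with few-shot LLM prompting. The feature set, label scheme, and helper names (`Snippet`, `extract_features`, `train_for_developer`) are illustrative assumptions, not the paper's actual features or pipeline.

```python
# Minimal sketch: one feature-based Random Forest model per developer,
# trained on the few snippets that developer has annotated.
from dataclasses import dataclass
from sklearn.ensemble import RandomForestClassifier

@dataclass
class Snippet:
    code: str
    label: int  # assumed scheme: 0 = unreadable, 1 = neutral, 2 = readable

def extract_features(code: str) -> list[float]:
    """Toy structural features; real approaches use richer, validated metrics."""
    lines = code.splitlines() or [""]
    return [
        len(lines),                                          # snippet length
        sum(len(l) for l in lines) / len(lines),             # mean line length
        sum(l.strip().startswith(("#", "//")) for l in lines) / len(lines),  # comment density
        max(len(l) - len(l.lstrip()) for l in lines),        # max indentation
    ]

def train_for_developer(annotated: list[Snippet]) -> RandomForestClassifier:
    """Fit a separate classifier on one developer's (few) annotations."""
    X = [extract_features(s.code) for s in annotated]
    y = [s.label for s in annotated]
    return RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def predict_readability(model: RandomForestClassifier, code: str) -> int:
    return int(model.predict([extract_features(code)])[0])
```

The point of the sketch is the constraint it encodes: each model sees only the handful of evaluations a single developer is willing to provide, which is exactly the data-scarcity setting the paper studies.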

📝 Abstract
Unreadable code could be a breeding ground for errors. Thus, previous work defined approaches based on machine learning to automatically assess code readability that can warn developers when some code artifacts (e.g., classes) become unreadable. Given datasets of code snippets manually evaluated by several developers in terms of their perceived readability, such approaches (i) establish a snippet-level ground truth, and (ii) train a binary (readable/unreadable) or a ternary (readable/neutral/unreadable) code readability classifier. Given this procedure, all existing approaches neglect the subjectiveness of code readability, i.e., the possibly different developer-specific nuances in code readability perception. In this paper, we aim to understand to what extent it is possible to assess code readability as subjectively perceived by developers through a personalized code readability assessment approach. This problem is significantly more challenging than the snippet-level classification problem: we assume that, in a realistic scenario, a given developer is willing to provide only a few code readability evaluations, and thus less data is available. For this reason, we adopt an LLM with few-shot learning to achieve our goal. Our results, however, show that such an approach achieves worse results than a state-of-the-art feature-based model trained to work at the snippet level. We tried to understand why this happens by looking more closely at the quality of the available code readability datasets and by assessing the consistency of the developers' evaluations. We observed that up to a third of the evaluations are self-contradictory. Our negative results call for new and more reliable code readability datasets.
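As a rough illustration of the few-shot setup the abstract describes, the prompt can be assembled from the handful of snippets the target developer has already rated, followed by the snippet to judge. The label vocabulary, prompt wording, and the `llm_complete` call below are assumptions, not the paper's actual prompt or model interface.

```python
# Hedged sketch: build a few-shot prompt from one developer's prior ratings.
LABELS = {0: "unreadable", 1: "neutral", 2: "readable"}

def build_few_shot_prompt(examples: list[tuple[str, int]], target_code: str) -> str:
    parts = [
        "You assess code readability as perceived by one specific developer.",
        "Here are snippets this developer already rated:",
    ]
    for i, (code, label) in enumerate(examples, start=1):
        parts.append(f"\n--- Example {i} ({LABELS[label]}) ---\n{code}")
    parts.append(
        "\n--- New snippet ---\n"
        f"{target_code}\n"
        "Answer with exactly one word: unreadable, neutral, or readable."
    )
    return "\n".join(parts)

# Usage (the LLM call itself is deliberately left abstract):
# prompt = build_few_shot_prompt(developer_examples, new_snippet)
# answer = llm_complete(prompt)  # hypothetical client function
```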
Problem

Research questions and friction points this paper is trying to address.

Assessing code readability as it is subjectively perceived by individual developers.
Building personalized readability models when each developer provides only a few evaluations.
The need for more reliable datasets, since a sizable share of existing evaluations are self-contradictory (see the consistency-check sketch after this list).
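One plausible way to operationalise the inconsistency problem raised above is to flag cases where the same developer labels the same snippet differently across repeated evaluations. The data layout and the `self_contradiction_rate` helper are assumptions; the paper's exact definition of a self-contradictory evaluation may differ.

```python
# Hedged sketch: per-developer rate of re-rated snippets with conflicting labels.
from collections import defaultdict

def self_contradiction_rate(ratings: list[tuple[str, str, int]]) -> dict[str, float]:
    """ratings: (developer_id, snippet_id, label) triples.
    Returns, per developer, the fraction of snippets they rated more than once
    and labelled inconsistently across those ratings."""
    labels = defaultdict(list)                  # (developer, snippet) -> labels seen
    for dev, snippet, label in ratings:
        labels[(dev, snippet)].append(label)
    rerated, conflicting = defaultdict(int), defaultdict(int)
    for (dev, _), ls in labels.items():
        if len(ls) > 1:                         # snippet evaluated more than once
            rerated[dev] += 1
            if len(set(ls)) > 1:                # ...with differing labels
                conflicting[dev] += 1
    return {dev: conflicting[dev] / rerated[dev] for dev in rerated}
```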
Innovation

Methods, ideas, or system contributions that make the work stand out.

Personalized code readability assessment using an LLM
Few-shot learning for subjective readability evaluation
Analysis of dataset quality and developer consistency (see the agreement sketch after this list)
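As an illustration of the consistency analysis mentioned above, mean pairwise Cohen's kappa over co-rated snippets is one simple way to quantify inter-developer agreement; the paper may rely on a different statistic, and the data layout below is assumed.

```python
# Hedged sketch: average pairwise Cohen's kappa across developers who rated the same snippets.
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

def mean_pairwise_kappa(ratings: dict[str, dict[str, int]]) -> float:
    """ratings: developer_id -> {snippet_id: label}."""
    kappas = []
    for dev_a, dev_b in combinations(ratings, 2):
        shared = sorted(ratings[dev_a].keys() & ratings[dev_b].keys())
        if len(shared) < 2:
            continue  # need at least two co-rated snippets
        a = [ratings[dev_a][s] for s in shared]
        b = [ratings[dev_b][s] for s in shared]
        kappas.append(cohen_kappa_score(a, b))
    return sum(kappas) / len(kappas) if kappas else float("nan")
```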
🔎 Similar Papers
No similar papers found.