CIDRe: A Reference-Free Multi-Aspect Criterion for Code Comment Quality Measurement

📅 2025-05-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing code comment quality evaluation methods (e.g., SIDE, MIDQ, STASIS) rely on reference texts, language-specific assumptions, or single-dimensional metrics, limiting their utility for high-quality dataset construction. To address this, the authors propose CIDRe, a language-agnostic, reference-free, multidimensional evaluation framework that quantifies comment quality along four synergistic dimensions: relevance, informativeness, completeness, and description length. CIDRe integrates four parameter-free metrics: semantic alignment (embedding-based similarity), functional coverage (template matching), structural completeness (information-entropy-driven coverage estimation), and detail sufficiency (length-adaptive normalization). Evaluated on a human-annotated benchmark, CIDRe significantly outperforms baseline methods in cross-entropy evaluation, and GPT-4o-mini-based assessment confirms that models fine-tuned on CIDRe-filtered data show statistically significant quality gains (p < 0.01).
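To make the four-aspect idea concrete, here is a minimal sketch of how such a criterion could be composed. The function names, the per-aspect formulas, and the geometric-mean aggregation are illustrative assumptions, not the paper's exact formulation; in particular, the paper's informativeness and completeness metrics use template matching and information-entropy-driven coverage, which are simplified here to plain ratios.

```python
import math

def cosine(u, v):
    # cosine similarity between two embedding vectors
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def cidre_score(code_emb, comment_emb,
                covered_functions, total_functions,
                present_sections, required_sections,
                length, target_length):
    """Hypothetical combination of four parameter-free quality aspects.

    All sub-scores lie in [0, 1]; the aggregation is an assumption.
    """
    # (1) relevance: code-comment semantic alignment (clamped at 0)
    relevance = max(0.0, cosine(code_emb, comment_emb))
    # (2) informativeness: fraction of code functionality the comment covers
    informativeness = covered_functions / total_functions
    # (3) completeness: fraction of required structure sections present
    completeness = present_sections / required_sections
    # (4) description length: length-adaptive normalization toward a target
    length_score = min(length, target_length) / max(length, target_length)
    # geometric mean so a failure on any one aspect drags the score down
    return (relevance * informativeness * completeness * length_score) ** 0.25
```

A comment that scores perfectly on all four aspects gets 1.0, while any single zero aspect zeroes the whole score, which matches the "synergistic" framing: filtering on such a score keeps only comments that are simultaneously relevant, informative, complete, and appropriately sized.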

📝 Abstract
Effective generation of structured code comments requires robust quality metrics for dataset curation, yet existing approaches (SIDE, MIDQ, STASIS) suffer from limited code-comment analysis. We propose CIDRe, a language-agnostic, reference-free quality criterion combining four synergistic aspects: (1) relevance (code-comment semantic alignment), (2) informativeness (functional coverage), (3) completeness (presence of all structure sections), and (4) description length (detail sufficiency). We validate our criterion on a manually annotated dataset. Experiments demonstrate CIDRe's superiority over existing metrics, achieving improvement in cross-entropy evaluation. When the criterion is applied to filter comments, models fine-tuned on CIDRe-filtered data show statistically significant quality gains in GPT-4o-mini assessments.
Problem

Research questions and friction points this paper is trying to address.

Lack of robust metrics for measuring code comment quality
Existing approaches analyze the code-comment relationship in a limited way
Need for a language-agnostic, reference-free quality criterion
Innovation

Methods, ideas, or system contributions that make the work stand out.

A language-agnostic, reference-free quality criterion
Combines four synergistic code-comment aspects: relevance, informativeness, completeness, and description length
Validated on a manually annotated dataset