MetaLint: Generalizable Idiomatic Code Quality Analysis through Instruction-Following and Easy-to-Hard Generalization

📅 2025-07-15
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Large language models (LLMs) excel at code generation but exhibit limited generalization in code quality analysis—particularly for normative compliance checking—due to challenges in adapting to dynamically evolving programming practices. This work proposes MetaLint, a framework that leverages instruction tuning augmented with synthetically generated linter data to accurately identify and localize semantically violating code idioms. It introduces a “curriculum learning” strategy—progressing from easy to difficult examples—to enable zero-shot transfer to unseen coding standards (e.g., PEP guidelines), without requiring model retraining. This design significantly enhances scalability and adaptability to novel code patterns. Experimental results demonstrate that MetaLint achieves an F-score of 70.37%, recall of 70.43%, and localization accuracy of 26.73% on unseen PEP idiom detection—performance competitive with substantially larger state-of-the-art models.

📝 Abstract
Large Language Models, though successful in code generation, struggle with code quality analysis because they are limited by static training data and cannot easily adapt to evolving best practices. We introduce MetaLint, a new instruction-following framework that formulates code quality analysis as the task of detecting and fixing problematic semantic code fragments or code idioms based on high-level specifications. Unlike conventional approaches that train models on static, rule-based data, MetaLint employs instruction tuning on synthetic linter-generated data to support easy-to-hard generalization, enabling models to adapt to novel or complex code patterns without retraining. To evaluate this, we construct a benchmark of challenging idioms inspired by real-world coding standards such as Python Enhancement Proposals (PEPs) and assess whether MetaLint-trained models reason adaptively or simply memorize. Our results show that MetaLint improves generalization to unseen PEP idioms, achieving a 70.37% F-score on idiom detection with the highest recall (70.43%) among all evaluated models. It also achieves 26.73% on localization, competitive for its 4B parameter size and comparable to larger state-of-the-art models like o3-mini, highlighting its potential for future-proof code quality analysis.
Problem

Research questions and friction points this paper is trying to address.

Improving code quality analysis with adaptable instruction-following models
Detecting and fixing problematic code idioms via high-level specifications
Enhancing generalization to unseen coding standards without retraining
Innovation

Methods, ideas, or system contributions that make the work stand out.

Instruction-following framework for code quality analysis
Easy-to-hard generalization with synthetic linter data
Adapts to novel code patterns without retraining
Authors
Atharva Naik (PhD Student, Carnegie Mellon University)
Lawanya Baghel (Carnegie Mellon University)
Dhakshin Govindarajan (Carnegie Mellon University)
Darsh Agrawal (Carnegie Mellon University)
Daniel Fried (Carnegie Mellon University)
Carolyn Rose (Carnegie Mellon University)