🤖 AI Summary
Large language models (LLMs) excel at code generation but generalize poorly in code quality analysis, particularly normative compliance checking, because they struggle to adapt to dynamically evolving programming practices. This work proposes MetaLint, a framework that applies instruction tuning on synthetically generated linter data to detect and localize code idioms that violate high-level specifications. It adopts an easy-to-hard curriculum strategy, progressing from simple to difficult examples, to enable zero-shot transfer to unseen coding standards (e.g., PEP guidelines) without retraining, which improves scalability and adaptability to novel code patterns. Experimental results show that MetaLint achieves a 70.37% F-score, 70.43% recall, and 26.73% localization accuracy on unseen PEP idiom detection, performance competitive with substantially larger state-of-the-art models.
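To make the data-generation step concrete, the sketch below shows one plausible way linter findings could be converted into instruction-tuning records of the kind described above. It is a minimal, hypothetical sketch, not the paper's released tooling: the `IDIOM_SPECS` table, the record schema, and the use of flake8 (via its real `--select` flag and default `path:row:col: CODE message` output) are assumptions for illustration.

```python
import json
import re
import subprocess

# Hypothetical mapping from linter rule codes to high-level idiom
# specifications (stand-ins for the paper's "high-level specifications").
IDIOM_SPECS = {
    "E711": "Comparisons to None should use `is` / `is not`, never `==` / `!=`.",
    "E722": "Exception handlers should name the exception; avoid bare `except:`.",
}

# Default flake8 output format: path:row:col: CODE message
LINE_RE = re.compile(r"^(?P<path>.+?):(?P<row>\d+):(?P<col>\d+): (?P<code>\w+) (?P<msg>.*)$")


def make_training_examples(path: str) -> list[dict]:
    """Run flake8 on one file and turn each finding into an
    instruction-tuning record: (idiom spec + code) -> (verdict + location)."""
    result = subprocess.run(
        ["flake8", "--select", ",".join(IDIOM_SPECS), path],
        capture_output=True,
        text=True,
    )
    with open(path) as f:
        source_lines = f.read().splitlines()

    examples = []
    for line in result.stdout.splitlines():
        m = LINE_RE.match(line)
        if not m or m["code"] not in IDIOM_SPECS:
            continue
        row = int(m["row"])
        examples.append({
            "instruction": (
                f"Idiom spec: {IDIOM_SPECS[m['code']]}\n"
                "Does the code below violate this spec? "
                "If so, report the offending line."
            ),
            "input": "\n".join(source_lines),
            "output": f"Violation at line {row}: {source_lines[row - 1].strip()}",
        })
    return examples


if __name__ == "__main__":
    # Emits one JSON record per linter finding in example.py.
    print(json.dumps(make_training_examples("example.py"), indent=2))
```

Records like these, graded from simple single-rule cases to harder multi-idiom ones, would support the easy-to-hard (curriculum) training regime the summary describes.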
📝 Abstract
Large Language Models, though successful in code generation, struggle with code quality analysis because they are limited by static training data and cannot easily adapt to evolving best practices. We introduce MetaLint, a new instruction-following framework that formulates code quality analysis as the task of detecting and fixing problematic semantic code fragments or code idioms based on high-level specifications. Unlike conventional approaches that train models on static, rule-based data, MetaLint employs instruction tuning on synthetic linter-generated data to support easy-to-hard generalization, enabling models to adapt to novel or complex code patterns without retraining. To evaluate this, we construct a benchmark of challenging idioms inspired by real-world coding standards such as Python Enhancement Proposals (PEPs) and assess whether MetaLint-trained models reason adaptively or simply memorize. Our results show that MetaLint improves generalization to unseen PEP idioms, achieving a 70.37% F-score on idiom detection with the highest recall (70.43%) among all evaluated models. It also achieves 26.73% accuracy on localization, competitive for its 4B parameter size and comparable to larger state-of-the-art models like o3-mini, highlighting its potential for future-proof code quality analysis.
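To illustrate what "unseen PEP idiom" detection and localization could look like, here is one plausible shape for a benchmark instance, using PEP 585 (built-in generics) as the example idiom. The record schema, the choice of PEP, and the exact-match localization rule are assumptions for illustration; the paper's actual benchmark format and scoring details are not reproduced here.

```python
# Hypothetical benchmark instance: a high-level idiom spec paired with
# code, a binary detection label, and a localization label.
task = {
    "idiom_spec": (
        "PEP 585: prefer built-in generics such as list[int] and "
        "dict[str, int] over typing.List[int] / typing.Dict[str, int]."
    ),
    "code": (
        "from typing import List\n"
        "\n"
        "def mean(xs: List[float]) -> float:\n"
        "    return sum(xs) / len(xs)\n"
    ),
    # Gold labels: detection is a binary verdict; localization is the
    # line number(s) of the violating fragment (the annotation on line 3).
    "violates": True,
    "violation_lines": [3],
}


def localization_match(pred_lines: list[int], gold_lines: list[int]) -> bool:
    """One plausible localization criterion: exact span match."""
    return sorted(pred_lines) == sorted(gold_lines)


# A model that flags line 3 scores a localization hit on this instance;
# flagging any other line (or none) counts as a miss under this rule.
assert localization_match([3], task["violation_lines"])
```

Under a format like this, detection F-score and recall would be computed over the binary verdicts, while the stricter localization metric explains why localization accuracy (26.73%) sits well below detection performance.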