🤖 AI Summary
This work addresses the limited generalization capability of multimodal large language models (MLLMs) in fine-grained visual recognition and their heavy reliance on extensive labeled data. To overcome these challenges, the authors propose Fine-R1, a novel framework that integrates fine-grained chain-of-thought (CoT) supervised fine-tuning with a triplet augmentation strategy. By leveraging both intra-class and inter-class trajectory enhancements, Fine-R1 significantly improves the model's discriminative ability and robustness for both seen and unseen subcategories, even under an extreme 4-shot setting. The approach refines multimodal representations through structured reasoning and contrastive learning signals. Experimental results demonstrate that Fine-R1 outperforms current general-purpose and reasoning-oriented MLLMs on fine-grained recognition tasks and even surpasses specialized contrastive models such as CLIP, highlighting its strong few-shot generalization ability.
📝 Abstract
Any entity in the visual world can be hierarchically grouped based on shared characteristics and mapped to fine-grained sub-categories. While Multi-modal Large Language Models (MLLMs) achieve strong performance on coarse-grained visual tasks, they often struggle with Fine-Grained Visual Recognition (FGVR). Adapting general-purpose MLLMs to FGVR typically requires large amounts of annotated data, which is costly to obtain, leaving a substantial performance gap compared to contrastive CLIP models dedicated to discriminative tasks. Moreover, MLLMs tend to overfit to seen sub-categories and generalize poorly to unseen ones. To address these challenges, we propose Fine-R1, an MLLM tailored for FGVR through an R1-style training framework: (1) Chain-of-Thought Supervised Fine-tuning, where we construct a high-quality FGVR CoT dataset with rationales of "visual analysis, candidate sub-categories, comparison, and prediction", transitioning the model into a strong open-world classifier; and (2) Triplet Augmented Policy Optimization, where Intra-class Augmentation mixes trajectories from anchor and positive images within the same category to improve robustness to intra-class variance, while Inter-class Augmentation maximizes the response distinction conditioned on images across sub-categories to enhance discriminative ability. With only 4-shot training, Fine-R1 outperforms existing general MLLMs, reasoning MLLMs, and even contrastive CLIP models in identifying both seen and unseen sub-categories, showing promise for knowledge-intensive domains where gathering expert annotations for all sub-categories is arduous. Code is available at https://github.com/PKU-ICST-MIPL/FineR1_ICLR2026.
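The two augmentations in step (2) can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the function names, the step-wise trajectory splicing, and the token-overlap measure of response distinction are hypothetical simplifications, not the paper's actual implementation.

```python
import random


def mix_trajectories(anchor_steps, positive_steps, mix_ratio=0.5, seed=0):
    """Intra-class augmentation (hypothetical sketch): splice reasoning
    steps from an anchor trajectory with steps from a positive trajectory
    (same sub-category, different image), so the policy sees trajectories
    that span intra-class variance."""
    rng = random.Random(seed)
    mixed = []
    for a, p in zip(anchor_steps, positive_steps):
        # With probability mix_ratio, take the positive image's step.
        mixed.append(p if rng.random() < mix_ratio else a)
    return mixed


def inter_class_distinction(resp_a, resp_b):
    """Inter-class augmentation signal (hypothetical sketch): a simple
    token-overlap dissimilarity (1 - Jaccard) between responses conditioned
    on images from different sub-categories. A reward that maximizes this
    value pushes the model toward more discriminative responses."""
    ta, tb = set(resp_a.split()), set(resp_b.split())
    if not ta and not tb:
        return 0.0
    return 1.0 - len(ta & tb) / len(ta | tb)
```

For example, `inter_class_distinction("long curved beak", "short straight beak")` is high when the two responses share few tokens and 0.0 when they are identical, so using it as a reward term penalizes generic answers that fail to separate sub-categories.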