🤖 AI Summary
This work addresses the challenges of inaccurate recognition of educational objects and weak generalization in image captioning models within early childhood education, primarily caused by scarce domain-specific data and limitations in training paradigms. To this end, the authors introduce ECAC, a large-scale benchmark dataset comprising over 250,000 real-world images, and propose RSRS, a hybrid training framework that dynamically alternates between reinforcement learning and supervised fine-tuning to mitigate reward collapse and enhance fine-grained description of teaching aids. Furthermore, they establish TTS, the first evaluation protocol tailored for early education contexts. Their multimodal large language model, KinderMM-Cap-3B, achieves a TTS score of 51.06, significantly outperforming existing approaches while maintaining high-quality generation and demonstrating strong potential for educational applications.
📝 Abstract
Image captioning for Early Childhood Education (ECE) is essential for automated activity understanding and educational assessment. However, existing methods face two key challenges. First, the lack of large-scale, domain-specific datasets limits models' ability to capture fine-grained semantic concepts unique to ECE scenarios, resulting in generic and imprecise descriptions. Second, conventional training paradigms are ill-suited to improving descriptions of specialized objects: supervised learning tends to favor high-frequency expressions, while reinforcement learning may suffer from unstable optimization on difficult samples.
To address these limitations, we introduce ECAC, a large-scale benchmark for ECE daily activity image captioning, comprising 256,121 real-world images annotated with expert-level captions and fine-grained labels. ECAC is further equipped with a domain-oriented evaluation protocol, the Teaching Toy Recognition Score (TTS), to explicitly measure professional object naming accuracy. Furthermore, we propose RSRS (Reward-Conditional Switch of Reinforcement Learning and Supervised Fine-Tuning), a hybrid training framework that dynamically alternates between RL and supervised optimization. By rerouting hard samples with zero rewards to supervised fine-tuning, RSRS effectively mitigates advantage collapse and enables stable optimization for fine-grained recognition. Leveraging ECAC and RSRS, we develop KinderMM-Cap-3B, a domain-adapted multimodal large language model. Extensive experiments demonstrate that our model achieves a TTS of 51.06, substantially outperforming state-of-the-art baselines while maintaining superior caption quality, highlighting its potential for specialized educational applications.
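The core idea behind RSRS can be sketched as a per-sample routing rule. In group-relative RL (e.g., GRPO-style training), advantages are computed relative to the group's mean reward, so a group of sampled captions that all receive zero reward yields zero advantages and no learning signal — the "advantage collapse" the abstract describes. A minimal, hypothetical sketch of the switch (function names and the exact all-zero condition are illustrative assumptions, not the paper's implementation):

```python
# Illustrative sketch of a reward-conditional RL/SFT switch in the spirit
# of RSRS. Names (compute_advantages, choose_branch) and the all-zero-reward
# trigger are assumptions for exposition, not the paper's exact rule.

from typing import List

def compute_advantages(rewards: List[float]) -> List[float]:
    """Group-relative advantages: each reward minus the group mean."""
    mean = sum(rewards) / len(rewards)
    return [r - mean for r in rewards]

def choose_branch(rewards: List[float]) -> str:
    """Route a sample's group of rollouts to 'SFT' or 'RL'.

    If every rollout earns zero reward, all advantages are zero and the
    policy gradient vanishes, so the hard sample is rerouted to supervised
    fine-tuning on its expert caption instead.
    """
    if all(r == 0.0 for r in rewards):
        return "SFT"  # zero rewards -> zero advantages -> no RL signal
    return "RL"       # informative rewards -> reinforcement update
```

For example, a group of rollouts with rewards `[0.0, 0.0, 0.0, 0.0]` would be routed to SFT, while `[1.0, 0.0, 0.0, 0.0]` still carries a usable group-relative signal and stays in the RL branch.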