From Imitation to Intuition: Intrinsic Reasoning for Open-Instance Video Classification

📅 2026-03-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenges of large intra-class variation and limited generalization in open-instance video classification by introducing DeepIntuit, a framework that, per the authors, incorporates intrinsic reasoning into the task for the first time. The approach employs a three-stage training paradigm: cold-start supervised alignment, Group Relative Policy Optimization (GRPO) via reinforcement learning, and intuitive calibration. Together these stages shift vision-language models from superficial feature imitation toward semantic, intuition-driven reasoning. Notably, the intuitive calibration stage mitigates distributional mismatch during knowledge transfer by training the classifier on reasoning traces produced by the refined VLM. Experimental results show that DeepIntuit substantially outperforms existing methods, confirming the efficacy of intrinsic reasoning in enhancing generalization and robustness.

📝 Abstract
Conventional video classification models, acting as effective imitators, excel in scenarios with homogeneous data distributions. However, real-world applications often present an open-instance challenge, where intra-class variations are vast and complex, exceeding what existing benchmarks capture. While traditional video encoders struggle to fit these diverse distributions, vision-language models (VLMs) offer superior generalization but have not fully leveraged their reasoning capabilities (intuition) for such tasks. In this paper, we bridge this gap with an intrinsic reasoning framework that evolves open-instance video classification from imitation to intuition. Our approach, DeepIntuit, begins with cold-start supervised alignment to initialize reasoning capability, followed by refinement with Group Relative Policy Optimization (GRPO) to enhance reasoning coherence through reinforcement learning. Crucially, to translate this reasoning into accurate classification, DeepIntuit then introduces an intuitive calibration stage, in which a classifier is trained on the intrinsic reasoning traces generated by the refined VLM, ensuring stable knowledge transfer without distribution mismatch. Extensive experiments demonstrate that for open-instance video classification, DeepIntuit benefits significantly from transcending simple feature imitation and evolving toward intrinsic reasoning. Our project is available at https://bwgzk-keke.github.io/DeepIntuit/.
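The GRPO step mentioned in the abstract scores each sampled response relative to a group of responses for the same input, avoiding a separate learned value critic. A minimal sketch of that group-relative advantage computation follows; the function name, group size, and reward values are illustrative, not taken from the paper.

```python
def grpo_advantages(rewards, eps=1e-8):
    """Normalize each reward against its group's mean and standard deviation.

    GRPO samples a group of responses per prompt, scores each with a reward
    function, and uses the within-group z-score as the advantage signal.
    """
    n = len(rewards)
    mean = sum(rewards) / n
    var = sum((r - mean) ** 2 for r in rewards) / n
    std = var ** 0.5
    # eps guards against division by zero when all rewards in the group tie
    return [(r - mean) / (std + eps) for r in rewards]

# Example: four sampled reasoning traces for one video, scored by a
# hypothetical rule-based reward (1.0 if the predicted class is correct,
# 0.0 otherwise). Correct traces get positive advantage, incorrect negative.
adv = grpo_advantages([1.0, 0.0, 1.0, 0.0])
```

In a full training loop these advantages would weight the policy-gradient update for each trace's tokens; here only the normalization itself is shown.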
Problem

Research questions and friction points this paper is trying to address.

open-instance video classification
intrinsic reasoning
vision-language models
intra-class variation
video classification
Innovation

Methods, ideas, or system contributions that make the work stand out.

intrinsic reasoning
vision-language models
open-instance video classification
Group Relative Policy Optimization
intuitive calibration