🤖 AI Summary
This work addresses the inefficiency and semantic ambiguity arising from the use of generative classifiers in multimodal large language models (MLLMs) for closed-set action understanding. It presents the first systematic comparison between generative and discriminative classification paradigms and introduces a novel "Generation-Assisted Discriminative" (GAD) classifier. GAD incorporates generative modeling solely during fine-tuning to enhance discriminative performance while preserving compatibility with pretraining, achieving superior accuracy and inference efficiency. Evaluated across four tasks on five benchmark datasets, the proposed method attains state-of-the-art results, improving average accuracy by 2.5% on the COIN benchmark and accelerating inference by a factor of three.
📝 Abstract
Multimodal Large Language Models (MLLMs) have advanced open-world action understanding and can be adapted as generative classifiers for closed-set settings by autoregressively generating action labels as text. However, this approach is inefficient, and shared subwords across action labels introduce semantic overlap, leading to ambiguity in generation. In contrast, discriminative classifiers learn task-specific representations with clear decision boundaries, enabling efficient one-step classification without autoregressive decoding. We first compare generative and discriminative classifiers with MLLMs for closed-set action understanding, revealing the superior accuracy and efficiency of the latter. To bridge the performance gap, we design strategies that elevate generative classifiers toward performance comparable with discriminative ones. Furthermore, we show that generative modeling can complement discriminative classifiers, leading to better performance while preserving efficiency. To this end, we propose the Generation-Assisted Discriminative (GAD) classifier for closed-set action understanding. GAD operates only during fine-tuning, preserving full compatibility with MLLM pretraining. Extensive experiments on temporal action understanding benchmarks demonstrate that GAD improves both accuracy and efficiency over generative methods, achieving state-of-the-art results on four tasks across five datasets, including an average 2.5% accuracy gain and 3x faster inference on our largest COIN benchmark.
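The contrast the abstract draws can be illustrated with a minimal, self-contained sketch. This is a toy NumPy stand-in, not the paper's implementation: the random feature vector, vocabulary, and weight matrices are all hypothetical. It shows why a discriminative head classifies in one forward pass while a generative classifier must decode label tokens one at a time, and why labels that share a subword (here, "open") can blur during generation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: two labels share the subword "open",
# illustrating the semantic-overlap issue with generative decoding.
vocab = ["<eos>", "open", "door", "box", "close", "window"]
labels = [["open", "door"], ["open", "box"], ["close", "window"]]

d = 8                                   # toy feature dimension
feat = rng.normal(size=d)               # stand-in for an MLLM video feature

# --- Discriminative classifier: one-step prediction via a linear head ---
W_cls = rng.normal(size=(len(labels), d))
label_idx = int(np.argmax(W_cls @ feat))    # single forward pass, unambiguous

# --- Generative classifier: autoregressive greedy decoding of label tokens ---
W_lm = rng.normal(size=(len(vocab), d + len(vocab)))  # toy "LM head"

def decode(feature, max_steps=4):
    """Greedily emit tokens until <eos>; one pass is needed per token."""
    prev = np.zeros(len(vocab))          # one-hot of the previous token
    out = []
    for _ in range(max_steps):
        logits = W_lm @ np.concatenate([feature, prev])
        t = int(np.argmax(logits))
        if vocab[t] == "<eos>":
            break
        out.append(vocab[t])
        prev = np.zeros(len(vocab))
        prev[t] = 1.0
    return out

generated = decode(feat)
print("discriminative pick:", labels[label_idx])
print("generative output:  ", generated)  # token sequence; may match no label
```

The discriminative path costs one matrix-vector product regardless of label length, whereas the generative path costs one pass per emitted token and can produce a sequence outside the closed label set, which is the inefficiency and ambiguity GAD is designed to avoid at inference time.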