HOI-R1: Exploring the Potential of Multimodal Large Language Models for Human-Object Interaction Detection

๐Ÿ“… 2025-10-07
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Existing HOI detection methods suffer from heavy reliance on visual priors, architectural complexity, and poor deployability. To address these limitations, this paper proposes a purely text-driven multimodal large language model (MLLM) inference frameworkโ€”the first to integrate reinforcement learning (RL) into HOI detection. We design an end-to-end textual HOI reasoning pipeline and a dedicated HOI Detection (HOID) reward function, enabling interaction recognition directly from natural language descriptions without auxiliary detection modules or visual feature extractors. Evaluated on HICO-DET, our method achieves a twofold improvement in accuracy over strong baselines and demonstrates robust generalization. Key contributions are: (1) pioneering the MLLM+RL paradigm for HOI detection; (2) introducing the first textual RL-based inference framework tailored for HOI; and (3) empirically validating the feasibility of a purely language-based approach for fine-grained visual understanding tasks.

๐Ÿ“ Abstract
Recent human-object interaction detection (HOID) methods rely heavily on prior knowledge from VLMs to enhance their interaction recognition capabilities. The training strategies and model architectures needed to connect knowledge from VLMs to the HOI instance representations produced by the object detector are challenging to design, and the resulting frameworks are complex to extend or deploy. Meanwhile, the inherent reasoning abilities of MLLMs for human-object interaction detection remain under-explored. Inspired by the recent success of training MLLMs with reinforcement learning (RL), we propose HOI-R1 and, for the first time, explore the potential of a language model on the HOID task without any additional detection modules. We introduce an HOI reasoning process and HOID reward functions that solve the HOID task purely through text. Results on the HICO-DET dataset show that HOI-R1 achieves 2x the accuracy of the baseline with strong generalization ability. The source code is available at https://github.com/cjw2021/HOI-R1.
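The paper does not spell out its HOID reward functions here, but RL fine-tuning of MLLMs typically combines a format reward (is the answer parseable?) with a task reward (do the predicted interactions match the ground truth?). Below is a minimal, hypothetical sketch of such a reward in that style; the `<answer>` tag schema, the "verb object" line format, and the 0.5/0.5 weighting are illustrative assumptions, not the paper's actual design.

```python
import re

def hoid_reward(completion: str, gt_pairs: set[tuple[str, str]]) -> float:
    """Hypothetical HOID-style reward: format term + interaction-matching term.

    Assumes (for illustration only) that the model answers inside
    <answer>...</answer> with one "verb object" interaction per line.
    """
    match = re.search(r"<answer>(.*?)</answer>", completion, re.DOTALL)
    if match is None:
        return 0.0  # no parseable answer block: zero reward
    format_reward = 0.5  # assumed weight for producing well-formed output

    # Parse predicted (verb, object) pairs from the answer block.
    predicted = set()
    for line in match.group(1).strip().splitlines():
        parts = line.strip().split(maxsplit=1)
        if len(parts) == 2:
            predicted.add((parts[0].lower(), parts[1].lower()))
    if not predicted:
        return format_reward

    # Task term: F1 between predicted and ground-truth interaction pairs.
    tp = len(predicted & gt_pairs)
    if tp == 0:
        return format_reward
    precision = tp / len(predicted)
    recall = tp / len(gt_pairs)
    f1 = 2 * precision * recall / (precision + recall)
    return format_reward + 0.5 * f1
```

A reward like this is what an RL trainer (e.g. a GRPO-style loop) would score each sampled completion with; the key property is that it is computed from the generated text alone, with no detection module in the loop.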
Problem

Research questions and friction points this paper is trying to address.

Explores MLLMs' inherent reasoning for human-object interaction detection
Eliminates need for additional object detection modules
Solves HOID task through pure text-based reasoning process
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses reinforcement learning for MLLM training
Introduces HOI reasoning process and reward functions
Solves HOID task through pure text generation
๐Ÿ”Ž Similar Papers
No similar papers found.
Junwen Chen
Department of Informatics, The University of Electro-Communications, Tokyo, Japan
Peilin Xiong
Department of Informatics, The University of Electro-Communications, Tokyo, Japan
Keiji Yanai
Professor, Department of Informatics, The University of Electro-Communications, Tokyo
Computer Vision · Deep Learning · Image Synthesis and Editing