HOID-R1: Reinforcement Learning for Open-World Human-Object Interaction Detection Reasoning with Multimodal Large Language Model

📅 2025-08-15
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Existing open-vocabulary human-object interaction (HOI) detection methods over-rely on large language models' (LLMs') textual prompting while neglecting their untapped 3D spatial reasoning, which limits interaction-understanding accuracy in AR/VR and robotic applications. To address this, the authors propose the first multimodal reinforcement learning framework for HOI detection that jointly integrates chain-of-thought-guided supervised fine-tuning (SFT) and group relative policy optimization (GRPO). The approach introduces a multi-reward signal mechanism and an "MLLM-as-a-judge" strategy to suppress hallucination, strengthen cross-modal alignment, and improve generalization. The framework performs end-to-end joint optimization of visual, linguistic, and spatial representations, achieves state-of-the-art performance on standard benchmarks including HICO-DET and V-COCO, and demonstrates significant gains in zero-shot transfer to novel, open-world scenarios.
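For readers unfamiliar with GRPO, its core idea is group-relative advantage estimation: each response sampled for a prompt is scored against the other responses in its own group, removing the need for a learned value critic. Below is a minimal sketch of that computation in the standard GRPO formulation (as introduced by DeepSeekMath), not code from HOID-R1 itself:

```python
import numpy as np

def group_relative_advantages(rewards, eps=1e-8):
    """GRPO advantage estimate: each of the G responses sampled for
    one prompt is normalized against its own group's mean and std,
    so no learned value network (critic) is required."""
    r = np.asarray(rewards, dtype=np.float64)
    return (r - r.mean()) / (r.std() + eps)

# e.g. rewards for G=4 sampled HOI answers to one image/question pair
print(group_relative_advantages([0.9, 0.4, 0.4, 0.1]))
```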

๐Ÿ“ Abstract
Understanding and recognizing human-object interaction (HOI) is a pivotal application in AR/VR and robotics. Recent open-vocabulary HOI detection approaches depend exclusively on large language models for richer textual prompts, neglecting their inherent 3D spatial understanding capabilities. To address this shortcoming, we introduce HOID-R1, the first HOI detection framework that integrates chain-of-thought (CoT) guided supervised fine-tuning (SFT) with group relative policy optimization (GRPO) within a reinforcement learning (RL) paradigm. Specifically, we initially apply SFT to imbue the model with essential reasoning capabilities, forcing the model to articulate its thought process in the output. Subsequently, we integrate GRPO to leverage multi-reward signals for policy optimization, thereby enhancing alignment across diverse modalities. To mitigate hallucinations in the CoT reasoning, we introduce an "MLLM-as-a-judge" mechanism that supervises the CoT outputs, further improving generalization. Extensive experiments show that HOID-R1 achieves state-of-the-art performance on HOI detection benchmarks and outperforms existing methods in open-world generalization to novel scenarios.
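The abstract does not reproduce the judge prompt or its scoring scale, so the following is a hypothetical sketch of the general "MLLM-as-a-judge" pattern it describes: a frozen judge model rates how well the chain-of-thought is grounded in the image, and the parsed score becomes a reward term. The `judge_mllm.generate` interface and the 0-to-1 scale are assumptions:

```python
def judge_cot_reward(judge_mllm, image, question, cot_text, answer):
    """Hypothetical MLLM-as-a-judge reward: a frozen judge model rates
    how well the chain-of-thought is grounded in the image and
    consistent with the final answer; the score penalizes hallucinated
    reasoning during policy optimization."""
    prompt = (
        "You are a strict grader. Given the image and question, rate "
        "from 0 to 1 how well the reasoning below is grounded in "
        "visible evidence and supports the answer. Reply with a "
        "single number.\n"
        f"Question: {question}\nReasoning: {cot_text}\nAnswer: {answer}"
    )
    reply = judge_mllm.generate(image=image, prompt=prompt)  # assumed API
    try:
        return min(max(float(reply.strip()), 0.0), 1.0)
    except ValueError:
        return 0.0  # unparsable judge reply earns no reward
```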
Problem

Research questions and friction points this paper is trying to address.

Open-vocabulary HOI methods rely on LLM text prompts and neglect 3D spatial understanding
Alignment across visual, linguistic, and spatial modalities is weak
Chain-of-thought reasoning is prone to hallucination
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates CoT-guided SFT with GRPO in RL
Uses MLLM-as-a-judge to supervise CoT outputs
Enhances alignment across diverse modalities via multi-reward signals (see the sketch after this list)
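Putting these pieces together, GRPO here presumably optimizes against several reward terms at once. A hypothetical sketch of how such a multi-reward signal could be assembled follows; the `<think>`/`<answer>` tagging is the common R1-style output convention assumed here, and the weights are illustrative, not taken from the paper:

```python
import re

# Assumed R1-style output convention: reasoning in <think>, final
# HOI prediction in <answer>; not confirmed by the paper card.
THINK_ANSWER = re.compile(r"<think>.+</think>\s*<answer>.+</answer>", re.S)

def total_reward(output_text, detection_reward, judge_score,
                 weights=(0.2, 0.5, 0.3)):
    """Hypothetical multi-reward signal for GRPO: format compliance,
    detection quality (e.g. box IoU and interaction-label match), and
    the judge's groundedness score, mixed with illustrative weights."""
    format_ok = 1.0 if THINK_ANSWER.search(output_text) else 0.0
    w_fmt, w_det, w_judge = weights
    return w_fmt * format_ok + w_det * detection_reward + w_judge * judge_score
```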
Zhenhao Zhang
ShanghaiTech University
Hanqing Wang
The Hong Kong University of Science and Technology (GZ), Shanghai AI Lab
Xiangyu Zeng
Shanghai AI Lab, Nanjing University
Ziyu Cheng
ShanghaiTech University, University of Wisconsin-Madison
Jiaxin Liu
ShanghaiTech University
Haoyu Yan
ShanghaiTech University
Zhirui Liu
ShanghaiTech University
Kaiyang Ji
MS student of Computer Science, ShanghaiTech University
Computer Vision, Generative Models, Embodied AI
Tianxiang Gui
ShanghaiTech University
Ke Hu
ShanghaiTech University
Kangyi Chen
ShanghaiTech University
Yahao Fan
ShanghaiTech University
Mokai Pan
ShanghaiTech University
Machine Learning, Computer Vision