RegFormer: Transferable Relational Grounding for Efficient Weakly-Supervised Human-Object Interaction Detection

📅 2026-04-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Weakly-supervised human-object interaction (HOI) detection suffers from interference by non-interactive pairs and high computational cost, because instance-level localization signals are absent. To address this, the paper proposes RegFormer, a module that introduces a transferable relation localization mechanism. Operating solely on image-level labels, RegFormer leverages a Transformer architecture to model spatial relationships, allowing image-level inference to be converted into precise instance-level predictions without additional training. By combining locality awareness with weak supervision, the method approaches fully supervised performance on multiple benchmarks while substantially reducing computational overhead, demonstrating both its efficiency and its strong generalization capability.
📝 Abstract
Weakly-supervised Human-Object Interaction (HOI) detection is essential for scalable scene understanding, as it learns interactions from only image-level annotations. Due to the lack of localization signals, prior works typically rely on an external object detector to generate candidate pairs and then infer their interactions through pairwise reasoning. However, this framework often struggles to scale due to the substantial computational cost incurred by enumerating numerous instance pairs. In addition, it suffers from false positives arising from non-interactive combinations, which hinder accurate instance-level HOI reasoning. To address these issues, we introduce Relational Grounding Transformer (RegFormer), a versatile interaction recognition module for efficient and accurate HOI reasoning. Under image-level supervision, RegFormer leverages spatially grounded signals as guidance for the reasoning process and promotes locality-aware interaction learning. By learning localized interaction cues, our module distinguishes humans, objects, and their interactions, enabling direct transfer from image-level interaction reasoning to precise and efficient instance-level reasoning without additional training. Our extensive experiments and analyses demonstrate that RegFormer effectively learns spatial cues for instance-level interaction reasoning, operates with high efficiency, and even achieves performance comparable to fully supervised models. Our code is available at https://github.com/mlvlab/RegFormer.
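The abstract's scalability concern is easy to make concrete: detector-based pipelines must score every candidate (human, object) combination, so the pairwise reasoning cost grows with the product of the two detection counts. The sketch below is purely illustrative (it is not the paper's code, and the function name is hypothetical); it just quantifies the enumeration the abstract describes.

```python
# Illustrative sketch, not RegFormer's implementation: in a detector-based
# weakly-supervised HOI pipeline, every (human, object) combination becomes
# a candidate pair that the interaction module must score.
def num_candidate_pairs(num_humans: int, num_objects: int) -> int:
    """Number of pairs enumerated by exhaustive pairwise reasoning."""
    return num_humans * num_objects

# A moderately crowded scene: 10 detected humans and 20 detected objects
# already yield 200 candidate pairs, most of them non-interactive.
print(num_candidate_pairs(10, 20))
```

This quadratic-style growth, together with the false positives contributed by the many non-interactive pairs, is what RegFormer's localized interaction cues are meant to avoid.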
Problem

Research questions and friction points this paper is trying to address.

Weakly-supervised HOI detection
Instance-level reasoning
False positives
Computational cost
Localization signals
Innovation

Methods, ideas, or system contributions that make the work stand out.

RegFormer
Weakly-supervised HOI detection
Relational grounding
Spatially grounded signals
Instance-level reasoning